Llama 3.2 models are now available to run locally in VSCode, providing a lightweight and secure way to access powerful AI tools directly from your development environment.
With the integration of Ollama and CodeGPT, you can download and install Llama models (1B and 3B) on your machine, making them ready to use for any coding task.
In this guide, I’ll walk you through the installation process, so you can get up and running with Llama 3.2 in VSCode quickly.
Step-by-Step Installation Guide: Llama 3.2 in VSCode
Step 1: Install Visual Studio Code (VSCode)
To start, make sure you have Visual Studio Code installed. If you don’t have it yet, download it from the official Visual Studio Code website and follow the instructions for your operating system.
Step 2: Install CodeGPT Extension
The CodeGPT extension is necessary to integrate AI models like Llama 3.2 into your VSCode environment. Here’s how to get it:
- Open VSCode.
- Click on the Extensions icon on the left sidebar.
- Search for “CodeGPT” in the marketplace.
- Click Install to add the extension to VSCode.
Step 3: Install Ollama
Ollama enables local deployment of language models. To install it:
- Visit the Ollama website.
- Download the appropriate installer for your operating system.
- Follow the installation instructions provided on the site.
- Once installed, verify it by running the following in your terminal:
ollama --version
Expected output (your version number may differ): ollama version is 0.3.12
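Before moving on, it’s worth confirming that the Ollama background service is actually running, since CodeGPT talks to it over a local API. A quick sketch, assuming Ollama’s default port of 11434:

```shell
# Check that the Ollama service is up (it normally starts automatically
# after installation). The local API listens on port 11434 by default.
curl http://localhost:11434/api/version

# If the request fails, start the service manually in a separate terminal:
# ollama serve
```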
Step 4: Download Llama 3.2 Models
With CodeGPT and Ollama installed, you’re ready to download the Llama 3.2 models to your machine:
- Open the CodeGPT panel in VSCode.
- In the CodeGPT panel, navigate to the Model Selection section.
- Select Ollama as the provider and choose a Llama 3.2 model (1B or 3B).
- Click “Download Model” to save the model locally.
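If you prefer the terminal, the same models can also be pulled directly with the Ollama CLI. A sketch, assuming the tag names used in the Ollama model library:

```shell
# Pull the Llama 3.2 models from the Ollama registry.
ollama pull llama3.2:1b
ollama pull llama3.2:3b

# Confirm the models are available locally:
ollama list
```

Models pulled this way are stored in Ollama’s local model directory and show up in CodeGPT’s model list the same as models downloaded through the panel.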
Step 5: Verify Your Setup
Once the model is downloaded, you can verify it’s ready to use:
- Open a code file or project in VSCode.
- In the CodeGPT panel, make sure Llama 3.2 is selected as your active model.
- Begin interacting with the model for code completions, suggestions, or any coding assistance you need.
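You can also verify the model end-to-end from outside VSCode by calling Ollama’s local REST API directly. A minimal Python sketch, assuming the default endpoint at `localhost:11434` and the `llama3.2:1b` tag (swap in `llama3.2:3b` if that’s the model you downloaded):

```python
import json
import urllib.request

# Ollama's default local generate endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(prompt, model="llama3.2:1b"):
    """Build the JSON body for a one-shot (non-streaming) generate call."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask(prompt, model="llama3.2:1b"):
    """Send a prompt to the local Ollama server and return the response text."""
    body = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask("Write a Python one-liner that reverses a string."))
```

If this prints a sensible answer, the whole chain (Ollama service, downloaded model, local API) is working, and any issues you see in VSCode are on the CodeGPT side.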
Ready to Use Llama 3.2 in VSCode
That’s it! With Llama 3.2 running locally through CodeGPT, you’re set up to enjoy a secure, private, and fast AI assistant for your coding tasks — all without relying on external servers or internet connections.
If you found this guide helpful, let us know in the comments, and feel free to reach out if you encounter any issues during the setup!