Ayush kumar for NodeShift

How to Install Llama-3.3 70B Instruct Locally?

The Llama 3.3-70B-Instruct model, developed by Meta, is a powerful multilingual language model designed for text-based interactions. It excels in multilingual dialogue scenarios, offering support for languages like English, German, French, Hindi, and more. Built on an optimized transformer architecture, it uses supervised fine-tuning and reinforcement learning to ensure it aligns with human preferences for helpfulness and safety. With a massive 70 billion parameters and a 128k context length, the model is trained on over 15 trillion tokens of diverse publicly available data, making it one of the most capable tools for generating text and code across a range of applications.

Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Model Release Date:

  • 70B Instruct: December 6, 2024

Model Resource

Hugging Face

Link: https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct

Ollama

Link: https://ollama.com/library/llama3.3

Prerequisites for Installing Llama-3.3 70B Instruct Locally

GPU:

  • Memory (VRAM):

-- Minimum: ~48GB with 4-bit quantization (the quantized weights alone occupy roughly 35–43GB).
-- Recommended: ~140GB total VRAM (e.g., 2x 80GB GPUs) for full precision (FP16 or BF16) and smooth execution.

  • Type: NVIDIA GPUs with Tensor Cores (e.g., A100, H100, RTX A6000, RTX 4090).
  • Multi-GPU Setup (optional for heavy workloads): 2x NVIDIA A100 (80GB) or equivalent for full performance.

Disk Space:

  • Minimum: 50GB of free SSD space to store the quantized model weights and temporary files.
  • Recommended: At least 1TB SSD for storing checkpoints, logs, and datasets.

RAM:

  • Minimum: 24GB RAM.
  • Recommended: 48GB RAM for smoother execution, especially for preprocessing and model loading.

CPU:

  • Minimum: 24 cores (multi-core processor).
  • Recommended: 48 cores for optimal performance and faster data preprocessing.
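The VRAM figures above follow from simple arithmetic on the parameter count. As a rough sketch (weights only; the KV cache and activations add runtime overhead on top):

```python
# Back-of-the-envelope VRAM estimate for a dense 70B-parameter model.
# Weights only -- the KV cache and activations need additional memory.

PARAMS = 70e9  # Llama 3.3 has ~70 billion parameters

def weight_memory_gb(bits_per_param: float) -> float:
    """Approximate weight storage in GB for a given precision."""
    return PARAMS * bits_per_param / 8 / 1e9

for label, bits in [("FP16/BF16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"{label:>10}: ~{weight_memory_gb(bits):.0f} GB")
```

This is why full precision needs a multi-GPU setup (~140GB for FP16 weights alone), while a single 48GB card only becomes viable with 4-bit quantization.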

Step-by-Step Process to Install Llama-3.3 70B Instruct Model Locally

For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.

Step 1: Sign Up and Set Up a NodeShift Cloud Account

Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.

Follow the account setup process and provide the necessary details and information.

Step 2: Create a GPU Node (Virtual Machine)

GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.

Navigate to the menu on the left side, select the GPU Nodes option in the Dashboard, and click the Create GPU Node button to create your first Virtual Machine deployment.

Step 3: Select a Model, Region, and Storage

In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.

We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.

Step 4: Select Authentication Method

There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.

Step 5: Choose an Image

Next, you will need to choose an image for your Virtual Machine. We will deploy Llama-3.3 70B Instruct on an NVIDIA CUDA Virtual Machine. This image includes NVIDIA's CUDA parallel computing platform, which you will need to install and run Llama-3.3 70B Instruct on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.

Step 6: Virtual Machine Successfully Deployed

You will get visual confirmation that your node is up and running.

Step 7: Connect to GPUs using SSH

NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.

Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.

Now open your terminal and paste the proxy SSH IP or direct SSH IP.

Next, if you want to check the GPU details, run the command below:

nvidia-smi

Step 8: Install Ollama

After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.

Website Link: https://ollama.com/

Run the following command to install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

Step 9: Serve Ollama

Run the following command to start the Ollama server so that models can be served and accessed:

ollama serve

Now, “Ollama is running.”
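Once the server reports that it is running, you can sanity-check it from another shell or script. The sketch below assumes Ollama's default port (11434) and uses only the Python standard library:

```python
# Quick sanity check that the Ollama server is reachable.
# Assumes the default Ollama port (11434); adjust if you changed it.
import urllib.request
import urllib.error

def ollama_is_up(base_url: str = "http://127.0.0.1:11434") -> bool:
    """Return True if the Ollama HTTP endpoint answers, False otherwise."""
    try:
        with urllib.request.urlopen(base_url, timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Ollama reachable:", ollama_is_up())
```

The root endpoint of a running Ollama server responds with the plain-text message "Ollama is running", matching what you see in the serve logs.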

Step 10: Select Llama 3.3 70b Instruct Model

Link: https://ollama.com/library/llama3.3:70b

The Llama 3.3 70B Instruct model is available in only one size: 70B. We will run it on our GPU virtual machine.

Step 11: Connect with SSH

Now, open a new tab in the terminal and reconnect using SSH.

Step 12: Check Commands

Run the following command to see a list of available commands:

ollama

Step 13: Check Available Models

Run the following command to check which downloaded models are available:

ollama list

Step 14: Pull Llama 3.3 70b Instruct Model

Run the following command to pull the Llama 3.3 70b instruct model:

ollama pull llama3.3:70b

Step 15: Run Llama 3.3 70b Instruct Model

Now, you can run the model in the terminal using the following command and interact with your model:

ollama run llama3.3:70b

Note: This is a step-by-step guide for interacting with your model. It covers the first method for installing Llama-3.3 70B Instruct locally using Ollama and running it in the terminal.

Option 1: Using Ollama (Terminal)

  • Install Ollama: Download and install the Ollama tool from the official site.
  • Pull the Model: Run the following command to download the desired model:
ollama pull llama3.3:70b
  • Run the Model: Start the model in the terminal:
ollama run llama3.3:70b

Option 2: Using Open WebUI

  • Set Up Open WebUI: Follow our Open WebUI Setup Guide to configure the interface. Ensure all dependencies are installed and the environment is correctly set up.
  • Refresh the Interface: Confirm that the Llama-3.3 70B Instruct model has been downloaded and is visible in the list of available models on the Open WebUI.
  • Select Your Model: Choose the Llama-3.3 70B Instruct model from the list. This model is available in a single size.
  • Start Interaction: Begin using the model by entering your queries in the interface. The Llama-3.3 70B Instruct model is designed for high-quality instruction-based interactions, so input clear and detailed queries for the best results.

Option 3: Using Hugging Face and Jupyter Notebook

  • Follow our Jupyter Notebook Setup Guide to configure your notebook environment. Ensure that all required dependencies are installed and that your Jupyter Notebook is set up correctly for optimal use.

When choosing an image for your Virtual Machine, select the Jupyter Notebook image. This open-source platform allows you to install and run the Llama-3.3-70B-Instruct model on your GPU node. By running the model in a Jupyter Notebook, you can avoid using the terminal, simplifying the process and reducing setup time. This approach lets you configure the model in just a few steps and within minutes.

Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.

You need to agree to share your contact information to access this model. Fill in all the mandatory details, such as your name and email, and then wait for approval from Hugging Face and Meta to gain access and use the model.

You will be granted access to this model within an hour, provided you have filled in all the details correctly.

GPU Recommendations

For Inference Only:

  • 1x NVIDIA A100 (40GB or 80GB) or equivalent.

For Fine-tuning or Heavy Inference:

  • 2x NVIDIA H100 GPUs (80GB each).

Budget Setup:

  • 1x RTX A6000 (48GB VRAM) or 2x RTX 3090 GPUs (24GB each) with quantization.

System Configuration Summary

After setting up the VM and running your Jupyter Notebook, start installing the Llama-3.3-70B-Instruct model.

  • Install Dependencies: Ensure Python 3.9+ is installed. Then, run the following command to install the dependencies:
pip install torch transformers safetensors bitsandbytes
  • Verify GPU: In Jupyter Notebook, check GPU availability:
import torch
print("CUDA available:", torch.cuda.is_available())
print("Device name:", torch.cuda.get_device_name(0))
  • Download the Model: Log in and download the model:
huggingface-cli login --token <your_token>
huggingface-cli download meta-llama/Llama-3.3-70B-Instruct --include "original/*" --local-dir Llama-3.3-70B-Instruct
  • Load the Model: Run in Jupyter Notebook:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Model and tokenizer setup
model_id = "meta-llama/Llama-3.3-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config
)

# Generate text
input_text = "How can I set up Llama 3.3 on my GPU virtual machine?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
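One caveat about the generation snippet above: it feeds raw text to an Instruct model, which is trained on a chat-style markup. In practice you would wrap your messages with `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`; the helper below is a hand-rolled, purely illustrative sketch of what that markup looks like for Llama 3.x models:

```python
# Illustrative only: Llama 3.x Instruct models expect chat markup like this.
# In real code, prefer tokenizer.apply_chat_template(messages,
# add_generation_prompt=True), which produces the canonical format.

def build_llama3_prompt(messages: list[dict]) -> str:
    """Render a messages list into Llama 3-style chat markup (sketch)."""
    out = "<|begin_of_text|>"
    for m in messages:
        out += (f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                f"{m['content']}<|eot_id|>")
    # Trailing assistant header prompts the model to answer as the assistant.
    out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "How can I set up Llama 3.3 on my GPU VM?"},
]
print(build_llama3_prompt(messages))
```

Tokenizing the templated string (instead of the raw prompt) before calling `model.generate` generally yields better instruction-following.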
  • Optional: Use 4-bit precision to reduce memory usage:
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config
)

You’re ready to use the model! 🚀

Conclusion

The Llama-3.3 70B Instruct model is a groundbreaking model from Meta AI that offers advanced capabilities to developers and researchers. By following this step-by-step guide, you can easily install Llama-3.3 70B Instruct on a cloud-based virtual machine using a GPU-powered setup from NodeShift to maximize its potential. NodeShift provides a user-friendly, secure, and cost-effective platform to run your models efficiently. It’s an ideal choice for those exploring Llama-3.3 70B Instruct and other cutting-edge models.
