The Llama 3.2-Vision collection is a powerful set of large-scale models designed to handle both text and image tasks, including visual recognition, image reasoning, captioning, and answering image-based questions. Built on the robust Llama 3.1 architecture, it combines advanced text processing with a vision adapter for seamless integration of visual data. These models excel in performance across key industry benchmarks and support multiple languages for text tasks, with English as the primary language for combined image and text applications. Developers can also fine-tune the models for additional languages while adhering to the licensing and usage guidelines.
Llama 3.2-Vision is designed for visual tasks like recognition, reasoning, captioning, and interactive image-based chat. It supports use cases such as answering questions about images, interpreting document layouts, creating image captions, and linking language to image details, all under its Community License.
Model Overview
| Model | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Data volume | Knowledge cutoff |
|---|---|---|---|---|---|---|---|---|
| Llama 3.2-Vision | (Image, text) pairs | 11B (10.6) | Text + Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 |
| Llama 3.2-Vision | (Image, text) pairs | 90B (88.8) | Text + Image | Text | 128k | Yes | 6B (image, text) pairs | December 2023 |
Prerequisites for deploying Llama 3.2 Vision Model
Minimum requirements:
- GPUs: 1x RTX A6000 (for smooth execution).
- Disk Space: 40 GB free.
- RAM: 48 GB (24 GB also works, but we use 48 GB for smooth execution).
- CPU: 48 cores (24 cores also work, but we use 48 for smooth execution).
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option in the Dashboard, and click the Create GPU Node button to create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
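For quick reference, an SSH key pair can typically be generated on your local machine with the command below (the comment string is just a label, replace it with your own):

ssh-keygen -t ed25519 -C "your_email@example.com"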
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Llama 3.2 Vision on an NVIDIA CUDA Virtual Machine. This proprietary, closed-source parallel computing platform will allow you to install the Llama 3.2 Vision model on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
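A typical connection command looks like the one below; the key path, IP, and port are placeholders for the values shown on your deployment's Connect page:

ssh -i ~/.ssh/id_ed25519 root@<your-node-ip> -p <your-ssh-port>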
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Note: We will be running Llama 3.2 Vision on both Open WebUI and Terminal.
What is Open WebUI?
Open WebUI is a versatile web-based platform designed to integrate smoothly with a range of language processing interfaces, like Ollama and other tools compatible with OpenAI-style APIs. It offers a suite of features that streamline managing and interacting with language models, adaptable for both server and personal use, transforming your setup into an advanced workstation for language tasks.
This platform lets you manage and communicate with language models through an easy-to-use graphical interface, accessible on both desktops and mobile devices. It even incorporates a voice interaction feature, making it as natural as having a conversation.
How to set up Open WebUI?
We have a separate blog post on Open WebUI. In this blog post, we provide a step-by-step and detailed guide on setting up Open WebUI. If you want to run this model on Open WebUI, check out the blog using the link below:
Link: https://nodeshift.com/blog/running-ai-models-with-open-webui
Step 8: Install Ollama
After setting up Open WebUI, it's time to install Ollama from the Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
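Once the script completes, you can optionally confirm the installation by checking the installed version:

ollama --version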
Step 9: Serve Ollama
Run the following command to host Ollama so that it can be accessed and utilized efficiently:
ollama serve
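Note that ollama serve keeps running in the foreground, so open a second SSH session (or run it in the background) for the following steps. Assuming the default configuration, you can confirm the server is reachable on port 11434 with a quick request; it replies with a short status message if it is up:

curl http://localhost:11434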
Now, both Open WebUI and Ollama are running.
Step 10: Pull Llama 3.2 Vision 11b Model
Run the following command to pull and run the Llama 3.2 Vision 11B model:
ollama run llama3.2-vision:11b
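This command downloads the model on first use and then opens an interactive session. Since this is a vision model, you can reference a local image by adding its path to your prompt; for example (the file path below is just a placeholder for an image on your node):

>>> Describe this image. /root/images/sample.jpg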
Step 11: Pull Llama 3.2 Vision 90b Model
Run the following command to pull and run the Llama 3.2 Vision 90B model:
ollama run llama3.2-vision:90b
Step 12: Check Available Models
Run the following command to check if the downloaded models are available:
ollama list
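If you want more detail about a specific downloaded model, such as its parameter count and template, you can optionally inspect it with ollama show, for example:

ollama show llama3.2-vision:11b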
Step 13: Run Models on Open WebUI
Now, refresh the Open WebUI interface to ensure all the downloaded models are visible.
Then, select the model and start interacting with it.
Check the screenshots below for the output.
Note: This is a step-by-step guide for interacting with your model through Open WebUI. If you prefer to avoid setting up Open WebUI, you can simply install Ollama, pull the model, and start interacting with it directly in the terminal.
Option 1: Using Open WebUI
- Set Up Open WebUI: Follow our Open WebUI Setup Guide to configure the interface.
- Refresh the Interface: Ensure all downloaded models, including Llama 3.2 Vision 11B and 90B, are visible.
- Select Your Model: Choose either the 11B or 90B model from the list.
- Start Interaction: Begin using the model by entering your queries in the interface.
Option 2: Using Ollama (Terminal)
- Install Ollama: Download and install the Ollama tool from the official site.
- Pull the Model: Run the following command to download the desired model:
ollama pull llama3.2-vision:11b
Or for the 90B model:
ollama pull llama3.2-vision:90b
- Run the Model: Start the model in the terminal:
ollama run llama3.2-vision:11b
Or for the 90B model:
ollama run llama3.2-vision:90b
- Interact with the Model: Enter your prompts in the terminal to start interacting.
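Beyond the interactive prompt, Ollama also exposes a local REST API, so you can send an image programmatically. Here is a minimal sketch, assuming the server is running on the default localhost:11434 and that sample.jpg is a placeholder for a local image of your own:

# Base64-encode a local image (sample.jpg is a placeholder file name)
IMG=$(base64 -w0 sample.jpg)
# Send the prompt and image to the Ollama generate endpoint
curl http://localhost:11434/api/generate -d "{
  \"model\": \"llama3.2-vision:11b\",
  \"prompt\": \"Describe this image.\",
  \"images\": [\"$IMG\"],
  \"stream\": false
}"

The response is returned as JSON; swap the model name for llama3.2-vision:90b if you pulled the larger variant.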
Conclusion
The Llama 3.2 Vision model is a groundbreaking model from Meta AI that offers advanced capabilities to developers and researchers. By following this step-by-step guide, you can easily deploy Llama 3.2 Vision on a cloud-based virtual machine using a GPU-powered setup from NodeShift to maximize its potential. NodeShift provides a user-friendly, secure, and cost-effective platform to run your models efficiently. It’s an ideal choice for those exploring Llama 3.2 Vision and other cutting-edge models.