Granite Vision 3.2-2B is a compact and efficient vision-language model designed for document analysis and image-based reasoning. It specializes in extracting structured information from tables, charts, diagrams, infographics, and scanned documents, making it ideal for business automation, OCR tasks, and data extraction. Built on Granite’s language model foundation, it integrates a SigLIP vision encoder and a multi-layer transformer architecture, ensuring high accuracy in document question-answering, visual classification, and retrieval-augmented generation (RAG). Optimized for enterprise applications, Granite Vision 3.2-2B provides precise and reliable outputs for processing text and visual data in a unified manner.
Prerequisites for Installing Granite Vision 3.2 2B Locally
Ensure you have the following setup before running the model:
- Ubuntu 22.04+ or Debian-based OS (for the GPU VM)
- Python 3.10+
- NVIDIA GPU: A100 80GB, H100 80GB, or RTX A6000 (we use an RTX A6000 in this tutorial for smooth execution)
- Disk Space: 30 GB free
- RAM: At least 16 GB
- CPU: 16 cores
- CUDA
- PyTorch
- Transformers
- Jupyter Notebook installed and running
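Once your environment is up, you can quickly verify several of these prerequisites from a single notebook cell. The snippet below is a minimal sketch using only the Python standard library; the RAM check reads /proc/meminfo, so it works on Linux VMs only.
import os
import shutil
import sys
# Python version (3.10+ expected)
print("Python:", sys.version.split()[0])
# CPU cores (16 recommended)
print("CPU cores:", os.cpu_count())
# Free disk space on the root filesystem (30 GB+ recommended)
total, used, free = shutil.disk_usage("/")
print(f"Free disk: {free / 1e9:.1f} GB")
# Total RAM (16 GB+ recommended; reads /proc/meminfo, so Linux only)
with open("/proc/meminfo") as f:
    mem_kb = int(f.readline().split()[1])
print(f"RAM: {mem_kb / 1e6:.1f} GB (approx.)")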
Model Resources
Hugging Face
Link: https://huggingface.co/ibm-granite/granite-vision-3.2-2b
Ollama
Link: https://ollama.com/library/granite3.2-vision:2b
Step-by-Step Process to Install Granite Vision 3.2 2B Model Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option in the Dashboard, click the Create GPU Node button, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Granite Vision 3.2 2B on a Jupyter Virtual Machine; this open-source platform lets you install and run the model on your GPU node. By running the model in a Jupyter Notebook, we avoid using the terminal, which simplifies the process and reduces setup time, so you can configure the model in just a few steps and minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to Jupyter Notebook
Once your GPU VM deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ Button in the top right corner.
After clicking the ‘Connect’ button, you can view the Jupyter Notebook.
Now open a Python 3 (ipykernel) Notebook.
Next, if you want to check the GPU details, run the following command in a Jupyter Notebook cell:
!nvidia-smi
Step 8: Install Dependencies in Jupyter Notebook
Run the following command in a Jupyter Notebook cell to install the dependencies:
!pip install torch torchvision torchaudio transformers huggingface_hub
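To confirm the installation, you can print the installed versions in a new cell. This is only a quick sanity check; your exact version numbers will differ.
import torch
import transformers
import huggingface_hub
# Print library versions to confirm the installs are visible to the kernel
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("huggingface_hub:", huggingface_hub.__version__)
print("CUDA available:", torch.cuda.is_available())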
Step 9: Load the Model and Processor
Run the following Python script to load the model:
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
# Check if GPU is available
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")
# Model Path
model_path = "ibm-granite/granite-vision-3.2-2b"
# Load Model and Processor
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForVision2Seq.from_pretrained(model_path).to(device)
print("Model and Processor loaded successfully!")
Expected Output:
Using device: cuda
Model and Processor loaded successfully!
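If you are running on a GPU with less VRAM, you can optionally load the weights in half precision. The variant below is a minimal sketch using the standard torch_dtype argument of from_pretrained; on GPUs without bfloat16 support, torch.float16 can be used instead.
import torch
from transformers import AutoModelForVision2Seq
# Optional: load in bfloat16 to roughly halve GPU memory usage
model = AutoModelForVision2Seq.from_pretrained(
    "ibm-granite/granite-vision-3.2-2b",
    torch_dtype=torch.bfloat16,
).to("cuda")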
Step 10: Run Inference with an Image
Now, test the model with an image.
Option 1: Use Your Own Local Image
Modify your code to load an image from your local system:
from PIL import Image
# Load a local image (Change the path to your image file)
img_path = "/path/to/your/image.png"
image = Image.open(img_path).convert("RGB")
print("Image loaded successfully!")
Step 11: Continue Your Model Execution
Run the following cell to build the prompt and generate a response:
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": img_path},  # Use your corrected image path
            {"type": "text", "text": "What is the highest scoring model on ChartQA and what is its score?"},
        ],
    },
]
# Process the image and text
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(device)
# Generate response
output = model.generate(**inputs, max_new_tokens=100)
# Print model output
print("Generated Response:", processor.decode(output[0], skip_special_tokens=True))
Note: This is a step-by-step guide for interacting with your model. It covers the first method for installing Granite Vision 3.2 2B Multimodal locally using Jupyter Notebook and Transformers.
Option 2: Using Ollama (Terminal)
- Install Ollama: Download and install the Ollama tool from the official site.
- Pull the Model: Run the following command to download the desired model:
ollama pull granite3.2-vision:2b
- Run the Model: Start the model in the terminal:
ollama run granite3.2-vision:2b
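Once the model is running under Ollama, you can also query it programmatically. The sketch below calls Ollama's local REST API on its default port (11434) with a base64-encoded image; the image path and prompt are placeholders you should replace with your own.
import base64
import requests
# Assumption: Ollama is serving locally on its default port 11434
with open("/path/to/your/image.png", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf-8")
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "granite3.2-vision:2b",
        "prompt": "Describe the contents of this image.",
        "images": [img_b64],
        "stream": False,
    },
)
print(response.json()["response"])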
Option 3: Using Open WebUI
- Set Up Open WebUI:
Follow our Open WebUI Setup Guide to configure the interface. Ensure all dependencies are installed and the environment is correctly set up.
- Refresh the Interface:
Confirm that the Granite Vision 3.2 2B Multimodal has been downloaded and is visible in the list of available models on the Open WebUI.
- Select Your Model:
Choose the Granite Vision 3.2 2B Multimodal model from the list. This model is available in a single size.
- Start Interaction:
Begin using the model by entering your queries in the interface.
Conclusion
Granite Vision 3.2-2B is a highly efficient vision-language model built for structured document analysis, OCR tasks, and multimodal data extraction. With its ability to process text and images together, it provides precise insights from tables, charts, and infographics, making it a valuable tool for enterprise automation and data-driven workflows.
By following this guide, users can easily install and run the model on Jupyter Notebook, Open WebUI, or Ollama, ensuring flexibility across different platforms. Whether for business intelligence, document processing, or advanced visual understanding, Granite Vision 3.2-2B offers a reliable and scalable solution for integrating vision and language in a unified system.