InternVL3-1B is a compact yet capable multimodal model built to handle both text and visual inputs. It combines strong image comprehension with solid language skills, making it well suited to tasks like image captioning, visual storytelling, diagram explanation, GUI navigation, and even video analysis. At its core, it pairs the InternViT vision encoder with a language model based on Qwen2.5, tied together through a native multimodal pretraining approach that treats visual and textual data as equals. Whether you’re working with a single image or a full video, InternVL3-1B picks up on context and fine detail and delivers coherent, well-grounded responses.
Model Resource
Hugging Face
Link: https://huggingface.co/OpenGVLab/InternVL3-1B
Recommended GPU Configuration for InternVL3-1B
| Component | Minimum Requirement | Recommended for Smooth Performance |
|---|---|---|
| GPU | 1× NVIDIA A100 / H100 / RTX A6000 (≥ 40GB VRAM) | ✅ 1× A100 40GB / H100 80GB |
| vCPU | 16 cores | ✅ 24+ vCPUs |
| RAM | 32 GB | ✅ 48–64 GB |
| Storage | 60 GB SSD minimum (to hold model weights + data) | ✅ 120 GB SSD (for models, videos, images, cache) |
| CUDA Version | 12.0 or later | ✅ 12.1+ |
| PyTorch Version | ≥ 2.1.0 with bfloat16 support | ✅ 2.2.0 |
| Transformers | transformers >= 4.37.2 | ✅ Latest |
| Python | Python 3.10 or higher | ✅ Python 3.11 |
InternVL3 Family
The InternVL3 series spans multiple model sizes, from InternVL3-1B up to InternVL3-78B, each pairing an InternViT vision encoder with a Qwen2.5 (or InternLM3) language model; the full overview is available on the OpenGVLab Hugging Face page.
Step-by-Step Process to Install InternVL3 Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button on the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x RTXA6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy InternVL3 on a Jupyter Virtual Machine. This open-source platform lets you install and run InternVL3 directly on your GPU node. By running the model from a Jupyter Notebook, we avoid using the terminal, which simplifies the process and reduces setup time, letting you configure the model in just a few steps and a few minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to Jupyter Notebook
Once your GPU VM deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ Button in the top right corner.
After clicking the ‘Connect’ button, you can view the Jupyter Notebook.
Now open a Python 3 (ipykernel) notebook.
Step 8: Check GPU & CUDA Availability
Run the following commands to check GPU & CUDA availability:
!nvidia-smi
!nvcc --version
Step 9: Install Required Dependencies
Run the following commands to install the required dependencies (transformers >= 4.37.2, torch >= 2.1, accelerate, decord, Pillow, safetensors, and so on):
!pip install --upgrade pip
!pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
!pip install "transformers>=4.37.2" decord pillow accelerate safetensors
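Optionally, once the install finishes, run a quick sanity check in a new cell to confirm that PyTorch sees the GPU and supports bfloat16. A minimal sketch; the reported device name will depend on the GPU you selected:
import torch

print(torch.__version__)                # should be >= 2.1
print(torch.cuda.is_available())        # expect True on the GPU node
print(torch.cuda.get_device_name(0))    # e.g. an RTX A6000 or A100, depending on your selection
print(torch.cuda.is_bf16_supported())   # bfloat16 support, needed for torch_dtype=torch.bfloat16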
Step 10: Install Einops & Timm
Run the following command to install einops & timm:
!pip install einops timm
Step 11: Load the Model
Run the following code to load the model:
import torch
from transformers import AutoTokenizer, AutoModel

model_path = "OpenGVLab/InternVL3-1B"

model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True
).eval().cuda()

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
Step 12: Install FlashAttention2
Run the following command to install FlashAttention2:
!pip install flash-attn --no-build-isolation
This is optional, but speeds up inference.
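If flash-attn fails to build on your image (it needs a compatible CUDA toolchain), you can skip it and simply reload the model with FlashAttention disabled. A minimal sketch reusing the loading code from Step 11, with only use_flash_attn changed:
# Fallback: reload the model without FlashAttention (slower attention, same outputs)
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=False,   # disable FlashAttention if flash-attn is unavailable
    trust_remote_code=True
).eval().cuda()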
Step 13: Run a Text-only Chat
Run the following code for text-only chat:
question = "Hello, who are you?"
generation_config = dict(max_new_tokens=512, do_sample=True)
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f"User: {question}\nAssistant: {response}")
Step 14: Run an Image Chat
Run the following code for image chat:
from PIL import Image
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode
import torch
# Load and transform image
def build_transform(input_size=448):
    return T.Compose([
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
    ])
img_path = "./image.jpg" # replace with your image path
img = Image.open(img_path).convert("RGB")
transform = build_transform()
pixel_values = transform(img).unsqueeze(0).to(torch.bfloat16).cuda()
# Chat with image
question = "<image>\nDescribe the image."
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f"Assistant: {response}")
Step 15: Install Decord
Run the following command to install decord:
!pip install decord
Step 16: Import Libraries
Run the following code to import the libraries:
import math
import numpy as np
import torch
from PIL import Image
from decord import VideoReader, cpu
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode
# Transform builder
def build_transform(input_size=448):
    IMAGENET_MEAN = (0.485, 0.456, 0.406)
    IMAGENET_STD = (0.229, 0.224, 0.225)
    return T.Compose([
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD)
    ])
Step 17: Define Helper Functions
Run the following code to define helper functions:
# Calculate frame indices
def get_index(bound, fps, max_frame, first_idx=0, num_segments=8):
    if bound:
        start, end = bound[0], bound[1]
    else:
        start, end = -100000, 100000
    start_idx = max(first_idx, round(start * fps))
    end_idx = min(round(end * fps), max_frame)
    seg_size = float(end_idx - start_idx) / num_segments
    frame_indices = np.array([
        int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
        for idx in range(num_segments)
    ])
    return frame_indices
# Load & preprocess video
def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=8):
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    max_frame = len(vr) - 1
    fps = float(vr.get_avg_fps())
    pixel_values_list, num_patches_list = [], []
    transform = build_transform(input_size=input_size)
    frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
    for frame_index in frame_indices:
        img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
        img = img.resize((input_size, input_size))  # Resize to 448x448
        pixel_values = transform(img)
        pixel_values_list.append(pixel_values)
        num_patches_list.append(1)
    pixel_values = torch.stack(pixel_values_list)  # Shape: [frames, 3, 448, 448]
    return pixel_values, num_patches_list
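Before moving on, you can sanity-check the helper on any short clip already on the VM. A minimal sketch (the file path is just a placeholder):
# Quick shape check of load_video output (placeholder path, replace with your own clip)
pixel_values, num_patches_list = load_video("./sample.mp4", num_segments=8)
print(pixel_values.shape)    # expected: torch.Size([8, 3, 448, 448])
print(num_patches_list)      # expected: [1, 1, 1, 1, 1, 1, 1, 1]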
Step 18: Install ffmpeg
Run the following command to install ffmpeg:
!apt-get update && apt-get install -y ffmpeg
Step 19: Install OpenCV
Run the following command to install OpenCV:
!pip install opencv-python-headless
Step 20: Load the Model
Run the following code to load the model (you can skip this cell if the model and tokenizer from Step 11 are still loaded in your notebook session):
model_path = "OpenGVLab/InternVL3-1B"
model = AutoModel.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
Step 21: Run Inference for Video
Run the following code for video inference:
# Load your video (uses the load_video helper defined in Step 17)
video_path = "./Tak_Server_Reencoded.mp4"
pixel_values, num_patches_list = load_video(video_path, num_segments=8)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
# Prompt
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + "What is happening in this video?"
# Run first round
generation_config = dict(max_new_tokens=512, do_sample=True)
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=None, return_history=True)
print(f"User: {question}\nAssistant: {response}")
# Follow-up
followup_question = "Can you describe it in more detail?"
response, history = model.chat(tokenizer, pixel_values, followup_question, generation_config,
num_patches_list=num_patches_list, history=history, return_history=True)
print(f"User: {followup_question}\nAssistant: {response}")
Step 22: Install Gradio
Run the following command to install gradio:
!pip install gradio
Step 23: Import Libraries
Run the following code to import the libraries:
import gradio as gr
import cv2
import numpy as np
from PIL import Image
import torch
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode
# --- Transform for InternVL3 ---
def build_transform(input_size=448):
    IMAGENET_MEAN = (0.485, 0.456, 0.406)
    IMAGENET_STD = (0.229, 0.224, 0.225)
    return T.Compose([
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD)
    ])
# --- Extract frames using OpenCV ---
def extract_frames(video_path, num_frames=8, input_size=448):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    idxs = np.linspace(0, total - 1, num_frames, dtype=int)
    transform = build_transform(input_size)
    pixel_values_list = []
    for idx in idxs:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ret, frame = cap.read()
        if not ret:
            continue
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        pil_img = Image.fromarray(frame_rgb)
        tensor_img = transform(pil_img)
        pixel_values_list.append(tensor_img)
    cap.release()
    pixel_values = torch.stack(pixel_values_list)
    return pixel_values, [1] * len(pixel_values_list)
# --- InternVL3 Video Analysis Function ---
def analyze_video(video_file):
    pixel_values, num_patches_list = extract_frames(video_file, num_frames=8)
    pixel_values = pixel_values.to(torch.bfloat16).cuda()
    video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
    question = video_prefix + "Describe what's happening in this video."
    generation_config = dict(max_new_tokens=512, do_sample=True)
    response, _ = model.chat(tokenizer, pixel_values, question, generation_config,
                             num_patches_list=num_patches_list, history=None, return_history=True)
    return response
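Before wiring the function into Gradio, you can optionally call it directly on a local clip to confirm the whole pipeline (frame extraction plus model.chat) works end to end; the path below reuses the video from Step 21:
# Optional direct test of the analysis function before launching the web UI
print(analyze_video("./Tak_Server_Reencoded.mp4"))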
Step 24: Launch Gradio Interface
Run the following code to launch the Gradio interface:
gr.Interface(
fn=analyze_video,
inputs=gr.Video(label="Upload your MP4 video"),
outputs=gr.Textbox(label="InternVL3 Response"),
title="InternVL3-1B - Video Understanding Demo",
description="Upload a short MP4 video to see what InternVL3-1B understands from it. Uses 8 key frames for reasoning."
).launch(share=True)
Step 25: Access the Gradio Web App
Access the Gradio web app at the following URLs:
* Running on local URL: http://127.0.0.1:7860
* Running on public URL: https://391d0e9e0e5a5c9dba.gradio.live
Conclusion
InternVL3-1B makes working with both images and text feel natural and effortless. From single-image analysis to multi-turn video conversations, its capabilities unlock a whole new level of understanding across formats. And when deployed on a GPU-powered virtual machine through NodeShift, the setup becomes straightforward and hassle-free. Whether you’re building something for research, development, or creative exploration, this model gives you the tools to bring your ideas to life—seamlessly and at scale.