Llama 4 Scout is a powerful large-scale language model built to handle both words and visuals together. Designed with a mixture-of-experts architecture that activates only the expert modules needed for each input, it delivers sharp, context-rich answers and creative responses. With the ability to understand multiple languages, interpret images, and generate thoughtful replies, it feels more like a companion that can read, see, and respond—all at once.
Whether you’re building interactive apps, powering research, or exploring new ideas, Scout gives you the tools to bring your thoughts to life in smarter ways. It’s fast, accurate, and trained to follow your lead—no complex setup required.
| Model Name | Training Data | Params | Input modalities | Output modalities | Context length | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|---|
| Llama 4 Scout (17Bx16E) | A mix of publicly available, licensed data and information from Meta’s products and services. This includes publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI. Learn more in Meta’s Privacy Center. | 17B (activated) / 109B (total) | Multilingual text and image | Multilingual text and code | 10M | ~40T | August 2024 |
| Llama 4 Maverick (17Bx128E) | Same as Scout | 17B (activated) / 400B (total) | Multilingual text and image | Multilingual text and code | 1M | ~22T | August 2024 |
Model Resource
Hugging Face
Link: https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct
Prerequisites for GPU Configuration
Before you get started with deploying the Llama-4-Scout-17B-16E-Instruct model on a GPU-powered Virtual Machine, make sure you meet the following hardware and setup requirements:
Minimum Recommended GPU Specs:
- GPU: At least 1× A100, H100, or H200. For better performance, use 2× H200 (used in this guide).
- VRAM: Minimum 80 GB (required to fit all model shards)
- vCPU: 24 cores or more
- RAM: At least 48 GB
- Storage: Minimum 350 GB SSD recommended
- CUDA Version: CUDA 12.1 or later; works best with the latest PyTorch and CUDA Toolkit.
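The 80 GB figure follows directly from the size of the weights: 109B total parameters at bfloat16 (2 bytes each) is roughly 218 GB, which is why this guide shards across 2× H200 (141 GB each). A rough back-of-envelope sketch (the overhead for activations and the KV cache is a ballpark assumption, not a measured number):

```python
# Back-of-envelope VRAM estimate for Llama 4 Scout (109B total parameters).
# Rule of thumb: bytes = params × bytes-per-param, plus ~10–20% overhead
# for activations and the KV cache (the overhead figure is an assumption).

def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB (decimal)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

bf16 = weight_gb(109, 2)    # bfloat16: 2 bytes per parameter
int4 = weight_gb(109, 0.5)  # 4-bit quantized: 0.5 bytes per parameter

print(f"bf16 weights:  ~{bf16:.0f} GB")  # ~218 GB -> needs 2x H200 (141 GB each)
print(f"4-bit weights: ~{int4:.0f} GB")  # ~55 GB  -> fits a single 80 GB GPU
```

This is why a single 80 GB card only works with quantization, while the full-precision checkpoint needs two large GPUs sharded via `device_map="auto"`.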
Step-by-Step Process to Install Llama-4-Scout-17B-16E-Instruct Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Access model from Hugging Face
Link: https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct
You need to agree to share your contact information to access this model. Fill in all the mandatory details, such as your name and email, and then wait for approval from Hugging Face and Meta to gain access and use the model.
Access is typically granted within an hour, provided you have filled in all the details correctly.
Step 2: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 3: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button on the Dashboard, and deploy your first Virtual Machine.
Step 4: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 2× H200 GPUs for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 5: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 6: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Llama-4-Scout-17B-16E-Instruct on a Jupyter Virtual Machine. This open-source platform will allow you to install and run Llama-4-Scout-17B-16E-Instruct on your GPU node. By running the model in a Jupyter Notebook, we avoid using the terminal, which simplifies the process and reduces setup time: you can configure the model in just a few steps and a few minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 7: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 8: Connect to Jupyter Notebook
Once your GPU VM deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ Button in the top right corner.
After clicking the ‘Connect’ button, you can view the Jupyter Notebook.
Now open a Python 3 (ipykernel) Notebook.
Step 9: Authenticate Hugging Face and Install HuggingFace Hub
Run the following commands to authenticate with Hugging Face and install the huggingface_hub library:
!pip install huggingface_hub --upgrade
from huggingface_hub import login
login()
Paste your Hugging Face token when prompted.
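If you prefer not to paste the token interactively (for example, when re-running the notebook), you can read it from an environment variable instead. A sketch, assuming you have exported your token as `HF_TOKEN` (the variable name is a common convention, not required by the library):

```python
import os

# Hypothetical non-interactive variant: read the token from the HF_TOKEN
# environment variable instead of pasting it at the login() prompt.
def resolve_token():
    return os.environ.get("HF_TOKEN")

token = resolve_token()
if token:
    from huggingface_hub import login
    login(token=token)  # logs in without a prompt
else:
    print("HF_TOKEN not set; use the interactive login() above.")
```

This keeps the token out of the notebook itself, which matters if you share or commit it.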
Step 10: Install Required Libraries
Run the following commands to install the required libraries:
!pip install -U torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
!pip install -U transformers accelerate safetensors
Run the following command to install auto-gptq and bitsandbytes for memory-saving quantization:
!pip install auto-gptq bitsandbytes
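bitsandbytes is optional for the 2× H200 setup used here, but if you only have a single 80 GB GPU you can load the model in 4-bit instead of bfloat16. A configuration sketch (the exact settings are illustrative, not tuned):

```python
import torch
from transformers import BitsAndBytesConfig

# Illustrative 4-bit quantization config -- trades some quality for memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
    bnb_4bit_quant_type="nf4",              # normal-float 4-bit quantization
)

# In Step 13, pass quantization_config=bnb_config to from_pretrained
# instead of torch_dtype=torch.bfloat16.
```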
Step 11: Verify GPU is available to PyTorch
Run the following command to verify GPU is available to PyTorch:
import torch
torch.cuda.is_available()
You should see `True`.
Next, run the following command to check GPU details:
!nvidia-smi
Step 12: Install hf_xet
Run the following command to install hf_xet:
!pip install "huggingface_hub[hf_xet]"
Step 13: Load the Model
Run the following code to load the model:
from transformers import AutoProcessor, Llama4ForConditionalGeneration
import torch
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    attn_implementation="eager",  # avoid tracing issues with flex attention
    device_map="auto",            # auto-maps across GPU/CPU
    torch_dtype=torch.bfloat16    # use float16 if no bfloat16 support
)
Step 14: Try a Basic Chat Prompt
Test the model with a text or multimodal prompt:
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What makes cloud servers ideal for modern development?"}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=300)
response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0]
print(response)
Then in the next cell, run your image + text prompt:
url1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
url2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png"
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": url1},
            {"type": "image", "url": url2},
            {"type": "text", "text": "Describe how these two images are similar and how they are different."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0]
print(response)
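The example above fetches images by URL; to use local files instead, you can pass PIL images with the `"image"` key rather than `"url"`. A sketch (the filename in the comment is a hypothetical placeholder; a generated stand-in image is used so the snippet runs on its own):

```python
from PIL import Image

# Stand-in image; in practice you would use: img = Image.open("photo.jpg")
img = Image.new("RGB", (64, 64), "white")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": img},  # local/PIL images use "image", not "url"
            {"type": "text", "text": "Describe this image."}
        ]
    }
]
# Then call processor.apply_chat_template(...) and model.generate(...) as above.
```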
Step 15: Install Gradio
Run the following command to install gradio:
!pip install gradio --upgrade
Step 16: Expose Llama-4-Scout with Gradio UI
Run the following code to expose Llama-4-Scout with Gradio UI:
import gradio as gr
from transformers import AutoProcessor, Llama4ForConditionalGeneration
import torch
from PIL import Image
import requests
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
# Load model & processor
processor = AutoProcessor.from_pretrained(model_id)

model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    attn_implementation="eager",  # 🛠️ Use eager to avoid flex errors
    device_map="auto",
    torch_dtype=torch.bfloat16
)

# 🔁 Inference function
def scout_infer(img1, img2, prompt):
    content = []
    if img1 is not None:
        content.append({"type": "image", "image": img1})
    if img2 is not None:
        content.append({"type": "image", "image": img2})
    if prompt:
        content.append({"type": "text", "text": prompt})

    messages = [{"role": "user", "content": content}]

    inputs = processor.apply_chat_template(
        messages,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(**inputs, max_new_tokens=512)
    response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0]
    return response

# 🖥️ Gradio Interface
gr.Interface(
    fn=scout_infer,
    inputs=[
        gr.Image(type="pil", label="Image 1 (Optional)"),
        gr.Image(type="pil", label="Image 2 (Optional)"),
        gr.Textbox(label="Ask your question or give a prompt")
    ],
    outputs=gr.Textbox(label="Llama 4 Scout Response"),
    title="🦙 Llama 4 Scout - Multimodal Reasoning Demo",
    description="Upload 1-2 images and ask a question. Llama-4-Scout will compare, describe, or reason across them."
).launch(share=True)  # 👉 share=True gives you a public URL
Step 17: Access the Gradio Web App
Access the Gradio web app at the URLs printed by the launch cell, for example:
Running on local URL: http://127.0.0.1:7860
Running on public URL: https://faf1e52f64e9618aff.gradio.live
Upload images, enter a prompt, and generate a response.
Conclusion
Running Llama-4-Scout-17B-16E-Instruct on a GPU-powered Virtual Machine is now more accessible than ever. With just a few steps—setting up your cloud instance, authenticating access, installing the right tools, and loading the model—you can start exploring powerful text and image understanding capabilities right from a Jupyter Notebook or through a simple Gradio interface. Whether you’re building tools, testing ideas, or working on research, this setup gives you full control, speed, and flexibility to get started in minutes.