The open-source model ecosystem is moving fast, and Llama 4 is one of the most powerful and flexible model families available today. Built with native multimodality, Mixture-of-Experts (MoE) architecture, and support for tool calling, Llama 4 opens up a world of possibilities across text, code, and vision.
With the recent release of Ollama v0.8, developers can now leverage real-time streaming responses and tool invocation directly from their GPU Virtual Machines. Whether you’re building assistants, agents, or research tools, the combination of Llama 4 and Ollama makes it possible to run highly capable models locally with precision and control.
What Makes Llama 4 Special?
- Native Multimodal: Accepts both text and image input
- Supports 12 Languages: Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese
- Output: Multilingual text and code generation
- Architecture: Mixture-of-Experts (MoE) with 17B active parameters
- Use Cases: Chat assistants, visual reasoning, captioning, instruction following, synthetic data generation
Available Variants:
- Llama 4 Scout (`ollama run llama4:scout`) → 109B-parameter MoE with 17B active parameters (16 experts)
- Llama 4 Maverick (`ollama run llama4:maverick`) → 400B-parameter MoE with 17B active parameters (128 experts)
Llama 4 was pre-trained on a broader collection of about 200 languages, and developers may fine-tune it for languages beyond the 12 officially supported ones, provided they comply with the Llama 4 Community License. For image inputs, it has been tested with up to 5 input images.
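As a quick illustration of the multimodal input format, here is a minimal Python sketch that sends a text prompt plus a local image to a Llama 4 model served by Ollama. It assumes Ollama is already running on the default port 11434, that your Ollama version supports image input for Llama 4, and that photo.jpg is a placeholder path you would replace with a real file:

```python
import base64
import requests

# Placeholder image path; replace with your own file.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama4:scout",
        "messages": [
            {
                "role": "user",
                "content": "Describe what is happening in this image.",
                # Multimodal models accept base64-encoded images alongside text.
                "images": [image_b64],
            }
        ],
        "stream": False,
    },
    timeout=300,
)
print(response.json()["message"]["content"])
```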
Benchmarks
| Category | Benchmark | # Shots | Metric | Llama 3.3 70B | Llama 3.1 405B | Llama 4 Scout | Llama 4 Maverick |
|---|---|---|---|---|---|---|---|
| Image Reasoning | MMMU | 0 | accuracy | No multimodal support | No multimodal support | 69.4 | 73.4 |
| | MMMU Pro^ | 0 | accuracy | | | 52.2 | 59.6 |
| | MathVista | 0 | accuracy | | | 70.7 | 73.7 |
| Image Understanding | ChartQA | 0 | relaxed_accuracy | | | 88.8 | 90.0 |
| | DocVQA (test) | 0 | anls | | | 94.4 | 94.4 |
| Code | LiveCodeBench (10/01/2024-02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 |
| Reasoning & Knowledge | MMLU Pro | 0 | macro_avg/acc | 68.9 | 73.4 | 74.3 | 80.5 |
| | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 |
| Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 |
| Long Context | MTOB (half book) eng->kgv / kgv->eng | – | chrF | Context window is 128K | Context window is 128K | 42.2 / 36.6 | 54.0 / 46.4 |
| | MTOB (full book) eng->kgv / kgv->eng | – | chrF | | | 39.7 / 36.3 | 50.8 / 46.7 |
GPU Configuration
| Component | Configuration |
|---|---|
| GPU | 1 x H100 SXM |
| VRAM | 80 GB |
| vCPU | 24 |
| RAM | 48 GB |
| Image | nvidia/cuda:12.0.1 |
| OS | Ubuntu 20.04 or 22.04 |
What’s New in Ollama v0.8?
The May 28, 2025 release of Ollama v0.8 introduced groundbreaking updates:
- Streaming Responses: Token-by-token output while generating
- Tool Calling Support: Real-time function calling via tools[]
- Improved Parsing: More accurate detection of JSON tool call structures
Supported Tool-Calling Models in Ollama v0.8:
- Qwen 3
- Qwen2.5 & Qwen2.5-Coder
- Devstral
- Llama 3.1
- Llama 4
- and more.
These models can now generate responses that dynamically call external functions, enabling capabilities like weather lookup, calculations, file parsing, and more.
curl http://localhost:11434/api/chat -d '{
"model": "qwen3",
"messages": [
{ "role": "user", "content": "What is the weather today in Toronto?" }
],
"stream": true,
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": { "type": "string" },
"format": { "type": "string", "enum": ["celsius", "fahrenheit"] }
},
"required": ["location", "format"]
}
}
}
]
}'
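If you prefer Python over curl, the sketch below shows one way to consume the same streaming endpoint and react to a tool call. It follows the response shape documented by Ollama, where a tool invocation arrives under message.tool_calls, and uses a stubbed get_current_weather function as a placeholder rather than a real weather API:

```python
import json
import requests

def get_current_weather(location: str, format: str) -> str:
    # Stub in place of a real weather API call.
    return f"22 degrees {format} and sunny in {location}"

payload = {
    "model": "qwen3",
    "messages": [
        {"role": "user", "content": "What is the weather today in Toronto?"}
    ],
    "stream": True,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"},
                        "format": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location", "format"],
                },
            },
        }
    ],
}

# Stream the response; each line of the body is a JSON chunk.
with requests.post("http://localhost:11434/api/chat", json=payload, stream=True) as r:
    for line in r.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        message = chunk.get("message", {})
        # Regular tokens arrive in message.content; tool requests in message.tool_calls.
        if message.get("content"):
            print(message["content"], end="", flush=True)
        for call in message.get("tool_calls", []):
            fn = call["function"]
            if fn["name"] == "get_current_weather":
                print("\n[tool call] ->", get_current_weather(**fn["arguments"]))
```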
Step-by-Step Process to Run Llama 4 Locally with Tool Calling Enabled
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button on the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x H100 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Llama 4 on an NVIDIA CUDA Virtual Machine image; CUDA is NVIDIA's proprietary parallel computing platform and provides the drivers and toolkit needed to run Llama 4 on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
You can connect to and control NodeShift GPU Nodes from a terminal using the SSH key provided during GPU Node creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Ollama
After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Step 9: Serve Ollama
Run the following command to start the Ollama server so that models can be served and accessed through its API:
ollama serve
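As an optional sanity check, the server's root endpoint should answer once it is up. A minimal Python probe, assuming the default port 11434:

```python
import requests

# Ollama's root endpoint replies with "Ollama is running" when the server is up.
print(requests.get("http://localhost:11434").text)
```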
Step 10: Check Commands
Run the following command to see a list of available commands:
ollama
Step 11: Check Available Models
Run the following command to list the models currently available locally:
ollama list
Step 12: Connect with SSH
Because ollama serve keeps the current session busy in the foreground, open a new tab in the terminal and reconnect using SSH.
Step 13: Pull Llama4 Model
Run the following command to pull the llama4 model:
ollama pull llama4
Step 14: Pull Llama4:16x17b Model
Run the following command to pull the llama4:16x17b model (the Scout variant):
ollama pull llama4:16x17b
Step 15: Pull Llama4:128x17b Model
Run the following command to pull the llama4:128x17b model (the Maverick variant):
ollama pull llama4:128x17b
Step 16: Run Llama4:16x17b Model
Now, you can run the model in the terminal using the following command and interact with your model:
ollama run llama4:16x17b
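The interactive prompt is handy for quick tests, but you can also talk to the same model over the HTTP API. Below is a minimal streaming sketch in Python; it assumes the llama4:16x17b pull from Step 14 has completed and that the Ollama server from Step 9 is still running:

```python
import json
import requests

payload = {
    "model": "llama4:16x17b",
    "messages": [{"role": "user", "content": "Explain Mixture-of-Experts in two sentences."}],
    "stream": True,
}

# Print tokens as they arrive instead of waiting for the full reply.
with requests.post("http://localhost:11434/api/chat", json=payload, stream=True) as r:
    for line in r.iter_lines():
        if line:
            chunk = json.loads(line)
            print(chunk.get("message", {}).get("content", ""), end="", flush=True)
print()
```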
Try These Sample Prompts
Here are a few prompts to test Llama 4’s reasoning, tool-call-like behavior, and generation skills:
General Reasoning Prompt
What's the difference between RAG and fine-tuning? When should I use each?
Code Understanding Prompt
Here's a Python snippet. Can you explain what it does line by line?
```python
def fib(n):
a, b = 0, 1
for _ in range(n):
a, b = b, a + b
return a
```
Simulated Tool Calling Prompt (fake API call scenario)
Imagine you can call an external weather API. What tool would you call if I asked: "What’s the weather in New Delhi today?"
File-like Prompt (to simulate file parsing)
If I had a file called `report.csv` with user activity logs, how would you summarize it in Python?
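For reference, a typical model answer to that last prompt looks something like the sketch below. The column names (user_id, action, timestamp) are hypothetical, since the real layout of report.csv is not specified:

```python
import pandas as pd

# Hypothetical columns: user_id, action, timestamp.
df = pd.read_csv("report.csv", parse_dates=["timestamp"])

summary = {
    "total_events": len(df),
    "unique_users": df["user_id"].nunique(),
    "events_per_action": df["action"].value_counts().to_dict(),
    "date_range": (df["timestamp"].min(), df["timestamp"].max()),
}
print(summary)
```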
Recap: Ollama v0.8 Tool Calling
With v0.8, Ollama streams output token by token and lets models such as Qwen3, Devstral, Qwen2.5-Coder, Llama 3.1, and Llama 4 invoke the tools you define in real time.
This means you can now build your own custom agents with tools like calculator, getWeather, readFile, and more—directly on a GPU Node!
Conclusion
Setting up Llama 4 with tool-calling support on a GPU VM has never been this smooth. With NodeShift’s high-performance infrastructure and Ollama’s flexible local serving engine, you’re equipped to run world-class models with real-time capability.
Whether you’re building agents, experimenting with tool calling, or benchmarking next-gen models, Llama 4 + NodeShift + Ollama v0.8 is a killer combo.