OLMo 2 is a family of advanced language models developed by the Allen Institute for AI, available in 7B and 13B parameter sizes. Trained on up to 5 trillion tokens using a staged approach and high-quality datasets like OLMo-Mix-1124 and Dolmino-Mix-1124, these models achieve significant performance improvements over their predecessors. Designed for diverse applications, they are competitive with leading open-weight models like Llama 3.1 on academic benchmarks, making them a robust choice for research and development in natural language processing.
Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
---|---|---|---|---|---|
OLMo 2-7B | 4 Trillion | 32 | 4096 | 32 | 4096 |
OLMo 2-13B | 5 Trillion | 40 | 5120 | 40 | 4096 |
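As a rough sanity check, the parameter counts in the table can be approximated from the architecture columns using the standard ≈12·layers·hidden² transformer estimate. This is an illustrative back-of-the-envelope sketch, not the exact OLMo 2 accounting, which also includes embeddings and normalization layers:

```python
# Back-of-the-envelope transformer parameter estimate: ~12 * layers * hidden^2
# (attention ~4*d^2 + MLP ~8*d^2 per layer; ignores embeddings and norms).
def approx_params(layers: int, hidden: int) -> float:
    return 12 * layers * hidden ** 2

olmo2_7b = approx_params(32, 4096)   # ~6.4e9, close to the nominal 7B
olmo2_13b = approx_params(40, 5120)  # ~12.6e9, close to the nominal 13B
print(f"7B estimate:  {olmo2_7b / 1e9:.1f}B parameters")
print(f"13B estimate: {olmo2_13b / 1e9:.1f}B parameters")
```

Both estimates land within a few percent of the nominal sizes, which is a useful quick check when comparing architectures.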
Benchmark Table for OLMo 2 Models
Model | Train FLOPs | Average | ARC/C | HSwag | WinoG | MMLU | DROP | NQ | AGIEval | GSM8k | MMLUPro | TriviaQA |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Open weights models: | | | | | | | | | | | | |
Llama-2-13B | 1.6·10²³ | 54.1 | 67.3 | 83.9 | 74.9 | 55.7 | 45.6 | 38.4 | 41.5 | 28.1 | 23.9 | 81.3 |
Mistral-7B-v0.3 | n/a | 58.8 | 78.3 | 83.1 | 77.7 | 63.5 | 51.8 | 37.2 | 47.3 | 40.1 | 30 | 79.3 |
Llama-3.1-8B | 7.2·10²³ | 61.8 | 79.5 | 81.6 | 76.6 | 66.9 | 56.4 | 33.9 | 51.3 | 56.5 | 34.7 | 80.3 |
Mistral-Nemo-12B | n/a | 66.9 | 85.2 | 85.6 | 81.5 | 69.5 | 69.2 | 39.7 | 54.7 | 62.1 | 36.7 | 84.6 |
Qwen-2.5-7B | 8.2·10²³ | 67.4 | 89.5 | 89.7 | 74.2 | 74.4 | 55.8 | 29.9 | 63.7 | 81.5 | 45.8 | 69.4 |
Gemma-2-9B | 4.4·10²³ | 67.8 | 89.5 | 87.3 | 78.8 | 70.6 | 63 | 38 | 57.3 | 70.1 | 42 | 81.8 |
Qwen-2.5-14B | 16.0·10²³ | 72.2 | 94 | 94 | 80 | 79.3 | 51.5 | 37.3 | 71 | 83.4 | 52.8 | 79.1 |
Partially open models: | | | | | | | | | | | | |
StableLM-2-12B | 2.9·10²³ | 62.2 | 81.9 | 84.5 | 77.7 | 62.4 | 55.5 | 37.6 | 50.9 | 62 | 29.3 | 79.9 |
Zamba-2-7B | n/c | 65.2 | 92.2 | 89.4 | 79.6 | 68.5 | 51.7 | 36.5 | 55.5 | 67.2 | 32.8 | 78.8 |
Fully open models: | | | | | | | | | | | | |
Amber-7B | 0.5·10²³ | 35.2 | 44.9 | 74.5 | 65.5 | 24.7 | 26.1 | 18.7 | 21.8 | 4.8 | 11.7 | 59.3 |
OLMo-7B | 1.0·10²³ | 38.3 | 46.4 | 78.1 | 68.5 | 28.3 | 27.3 | 24.8 | 23.7 | 9.2 | 12.1 | 64.1 |
MAP-Neo-7B | 2.1·10²³ | 49.6 | 78.4 | 72.8 | 69.2 | 58 | 39.4 | 28.9 | 45.8 | 12.5 | 25.9 | 65.1 |
OLMo-0424-7B | 0.9·10²³ | 50.7 | 66.9 | 80.1 | 73.6 | 54.3 | 50 | 29.6 | 43.9 | 27.7 | 22.1 | 58.8 |
DCLM-7B | 1.0·10²³ | 56.9 | 79.8 | 82.3 | 77.3 | 64.4 | 39.3 | 28.8 | 47.5 | 46.1 | 31.3 | 72.1 |
OLMo-2-1124-7B | 1.8·10²³ | 62.9 | 79.8 | 83.8 | 77.2 | 63.7 | 60.8 | 36.9 | 50.4 | 67.5 | 31 | 78 |
OLMo-2-1124-13B | 4.6·10²³ | 68.3 | 83.5 | 86.4 | 81.5 | 67.5 | 70.7 | 46.7 | 54.2 | 75.1 | 35.1 | 81.9 |
Model Resource
Hugging Face
Link: https://huggingface.co/allenai/OLMo-2-1124-7B
Ollama
Link: https://ollama.com/library/olmo2
Prerequisites for Installing OLMo 2 Locally
Hardware Requirements:
- GPU:
- Minimum: 16GB VRAM (8-bit quantization recommended).
- Recommended: 24GB or higher for smooth performance (e.g., RTX 3090, 4090, A100, or H100).
- Disk Space: At least 50GB free SSD storage.
- RAM: Minimum 16GB, recommended 32GB.
- CPU: Multi-core processor (e.g., 8–16 cores).
Step-by-Step Process to Install OLMo 2 Model Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side. Select the GPU Nodes option, create a GPU Node in the Dashboard, click the Create GPU Node button, and create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy OLMo 2 on an NVIDIA CUDA Virtual Machine, which provides the parallel computing platform needed to run OLMo 2 on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Ollama
After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Step 9: Serve Ollama
Run the following command to start the Ollama server so that models can be pulled and queried:
ollama serve
Now, “Ollama is running.”
Step 10: Select olmo2 7b Model
Link: https://ollama.com/library/olmo2:7b
The OLMo 2 model is available in two sizes: 7B and 13B. We will run both models, one by one, on our GPU virtual machine.
Step 11: Connect with SSH
Now, open a new tab in the terminal and reconnect using SSH.
Step 12: Check Commands
Run the following command to see a list of available commands:
ollama
Step 13: Check Available Models
Run the following command to list the downloaded models:
ollama list
Step 14: Pull OLMo 2 7b Model
Run the following command to pull the OLMo 2 7b model:
ollama pull olmo2:7b
Step 15: Run OLMo 2 7b Model
Now, you can run the model in the terminal using the following command and interact with your model:
ollama run olmo2:7b
Step 16: Select OLMo 2 13b Model
Link: https://ollama.com/library/olmo2:13b
Step 17: Pull OLMo 2 13b Model
Run the following command to pull the OLMo 2 13B model:
ollama pull olmo2:13b
Step 18: Run OLMo 2 13b Model
Now, you can run the model in the terminal using the following command and interact with your model:
ollama run olmo2:13b
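Beyond the interactive terminal session, Ollama also exposes a local REST API, which is convenient for scripting. Below is a minimal Python sketch that queries it, assuming `ollama serve` is running on its default port 11434 and the `olmo2:7b` model has been pulled:

```python
# Minimal sketch of calling the local Ollama REST API, assuming
# `ollama serve` is running on its default port 11434.
import json
import urllib.request

def build_payload(prompt: str, model: str = "olmo2:7b") -> dict:
    # "stream": False asks Ollama to return one complete JSON response
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "olmo2:7b") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("Briefly, what is OLMo 2?")` returns the model's text response; pass `model="olmo2:13b"` to target the larger model instead.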
Option 1: Using Ollama (Terminal)
- Install Ollama: Download and install the Ollama tool from the official site.
- Pull the Model: Run the following command to download the desired model:
ollama pull olmo2
- Run the Model: Start the model in the terminal:
ollama run olmo2
Option 2: Using Open WebUI
- Set Up Open WebUI:
Follow our Open WebUI Setup Guide to configure the interface. Ensure all dependencies are installed and the environment is correctly set up.
- Refresh the Interface:
Confirm that the OLMo 2 model has been downloaded and is visible in the list of available models on the Open WebUI.
- Select Your Model:
Choose the OLMo 2 model (7B or 13B) from the list of available models.
- Start Interaction:
Begin using the model by entering your queries in the interface.
Option 3: Using Hugging Face and Jupyter Notebook
- Follow our Jupyter Notebook Setup Guide to configure your notebook environment. Ensure that all required dependencies are installed and that your Jupyter Notebook is set up correctly for optimal use.
When choosing an image for your Virtual Machine, select the Jupyter Notebook image. This open-source platform allows you to install and run the OLMo 2 on your GPU node. By running this model on a Jupyter Notebook, you can avoid using the terminal, simplifying the process and reducing setup time. This approach enables you to configure the model in just a few steps and minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
Link: https://huggingface.co/allenai/OLMo-2-1124-7B
GPU Recommendations for OLMo 2 Model
7B Model:
- Minimum VRAM: 16GB
Suitable for quantized inference (8-bit or 4-bit).
- Recommended VRAM: 24GB
Ideal for smoother inference or lightweight fine-tuning.
- Optimal VRAM: 48GB
Supports full-precision training and extended context lengths (4096 tokens).
13B Model:
- Minimum VRAM: 24GB
Works for inference with quantized formats (8-bit or 4-bit).
- Recommended VRAM: 48GB
Ensures seamless fine-tuning and long-context inference.
- Optimal VRAM: 80GB
Best for full-precision fine-tuning, multi-GPU setups, and handling large datasets.
GPU Recommendations by Type:
- RTX 3090/4090:
Good for inference with quantization.
- RTX A6000:
Suitable for full-precision tasks and medium-length context.
- NVIDIA A100 (40GB):
Excels at both inference and training for the 7B model.
- NVIDIA A100 (80GB):
Handles 13B model fine-tuning and long-context inference efficiently.
- NVIDIA H100 (80GB):
State-of-the-art performance for large-scale fine-tuning and extensive multi-step tasks.
Multi-GPU Scaling:
- Two GPUs:
2x RTX A6000 for efficient fine-tuning.
- Four GPUs:
4x A100 (40GB) for large-scale model training and high-throughput tasks.
Key Notes:
- Use SSD storage for faster model loading and data handling.
- Quantization methods (8-bit or 4-bit) help reduce memory requirements, enabling efficient model deployment on smaller GPUs.
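The effect of quantization on the VRAM figures above can be estimated with simple arithmetic: weight memory is roughly parameter count times bytes per parameter. This sketch ignores activations, the KV cache, and framework overhead, so treat the results as lower bounds rather than exact requirements:

```python
# Rough weight-memory estimate: params * bytes_per_param.
# Ignores activations, KV cache, and framework overhead, so the
# figures are lower bounds rather than exact requirements.
BYTES = {"fp16": 2.0, "8-bit": 1.0, "4-bit": 0.5}

def weight_gb(params_b: float, dtype: str) -> float:
    # params_b is in billions, so billions * bytes maps directly to GB.
    return params_b * BYTES[dtype]

for size, params in [("7B", 7.0), ("13B", 13.0)]:
    for dtype in ("fp16", "8-bit", "4-bit"):
        print(f"OLMo 2 {size} @ {dtype}: ~{weight_gb(params, dtype):.1f} GB weights")
```

The numbers line up with the recommendations above: the 7B model at fp16 needs ~14 GB for weights alone (hence the 16 GB minimum), while the 13B model in 8-bit (~13 GB) fits comfortably on a 24 GB card.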
System Configuration Summary
After setting up the VM and running your Jupyter Notebook, start installing the OLMo 2 model.
1. Install Required Dependencies
Run the following command to install the required libraries:
pip install --upgrade git+https://github.com/huggingface/transformers.git torch safetensors bitsandbytes
2. Verify GPU Availability
Run the following code in your Jupyter Notebook to ensure GPU detection:
import torch
print("CUDA available:", torch.cuda.is_available())
print("Device name:", torch.cuda.get_device_name(0))
3. Download the Model
1. Log in to Hugging Face:
huggingface-cli login --token <your_hugging_face_token>
2. Download the OLMo-2-1124-7B model files:
huggingface-cli download allenai/OLMo-2-1124-7B --include "original/*" --local-dir olmo-2-7b
4. Load the Model in Your Jupyter Notebook
Use the following code to load and test the model:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Define model path
model_id = "allenai/OLMo-2-1124-7B"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load model
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.float16 # Use float16 for GPU optimization
)
# Test input
input_text = "What is the purpose of the OLMo-2 model?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda") # Move inputs to GPU
# Generate response
output = model.generate(
**inputs,
max_new_tokens=50,
do_sample=True,
top_k=50,
top_p=0.95
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
5. Optimize for Limited GPU Memory
If your GPU has less VRAM (e.g., 16GB), use 8-bit quantization to reduce memory usage:
1. Install BitsAndBytes:
pip install bitsandbytes
2. Load the Model with Quantization:
from transformers import BitsAndBytesConfig
# Configure 8-bit quantization
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
# Load the model with quantization
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.float16,
quantization_config=quantization_config
)
6. Adjust Generation Parameters
To control the quality of the output, modify the generation parameters:
output = model.generate(
**inputs,
max_new_tokens=50,
do_sample=True, # Required for temperature/top-k/top-p to take effect
temperature=0.7, # Creativity control
repetition_penalty=1.2, # Penalize repetitive outputs
top_k=50,
top_p=0.9
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
7. Fine-Tuning (Optional)
If you want to fine-tune the model:
1. Clone the official OLMo GitHub repository:
git clone https://github.com/allenai/OLMo.git
cd OLMo
2. Customize the training script:
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
--data.paths=[{path_to_data}/input_ids.npy] \
--data.label_mask_paths=[{path_to_data}/label_mask.npy] \
--load_path={path_to_checkpoint} \
--reset_trainer_state
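The training command above references pre-tokenized `input_ids.npy` and `label_mask.npy` files. Below is a hypothetical sketch of how such files might be produced; the exact dtypes and shapes the OLMo training script expects should be verified against the repository's documentation, and the token IDs here are stand-ins for real tokenizer output:

```python
# Hypothetical sketch of producing the input_ids.npy / label_mask.npy
# files referenced by the training command. Verify the expected dtypes
# and shapes against the OLMo repo; here we assume fixed-length rows of
# token IDs plus a boolean loss mask.
import numpy as np

# Stand-in token IDs; in practice these come from tokenizing your corpus.
input_ids = np.array(
    [[101, 2054, 2003, 1996, 3800, 1997, 102, 0]], dtype=np.int32
)
# Mask out padding positions (ID 0) so they do not contribute to the loss.
label_mask = input_ids != 0

np.save("input_ids.npy", input_ids)
np.save("label_mask.npy", label_mask)
print(input_ids.shape, label_mask.dtype)
```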
You’re ready to use the model! 🚀
Conclusion
In conclusion, OLMo 2 stands as a powerful and versatile family of language models, offering robust performance across diverse natural language processing tasks. With efficient training methods, high-quality datasets, and compatibility with various platforms like NodeShift, Ollama, and Jupyter Notebook, it caters to both researchers and developers. Its scalability, advanced fine-tuning options, and support for resource-constrained deployments make OLMo 2 an excellent choice for pushing the boundaries of language model applications.