ALLaM-7B-Instruct-preview is a bilingual text generation model built to support both Arabic and English. Developed by the National Center for Artificial Intelligence at SDAIA, it has been trained from scratch using a combination of 4 trillion English tokens and 1.2 trillion mixed Arabic-English tokens. This balanced training approach preserves strong English performance while introducing deep understanding of Arabic. The model is designed for instruction-following tasks and is optimized for cultural alignment, question answering, summarization, and general-purpose conversations in both supported languages.
Model Resource
Hugging Face
Link: https://huggingface.co/ALLaM-AI/ALLaM-7B-Instruct-preview
Evaluation Scores of ALLaM
Model | AVG | ETEC 0 shot | IEN-MCQ 0 shot | IEN-TF 0 shot | AraPro 0 shot | AraMath 5 shot | Ar-IFEval (prompt strict) 0 shot | Ar-IFEval (inst strict) 0 shot | ExamsAR 5 shot | ACVA 5 shot | Arabic MMLU 0 Shot | Openai MMLU 0 shot | GAT 0 shot |
---|---|---|---|---|---|---|---|---|---|---|---|---|---
ALLaM-7B-Instruct-preview | 64.42 | 66.67 | 91.77 | 82.95 | 69.71 | 66.78 | 31.34 | 67.65 | 51.58 | 76.33 | 67.78 | 55.91 | 44.53 |
AceGPT-v2-8B-Chat | 52.67 | 56.81 | 77.01 | 75.91 | 63.51 | 41.49 | 10.26 | 39.25 | 51.96 | 72.69 | 57.02 | 49.99 | 36.15 |
AceGPT-v2-32B-Chat | 62.23 | 64.81 | 81.6 | 80.35 | 67.19 | 64.46 | 25.75 | 63.41 | 55.31 | 71.57 | 68.3 | 60.8 | 43.21 |
jais-family-6p7b-chat | 46.31 | 45.47 | 46.22 | 63.92 | 54.31 | 25.29 | 13.99 | 52.97 | 46.93 | 73.8 | 56.15 | 44.96 | 31.71 |
jais-family-13b-chat | 49.14 | 48.65 | 62.95 | 68.68 | 57.53 | 26.61 | 17.16 | 54.27 | 45.07 | 71.18 | 58.14 | 47.73 | 31.72 |
jais-family-30b-16k-chat | 52.54 | 53.31 | 74.88 | 68.76 | 62.79 | 41.49 | 16.6 | 54.95 | 49.72 | 60.08 | 62.04 | 50.98 | 34.85 |
jais-family-30b-8k-chat | 53.19 | 53.52 | 72.76 | 70.65 | 61.27 | 33.39 | 16.79 | 54.68 | 50.28 | 74.47 | 63.11 | 50.9 | 36.44 |
jais-adapted-7b-chat | 45.19 | 40.49 | 57.38 | 67.18 | 50.59 | 28.43 | 14.93 | 54.27 | 40.6 | 70.44 | 49.75 | 38.54 | 29.68 |
jais-adapted-13b-chat | 51.86 | 48.12 | 69.65 | 71.85 | 59.07 | 37.02 | 23.32 | 60.61 | 48.23 | 67.78 | 56.42 | 46.83 | 33.4 |
jais-adapted-70b-chat | 58.32 | 56.81 | 74.51 | 76.47 | 64.59 | 45.62 | 27.05 | 65.05 | 54.75 | 73.33 | 65.74 | 56.82 | 39.15 |
Qwen2.5-7B-Instruct | 60.55 | 64.12 | 66.38 | 78.46 | 64.63 | 71.74 | 28.17 | 65.19 | 50.65 | 78.17 | 61.54 | 56.1 | 41.42 |
Qwen2.5-14B-Instruct | 71.26 | 72.18 | 80.51 | 77.64 | 69.11 | 82.81 | 68.66 | 86.76 | 57.54 | 75.04 | 69.36 | 63.8 | 51.7 |
Qwen2.5-72B-Instruct | 76.91 | 78.7 | 86.88 | 86.62 | 74.69 | 92.89 | 67.72 | 87.51 | 60.71 | 79.92 | 74.1 | 73.59 | 59.54 |
Mistral-7B-Instruct-v0.3 | 43.05 | 35.67 | 53.59 | 63.4 | 43.85 | 27.11 | 30.41 | 64.03 | 34.08 | 60.25 | 45.27 | 32.3 | 26.65 |
Mistral-Nemo-Instruct-2407 | 53.79 | 49.28 | 68.43 | 71.78 | 57.61 | 40.0 | 35.82 | 70.58 | 47.49 | 76.92 | 55.97 | 46.15 | 25.44 |
Mistral-Small-Instruct-2409 | 51.11 | 40.96 | 60.64 | 63.66 | 47.73 | 44.46 | 51.12 | 78.16 | 38.73 | 68.93 | 50.43 | 39.63 | 28.82 |
Falcon3-7B-Instruct | 41.3 | 37.52 | 52.65 | 57.63 | 41.47 | 56.53 | 8.58 | 47.92 | 31.84 | 58.98 | 42.08 | 32.36 | 27.99 |
Meta-Llama-3.1-8B-Instruct | 54.08 | 45.68 | 59.23 | 71.7 | 52.51 | 34.38 | 51.87 | 79.11 | 52.51 | 69.93 | 56.43 | 44.67 | 30.9 |
Llama-3.3-70B-Instruct | 71.43 | 68.84 | 79.6 | 78.81 | 70.49 | 70.91 | 70.9 | 88.6 | 65.74 | 76.93 | 72.01 | 70.25 | 44.12 |
Closed Models Evaluations
Model | ETEC 0 shot | IEN-MCQ 0 shot | IEN-TF 0 shot | AraPro 0 shot | AraMath 5 shot | Ar-IFEval (prompt strict) 0 shot | Ar-IFEval (inst strict) 0 shot | ExamsAR 5 shot | ACVA 5 shot | Arabic MMLU 0 shot | Openai MMLU 0 shot | GAT 0 shot |
---|---|---|---|---|---|---|---|---|---|---|---|---
Azureml GPT4o (gpt-4o-900ptu) | 79.39 | 92.03 | 88.97 | 80.86 | 83.47 | 70.9 | 88.12 | 61.82 | 72.51 | 79.02 | 76.5 | 62.65 |
Claude Sonnet 3.5 (claude-3-5-sonnet-20241022) | 85.9 | 86.17 | 89.42 | 81.46 | 79.83 | 53.73 | 80.14 | 62.38 | 80.42 | 69.5 | 66.4 | 68.89 |
gemini pro 1.5 (gemini-1.5-pro) | 83.31 | 88.28 | 85.44 | 76.22 | 94.88 | 74.81 | 90.17 | 58.1 | 75.17 | 82.0 | 64.8 | 59.14 |
English Benchmarks
Model | Avg | AGIEval 0 Shot | Arc (challenge) 0 Shot | GPQA (main) 0 Shot | Hendrycks ethics 0 Shot | Winogrande 0 Shot | HellaSwag 0 Shot | TriviaQA 5 Shot | MMLU Pro 5 Shot | Minerva Math 4 Shot | MMLU 0 Shot | TruthfulQA (mc2) 0 Shot | IFEval (prompt strict) 0 Shot | IFEval (inst strict) 0 Shot | GSM8k 5 Shot |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
ALLaM-7B-Instruct-preview | 46.85 | 41.99 | 51.28 | 22.77 | 73.17 | 70.48 | 76.26 | 16.07 | 30.4 | 17.3 | 59.6 | 46.67 | 38.08 | 50.0 | 61.79 |
AceGPT-v2-8B-Chat | 49.51 | 37.17 | 53.5 | 25.67 | 68.14 | 73.72 | 79.21 | 67.65 | 37.38 | 17.58 | 64.62 | 55.2 | 23.48 | 32.97 | 56.86 |
AceGPT-v2-32B-Chat | 57.14 | 56.01 | 53.92 | 32.81 | 66.23 | 79.16 | 83.29 | 69.45 | 45.89 | 32.8 | 74.03 | 59.18 | 27.54 | 40.89 | 78.7 |
jais-family-6p7b-chat | 38.33 | 30.56 | 44.62 | 23.21 | 65.7 | 62.43 | 72.05 | 29.74 | 23.3 | 2.56 | 49.62 | 40.99 | 14.05 | 23.5 | 54.36 |
jais-family-13b-chat | 42.62 | 30.31 | 47.87 | 25.89 | 65.91 | 65.04 | 75.0 | 35.82 | 24.4 | 19.1 | 51.91 | 40.57 | 19.41 | 30.82 | 64.59 |
jais-family-30b-16k-chat | 45.15 | 31.85 | 48.46 | 23.88 | 69.44 | 68.19 | 76.21 | 43.99 | 29.11 | 22.3 | 58.5 | 44.78 | 18.3 | 29.14 | 67.93 |
jais-family-30b-8k-chat | 47.59 | 36.65 | 48.38 | 21.88 | 69.28 | 70.32 | 78.55 | 46.67 | 28.7 | 26.44 | 57.46 | 49.49 | 22.92 | 37.05 | 72.48 |
jais-adapted-7b-chat | 44.91 | 32.9 | 52.65 | 23.88 | 55.32 | 71.74 | 79.39 | 63.89 | 24.38 | 15.34 | 52.36 | 41.12 | 22.0 | 35.73 | 58.07 |
jais-adapted-13b-chat | 47.7 | 36.49 | 54.18 | 26.34 | 65.73 | 69.77 | 80.86 | 58.48 | 26.29 | 21.34 | 55.66 | 42.27 | 24.95 | 36.57 | 68.84 |
jais-adapted-70b-chat | 53.49 | 39.96 | 59.56 | 20.98 | 70.8 | 77.27 | 84.06 | 68.64 | 37.25 | 27.72 | 65.23 | 44.49 | 31.61 | 44.0 | 77.26 |
Qwen2.5-7B-Instruct | 54.68 | 59.2 | 51.28 | 26.56 | 73.76 | 69.38 | 79.55 | 50.59 | 44.92 | 12.04 | 70.56 | 58.93 | 57.3 | 68.23 | 43.29 |
Qwen2.5-14B-Instruct | 62.37 | 66.32 | 62.12 | 25.89 | 76.19 | 75.77 | 84.36 | 59.47 | 52.44 | 23.04 | 78.93 | 69.01 | 52.13 | 64.03 | 83.47 |
Qwen2.5-72B-Instruct | 70.06 | 71.09 | 63.48 | 25.67 | 78.33 | 76.24 | 87.41 | 70.9 | 62.77 | 54.04 | 83.44 | 69.54 | 67.65 | 77.1 | 93.25 |
Mistral-7B-Instruct-v0.3 | 51.98 | 36.45 | 58.87 | 23.21 | 72.58 | 73.95 | 82.93 | 67.97 | 33.18 | 13.44 | 59.74 | 59.69 | 42.51 | 54.8 | 48.37 |
Mistral-Nemo-Instruct-2407 | 54.0 | 39.65 | 59.04 | 24.33 | 67.86 | 74.66 | 82.35 | 72.77 | 44.27 | 29.62 | 65.56 | 54.88 | 30.13 | 38.97 | 71.95 |
Mistral-Small-Instruct-2409 | 61.65 | 40.76 | 60.49 | 25.89 | 72.27 | 78.53 | 85.35 | 79.11 | 47.47 | 39.42 | 69.42 | 56.35 | 58.23 | 68.35 | 81.43 |
Falcon3-7B-Instruct | 58.04 | 43.84 | 59.47 | 33.71 | 70.39 | 70.09 | 78.43 | 51.98 | 46.73 | 30.76 | 68.14 | 55.53 | 56.01 | 68.59 | 78.92 |
Meta-Llama-3.1-8B-Instruct | 56.5 | 42.39 | 55.12 | 27.23 | 66.69 | 73.95 | 79.28 | 70.05 | 40.64 | 34.26 | 67.96 | 54.05 | 44.36 | 58.51 | 76.5 |
Llama-3.3-70B-Instruct | 67.7 | 55.44 | 63.4 | 25.89 | 81.05 | 79.24 | 84.39 | 81.7 | 60.51 | 46.42 | 81.99 | 60.91 | 63.22 | 72.78 | 90.83 |
Multi-Turn Bench
Model | AR Average | AR Turn 1 | AR Turn 2 | EN Average | EN Turn 1 | EN Turn 2 |
---|---|---|---|---|---|---
ALLaM-7B-Instruct-preview | 5.9 | 6.93 | 4.88 | 6.5 | 7.49 | 5.15 |
AceGPT-v1.5-13B-Chat | 4.61 | 5.28 | 3.93 | 4.86 | 5.56 | 4.17 |
AceGPT-v2-32B-Chat | 5.43 | 6.61 | 4.26 | 6.5 | 7.41 | 5.58 |
jais-family-13b-chat | 4.89 | 5.37 | 4.41 | 4.77 | 5.57 | 3.97 |
jais-family-30b-16k-chat | 4.87 | 5.50 | 4.25 | 5.13 | 5.86 | 4.4 |
jais-adapted-70b-chat | 5.86 | 6.33 | 5.38 | 5.88 | 6.41 | 5.36 |
Minimum GPU Configuration
Before proceeding, ensure your VM has a powerful GPU, such as:
- NVIDIA A100 (80GB)
- NVIDIA H100
- RTX 4090 (24GB VRAM)
- A6000 (48GB VRAM)
- Multiple GPUs (4x A100 recommended for full 7B model)
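As a rough sanity check on these requirements, the weight memory for a 7B-parameter model can be estimated from the parameter count and the precision used to load it (the figures below are back-of-envelope estimates, not measured numbers; activations, KV cache, and framework overhead add several more GB):

```python
# Back-of-envelope VRAM estimate for model weights alone.
def weights_gb(n_params: int, bytes_per_param: int) -> float:
    """Gigabytes needed just to hold the weights at a given precision."""
    return n_params * bytes_per_param / 1024**3

n = 7_000_000_000                     # 7B parameters
print(round(weights_gb(n, 2), 1))     # bf16 (2 bytes/param) -> 13.0 GB
print(round(weights_gb(n, 4), 1))     # fp32 (4 bytes/param) -> 26.1 GB
```

This is why bf16 loading (used later in this guide) fits on a single 24-48GB card, while a full fp32 load would not fit on a 24GB GPU.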
Step-by-Step Process to Install ALLaM 7B Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button in the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x RTXA6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy ALLaM 7B on a Jupyter Virtual Machine. This open-source platform lets you install and run ALLaM 7B on your GPU node. Running the model in a Jupyter Notebook avoids the terminal entirely, simplifying the process and reducing setup time, so you can configure the model in just a few steps and minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to Jupyter Notebook
Once your GPU VM deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ Button in the top right corner.
After clicking the ‘Connect’ button, you can view the Jupyter Notebook.
Now open a Python 3 (ipykernel) notebook.
Step 8: Install PyTorch with GPU Support
Run the following command to install PyTorch with GPU support:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
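Once installation finishes, a quick check in a fresh notebook cell confirms that PyTorch sees the GPU (the device name shown in the comment is only an example; the output depends on your VM):

```python
import torch

# Report the installed build and whether CUDA is usable on this machine
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA RTX A6000"
```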
Step 9: Install Transformers, Accelerate, and Safetensors
Run the following command to install transformers, accelerate, and safetensors:
pip install transformers accelerate safetensors
Step 10: Install Huggingface Hub
Run the following command to install huggingface hub:
pip install huggingface_hub
Step 11: Install Protobuf
Run the following command to install protobuf:
pip install protobuf
Step 12: Install Blobfile
Run the following command to install blobfile:
pip install blobfile
Step 13: Install Sentencepiece
Run the following command to install sentencepiece:
pip install sentencepiece
Step 14: Download and Load ALLaM-7B Model
Run the following code to download and load ALLaM-7B model:
from transformers import AutoModelForCausalLM, LlamaTokenizer
import torch
model_name = "ALLaM-AI/ALLaM-7B-Instruct-preview"
# ✅ Use the slow tokenizer to load tokenizer.model properly
tokenizer = LlamaTokenizer.from_pretrained(model_name, use_fast=False)
# ✅ Load the model
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
# Prompt
messages = [{"role": "user", "content": "كيف أجهز كوب شاهي؟"}]  # "How do I prepare a cup of tea?"
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Tokenize and move to GPU
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
# Generate
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.6)
print(tokenizer.decode(output[0], skip_special_tokens=True))
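The generate() call above combines temperature, top_k, and top_p sampling. As a hedged illustration of what the top_p (nucleus) filter does, here is a toy re-implementation over a hand-written 4-token distribution; the function name and probabilities are invented for this example, and the real filtering happens inside transformers:

```python
def top_p_filter(probs, top_p=0.95):
    """Keep the smallest set of highest-probability tokens whose
    cumulative probability reaches top_p; the rest are never sampled."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return kept

# Toy next-token distribution over 4 tokens
print(top_p_filter([0.6, 0.3, 0.08, 0.02]))  # [0, 1, 2] -- the 2% tail token is cut
```

Lowering temperature (0.6 here) sharpens the distribution before this filter is applied, so fewer tokens survive the cut and outputs become more deterministic.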
Note: This is a step-by-step guide for interacting with your model. It covers the first method for installing ALLaM 7B locally using transformers and running it in the Jupyter Notebook.
Option 2: Using Terminal
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button in the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x RTXA6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy ALLaM 7B on an NVIDIA CUDA Virtual Machine. This image ships with NVIDIA's proprietary CUDA parallel computing platform and the drivers you will need to install and run ALLaM 7B on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Step 8: Check the Available Python Version and Install a Newer Version
Run the following command to check the available Python version:
python3 --version
The system has Python 3.8.1 available by default. To install a higher version of Python, you'll need to use the deadsnakes PPA.
Run the following commands to add the deadsnakes PPA:
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
Step 9: Install Python 3.11
Now, run the following command to install Python 3.11 or another desired version:
sudo apt install -y python3.11 python3.11-distutils python3.11-venv
Step 10: Update the Default Python3 Version
Now, run the following command to link the new Python version as the default python3:
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo update-alternatives --config python3
Then, run the following command to verify that the new Python version is active:
python3 --version
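If you prefer a programmatic check over reading the version banner, Python can verify itself; the minimum version asserted below is an assumption for illustration, so adjust it to whatever you installed:

```python
import sys

# Fail fast if the active interpreter is older than we expect
assert sys.version_info >= (3, 8), f"Python too old: {sys.version}"
print(".".join(map(str, sys.version_info[:3])))
```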
Step 11: Install and Update Pip
Run the following command to install and update pip:
python3 -m ensurepip --upgrade
python3 -m pip install --upgrade pip
Then, run the following command to check the version of pip:
pip --version
Step 12: Install PyTorch
Run the following command to install PyTorch:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
Step 13: Install Transformers, Accelerate, and Safetensors
Run the following command to install transformers, accelerate, and safetensors:
pip install transformers accelerate safetensors
Step 14: Install Huggingface Hub
Run the following command to install huggingface hub:
pip install huggingface_hub
Step 15: Install Sentencepiece
Run the following command to install sentencepiece:
pip install sentencepiece
Step 16: Install Blobfile
Run the following command to install blobfile:
pip install blobfile
Step 17: Install Protobuf
Run the following command to install protobuf:
pip install protobuf
Step 18: Connecting to a Remote GPU VM from VS Code
Connecting to a remote GPU VM (like your NodeShift Cloud machine) from VS Code makes it super easy to develop and run Python scripts (like your terminal-based chatbot) directly on the server.
Prerequisites
Before we begin:
- You must have VS Code installed on your local machine.
- You need the Remote – SSH extension installed.
- Your remote VM must have:
- A public IP address
- SSH access enabled
- Your SSH key configured
Step-by-Step Setup
Step 1: Install VS Code Remote – SSH Extension
- Open VS Code.
- Go to the Extensions sidebar (Ctrl+Shift+X).
- Search for "Remote – SSH" and install it.
🔗 Extension: Remote – SSH (by Microsoft)
Step 2: Add SSH Configuration
- Press Ctrl+Shift+P and select: Remote-SSH: Open SSH Configuration File. Choose the one for your user (usually ~/.ssh/config).
- Add the following block (replace with your actual details):
Host allam-vm
HostName 84.32.34.49 # Your VM public IP
User root # Your VM user
IdentityFile ~/.ssh/id_rsa # Path to your private SSH key
Step 3: Connect from VS Code
- Press Ctrl+Shift+P → Remote-SSH: Connect to Host…
- Choose allam-vm from the list
- VS Code will open a new remote window and set everything up automatically (might prompt to install the VS Code server on the VM — just accept).
Step 4: Start Coding
Once connected:
- You can open any folder or directory on the VM
- Create a new Python script like allam_chat.py
- Use the VS Code terminal to run scripts directly on the GPU machine
Save the following terminal chatbot script as allam_chat.py:
from transformers import AutoModelForCausalLM, LlamaTokenizer
import torch

model_name = "ALLaM-AI/ALLaM-7B-Instruct-preview"

# Slow tokenizer loads tokenizer.model properly; bf16 keeps the 7B model within VRAM
tokenizer = LlamaTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

print("🟢 ALLaM Chat is running. Type your message (or 'exit' to quit):")
while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        print("👋 Exiting chat.")
        break
    messages = [{"role": "user", "content": user_input}]
    input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_k=50, top_p=0.95, temperature=0.6)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print("ALLaM:", response)
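One quirk of the script above: tokenizer.decode(outputs[0]) returns the prompt followed by the reply, because generate() keeps the input tokens in its output sequence. If you only want the newly generated text, slice the prompt tokens off first. The sketch below mimics that slicing with plain Python lists in place of tensors (the token ids are invented for illustration):

```python
# generate() returns prompt tokens + newly generated tokens in one sequence;
# dropping the first len(prompt_ids) entries leaves only the model's reply.
prompt_ids = [101, 2023, 2003, 102]                 # invented ids for the prompt
output_ids = [101, 2023, 2003, 102, 555, 666, 777]  # prompt + 3 new tokens
new_ids = output_ids[len(prompt_ids):]
print(new_ids)  # [555, 666, 777]
```

With real tensors, the equivalent is outputs[0][inputs["input_ids"].shape[1]:] before decoding.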
Step 19: Run Python Script
Run the Python script:
python3 allam_chat.py
Conclusion
ALLaM-7B-Instruct-preview is a thoughtfully crafted bilingual model built to handle both Arabic and English tasks with strong accuracy and cultural alignment. With extensive training on a wide range of data and languages, it delivers well-structured responses for instruction-based tasks, question answering, and dialogue. This guide provided a complete walkthrough to help users deploy and run the model efficiently on GPU-powered virtual machines using NodeShift. Whether through Jupyter or terminal access, ALLaM-7B is easy to integrate into research, development, and content workflows requiring language understanding across Arabic and English.