C4AI Command A (March 2025) is a large-scale text generation model built to handle high-performance tasks across multiple languages and domains. Developed by Cohere and Cohere For AI, it contains 111 billion parameters and supports a massive 256K context length, making it ideal for long-form reasoning, dialogue, and retrieval-based tasks. Trained on 23 languages, it provides reliable performance in both conversational and non-conversational settings, and is optimized for secure, fast, and scalable deployment—even on just two GPUs. The model is instruction-tuned with enhanced capabilities for summarization, translation, coding, information extraction, and tool-assisted workflows.
Model Resource
Hugging Face
Link : https://huggingface.co/CohereForAI/c4ai-command-a-03-2025
Ollama
Link: https://ollama.com/library/command-a:111b
Minimum Configuration (For Quantized Inference – 4-bit/8-bit)
Suitable for basic testing, prototyping, or running on reduced precision with quantized weights.
- GPU:
- 2 × NVIDIA A100 40GB
- or 1 × NVIDIA H100 80GB
- or 2 × RTX 6000 Ada (48GB each)
- or 4 × RTX 3090 (24GB each) using model parallelism
- VRAM (Total):
- Minimum: 80GB
- Recommended: 96GB+
- CPU:
- RAM:
- Minimum: 128GB
- Recommended: 192GB+
- Disk:
- 300GB SSD (minimum)
- 500GB+ NVMe SSD recommended for caching, model weights, and RAG data storage
- Precision:
- int4 / int8 using AutoGPTQ, bitsandbytes, or exllama (see the loading sketch after this list)
- Frameworks:
- PyTorch ≥ 2.1
- Transformers ≥ 4.38
- CUDA ≥ 12.1
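As a rough illustration of the quantized route, here is a minimal sketch of loading the model in 4-bit with bitsandbytes through Transformers. This is not an official recipe; it assumes the libraries above are installed and that you have been granted access to the gated repository:
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
model_id = "CohereForAI/c4ai-command-a-03-2025"
# 4-bit NF4 quantization via bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread the quantized layers across the available GPUs
)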
Recommended Configuration (For Full Precision Inference – BF16 / FP16)
Ideal for smooth performance across tasks like summarization, translation, coding, and tool use.
- GPU:
- 2 × NVIDIA H100 SXM (80GB each)
- or 4 × A100 (40GB each)
- with NCCL + model parallelism
- VRAM (Total):
- 160GB+
- More is better for longer sequences or complex tool-augmented chains
- CPU:
- RAM:
- 256GB DDR5 or higher (especially for multi-user or enterprise environments)
- Disk:
- 1TB NVMe SSD or greater
- High IOPS for fast model loading and RAG document retrieval
- Precision:
- bfloat16 / float16 with device_map="auto" or FSDP (Fully Sharded Data Parallel)
- Parallelization Techniques:
- FSDP or DeepSpeed ZeRO-3
- Enable gradient checkpointing for memory efficiency
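As a point of reference, a minimal sketch of the full-precision path could look like the following. The max_memory values are illustrative assumptions for two 80GB cards, not required settings; device_map="auto" lets Accelerate shard the bfloat16 weights across the visible GPUs:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "CohereForAI/c4ai-command-a-03-2025"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",                    # shard layers across the visible GPUs
    max_memory={0: "75GiB", 1: "75GiB"},  # illustrative per-GPU cap for 2 × 80GB
)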
High-End Distributed Training Setup (Optional, for Research Labs & Fine-Tuning)
- Cluster Setup:
- 8+ A100 GPUs (40GB) or
- 4+ H100 GPUs (80GB) with NVLink and high-bandwidth interconnect
- CPU:
- RAM:
- Storage:
- 2TB NVMe SSD
- Optional EFS/NFS mount for datasets & logs
- Software Stack:
- PyTorch, Transformers, Accelerate, DeepSpeed
- Ray, Slurm, or Kubernetes for scheduling (if needed)
Step-by-Step Process to Install Command A 111B Locally
Option 1: Using Ollama and Terminal
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button on the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x H100 SXM GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Command A 111B on an NVIDIA CUDA Virtual Machine. This proprietary, closed-source parallel computing platform will allow you to install Command A 111B on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Ollama
After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Step 9: Serve Ollama
Run the following command to start the Ollama server so the model can be accessed and used:
ollama serve
Now, “Ollama is running.”
Step 10: Select Command A 111B Model
Link: https://ollama.com/library/command-a:111b
The Command A model is available in only one size: 111B. We will run it on our GPU virtual machine.
Step 11: Connect with SSH
Now, open a new tab in the terminal and reconnect using SSH.
Step 12: Check Commands
Run the following command to see a list of available commands:
ollama
Step 13: Pull Command A 111B Model
Run the following command to pull the Command A 111B model:
ollama pull command-a:111b
Step 14: Run Command A 111B Model
Now, you can run the model in the terminal using the following command and interact with your model:
ollama run command-a:111b
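Once the model is running, you can also query it programmatically instead of using the interactive prompt. The sketch below is an optional extra: it assumes Ollama is serving on its default endpoint (http://localhost:11434) and that the requests library is installed (pip install requests):
import requests
# Send a single chat request to the locally served Command A model
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "command-a:111b",
        "messages": [{"role": "user", "content": "Summarize your capabilities in two sentences."}],
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(response.json()["message"]["content"])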
Option 2: Using HuggingFace, Transformers and Terminal
Step 1: Access model from Hugging Face
Link: https://huggingface.co/CohereForAI/c4ai-command-a-03-2025
You need to agree to share your contact information to access this model. Fill in all the mandatory details, such as your name and email, and then wait for approval from Hugging Face and Cohere For AI to gain access and use the model.
You will be granted access to this model within a few minutes, provided you have filled in all the details correctly.
Step 2: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 3: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button on the Dashboard, and deploy your first Virtual Machine.
Step 4: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x H200 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 5: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 6: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Command A 111B on an NVIDIA CUDA Virtual Machine. This proprietary, closed-source parallel computing platform will allow you to install Command A 111B on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 7: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 8: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 9: Check the Available Python Version and Install a New Version
Run the following command to check the available Python version:
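python3 --version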
If you check the Python version, the system has Python 3.8.1 available by default. To install a newer version, you'll need to use the deadsnakes PPA.
Run the following commands to add the deadsnakes PPA:
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
Step 10: Install Python 3.11
Now, run the following command to install Python 3.11 or another desired version:
sudo apt install -y python3.11 python3.11-distutils python3.11-venv
Step 11: Update the Default Python3 Version
Now, run the following commands to register the new Python version and set it as the default python3:
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo update-alternatives --config python3
Then, run the following command to verify that the new Python version is active:
python3 --version
Step 12: Install and Update Pip
Run the following commands to install and update pip:
python3 -m ensurepip --upgrade
python3 -m pip install --upgrade pip
Then, run the following command to check the version of pip:
pip --version
Step 13: Install PyTorch
Run the following command to install PyTorch:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
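To confirm that PyTorch can see the GPU, you can optionally run a quick check:
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"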
Step 14: Install Transformers from HuggingFace
Run the following command to install Transformers from Hugging Face:
pip install git+https://github.com/huggingface/transformers.git
Step 15: Install Accelerate & Safetensors
Run the following command to install accelerate and safetensors:
pip install accelerate safetensors
Step 16: Install HuggingFace Hub
Run the following command to install huggingface_hub:
pip install huggingface_hub
Step 17: Login Using Your Hugging Face API Token
Use the huggingface-cli to log in directly from the terminal.
Run the following command to log in with huggingface-cli:
huggingface-cli login
Then, paste the token and press the Enter key. Note that the token input is not visible as you type, so make sure to press Enter after pasting it.
After entering the token, you will see the following output:
Login Successful.
The current active token is (your_token_name).
Check the screenshot below for reference.
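Alternatively, if you prefer to authenticate from Python rather than the CLI, huggingface_hub provides a login() helper. The token below is a placeholder; replace it with your own:
from huggingface_hub import login
login(token="hf_xxx")  # placeholder; paste your actual Hugging Face token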
How to Generate a Hugging Face Token
- Create an Account: Go to the Hugging Face website and sign up for an account if you don’t already have one.
- Access Settings: After logging in, click on your profile photo in the top right corner and select “Settings.”
- Navigate to Access Tokens: In the settings menu, find and click on the “Access Tokens” tab.
- Generate a New Token: Click the “New token” button, provide a name for your token, and choose a role (either read or write).
- Generate and Copy Token: Click the “Generate a token” button. Your new token will appear; click “Show” to view it and copy it for use in your applications.
- Secure Your Token: Ensure you keep your token secure and do not expose it in public code repositories.
Step 18: Create a Python Script
Connect your remote GPU server to VS Code and create a Python file to run Command A 111B. Follow these steps:
Install VS Code Extensions
On your local machine, open VS Code and install:
- Remote – SSH extension
- Python extension
Steps:
- Open VS Code.
- Click on Extensions (Ctrl + Shift + X).
- Search for “Remote – SSH” and install it.
- Search for “Python” and install it.
Connect VS Code to Your GPU Remote Server
Steps to Connect via SSH
- Open VS Code.
- Press Ctrl + Shift + P to open the command palette.
- Type “Remote-SSH: Connect to Host…” and select it.
- Enter your GPU server details:
ssh root@<YOUR_GPU_SERVER_IP>
Example:
ssh root@192.168.1.100
- Enter your password (or use your SSH key if set up).
- Now you are inside your remote GPU server via VS Code!
Create the c4ai_chat.py File in VS Code
Now, in VS Code, inside your remote connection:
- Open the File Explorer in VS Code.
- Navigate to your remote GPU directory.
- Create a new file named c4ai_chat.py.
- Copy and paste the following test code into the c4ai_chat.py file:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Model ID
model_id = "CohereForAI/c4ai-command-a-03-2025"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load model (ensure enough VRAM)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto"
)
# Prepare input prompt
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
# Generate response
output_tokens = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.3)
# Decode output
response = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
print("🤖 AI Response:", response)
Step 19: Run the Script
Now, run the script in the terminal:
python3 c4ai_chat.py
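Optionally, if you would rather see tokens appear as they are generated instead of waiting for the full reply, Transformers offers a TextStreamer you can wire into the same script. A minimal variation of the generate call (reusing the model, tokenizer, and input_ids defined above) might look like this:
from transformers import TextStreamer
# Stream decoded tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
output_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
    streamer=streamer,
)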
Conclusion
C4AI Command A (March 2025) stands out as a powerful multilingual model, ready to tackle complex enterprise tasks, multilingual reasoning, long-form dialogue, and structured tool-based interactions. With 111 billion parameters and support for context lengths up to 256K, it provides exceptional performance in both interactive and structured workflows. This guide covered two practical ways to run the model—via Ollama and Hugging Face Transformers—on GPU-powered Virtual Machines using NodeShift. Whether you’re exploring summarization, translation, coding, or data extraction, Command A delivers reliable performance at scale while remaining accessible for advanced deployment setups.