ERNIE-4.5-21B-A3B is a carefully engineered language model built on a modular Mixture-of-Experts (MoE) structure with expert routing, designed to deliver high-quality responses efficiently. With 21 billion total parameters and only 3 billion activated per input token, it offers resource-friendly yet powerful generation.
It isn't just large; it's efficient by design. It handles long-form content, understands context at scale, and its underlying architecture defines both text and vision experts (this post-trained variant operates on text). Thanks to its long context window (up to 131,072 tokens) and post-training optimizations, it's ready for instruction following, dialogue, reasoning, and more.
Backed by Baidu's ERNIEKit toolkit and deployable efficiently via FastDeploy or vLLM, the model strikes a balance between performance and practical deployment. Whether you're fine-tuning, scaling across GPUs, or serving on high-throughput inference platforms, ERNIE-4.5-21B-A3B offers flexibility and precision out of the box.
Model Overview
ERNIE-4.5-21B-A3B is a post-trained text MoE model with 21B total parameters and 3B parameters activated per token. The model configuration details are as follows:
| Key | Value |
|---|---|
| Modality | Text |
| Training Stage | Post-training |
| Params (Total / Activated) | 21B / 3B |
| Layers | 28 |
| Heads (Q / KV) | 20 / 4 |
| Text Experts (Total / Activated) | 64 / 6 |
| Vision Experts (Total / Activated) | 64 / 6 |
| Shared Experts | 2 |
| Context Length | 131,072 |
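If you want to confirm these values yourself, you can pull just the configuration from the Hub without downloading the full set of weights. The exact field names come from Baidu's custom configuration class, so printing the whole object is the safest check:
from transformers import AutoConfig

# Fetch only the model configuration (no weights are downloaded)
config = AutoConfig.from_pretrained("baidu/ERNIE-4.5-21B-A3B-PT", trust_remote_code=True)
print(config)  # inspect layers, heads, expert counts, and max context length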
Recommended GPU Configuration Table for ERNIE-4.5-21B-A3B
| GPU Model | GPU Memory (GB) | vCPUs | RAM (GB) | Precision | Use Case |
|---|---|---|---|---|---|
| A100 80GB | 80 | 96 | 192 | FP16/BF16 | Multi-GPU fine-tuning, inference |
| H100 80GB | 80 | 128 | 256 | FP8/BF16/INT4 | Optimal parallel inference |
| RTX A6000 | 48 | 48 | 96 | FP16 | Small-scale batch generation |
| 2× A100 40GB | 80 (total) | 96 | 192 | FP16 | MoE training on a two-GPU setup |
| 4× H100 | 320 (total) | 128 | 512 | INT4 / quantized | Scaled FastDeploy deployment |
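As a sanity check on the table above, the raw weights alone need roughly 2 bytes per parameter in BF16/FP16 (all 21B parameters stay resident in memory even though only 3B are activated per token). A quick back-of-the-envelope estimate, which is only a sketch since real usage adds the KV cache, activations, and framework overhead on top:
# Rough VRAM needed just to hold the weights, by precision
total_params = 21e9  # 21B total parameters (all experts are resident in memory)

for precision, nbytes in {"bf16/fp16": 2, "int8": 1, "int4": 0.5}.items():
    gib = total_params * nbytes / 2**30
    print(f"{precision}: ~{gib:.0f} GiB of weights")
This is why a single 80 GB card handles BF16 comfortably, while a 48 GB RTX A6000 is workable but leaves less headroom for the KV cache.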
Resources
Link: https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-PT
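Optionally, once huggingface_hub is installed (see Step 11 below), you can pre-download the repository so the model-loading step later doesn't stall on a slow connection; a minimal sketch:
from huggingface_hub import snapshot_download

# Mirror the full model repo into the local Hugging Face cache (tens of GB)
local_path = snapshot_download("baidu/ERNIE-4.5-21B-A3B-PT")
print("Model files cached at:", local_path)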
Step-by-Step Process to Install ERNIE-4.5-21B-A3B-PT Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button on the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x H100 SXM GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
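For reference, generating an SSH key pair on your local machine typically looks like the command below; this is a generic sketch rather than a NodeShift-specific procedure (the comment string is just a label, so substitute your own):
ssh-keygen -t ed25519 -C "you@example.com"
Paste the contents of the resulting ~/.ssh/id_ed25519.pub file into NodeShift when it asks for your public key.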
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy ERNIE-4.5-21B-A3B-PT on a Jupyter Virtual Machine. This open-source platform lets you install and run the model directly on your GPU node. Running it from a Jupyter Notebook means we avoid the terminal entirely, which simplifies the process and reduces setup time: you can configure the model in just a few steps and a few minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to Jupyter Notebook (on NodeShift Cloud)
Once your GPU VM deployment is successfully created and the status shows as RUNNING, follow these steps to launch your Jupyter Notebook:
- Go to the My GPU Deployments section on your NodeShift Cloud dashboard.
- Locate your running deployment (e.g., ERNIE 4.5).
- On the top-right corner of your deployment card, click the three dots (⋮) menu.
- Select “Open Jupyter Notebook” from the dropdown options.
Your Jupyter environment will launch in a new browser tab, allowing you to start coding immediately using the full power of your GPU.
Step 8: Bypass Browser Security Warning to Access Jupyter Notebook
When you click “Open Jupyter Notebook”, you might see a “Your connection is not private” warning like this:
This happens because the Jupyter Notebook server is using a self-signed SSL certificate, which browsers do not automatically trust.
Here’s how to continue safely:
- Click the Advanced button on the bottom-left of the warning screen.
- Then click Proceed to <IP Address> (unsafe); this will take you to the Jupyter Notebook interface. Example: Proceed to 174.88.114.229 (unsafe).
Don’t worry — this is expected behavior for local and cloud-hosted Jupyter environments. Since you control the VM, it’s safe to proceed.
Step 9: Launch Python 3.10 Notebook in Jupyter Interface
Once you’ve bypassed the browser warning and successfully accessed the Jupyter Notebook, you’ll land on the Launcher screen — just like the one below:
What you see:
- The Notebook section shows available environments (like Python 3, Python 3.10, and Python 3.10 (webui)).
- The Console lets you quickly test code snippets.
- The Other section gives you access to Terminal, Markdown files, and Python files.
To start coding:
- Under Notebook, click the Python 3.10 (webui) option if you want GPU-accelerated, image-capable model interaction (ideal for ERNIE 4.5).
- Alternatively, pick Python 3.10 or Python 3 (ipykernel) for standard LLM workflows.
- A new tab will open where you can begin writing your Python code or paste your LLM setup script.
That’s it — your powerful GPU-backed Jupyter Notebook is now ready to go!
Step 10: Verify Your GPU Allocation with nvidia-smi
Once your Jupyter Notebook is open, the first thing you should do is confirm that your high-performance GPU (like NVIDIA H100) is properly detected and allocated.
Run this command in a code cell:
!nvidia-smi
What you should see:
- The GPU name: NVIDIA H100 80GB HBM3
- The driver version: 560.35.03
- The CUDA version: 12.6
- The GPU temperature and utilization (e.g., 30C and 0% if idle)
- A total memory of ~81 GB (81559MiB), with current usage shown in real time
Step 11: Install Required Python Libraries in Jupyter
Before running ERNIE 4.5 or any other large model, you need to install the essential libraries. In your first notebook cell, run the following:
!pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
!pip install git+https://github.com/huggingface/transformers
!pip install git+https://github.com/huggingface/accelerate
!pip uninstall -y bitsandbytes # Only if you encountered CUDA binary errors
!pip install git+https://github.com/TimDettmers/bitsandbytes.git
!pip install huggingface_hub
!pip install sentencepiece
Breakdown:
- Installs PyTorch with CUDA 12.1-compatible wheels for GPU acceleration.
- Fetches the latest Transformers and Accelerate directly from the Hugging Face GitHub repositories.
- Reinstalls bitsandbytes from source to fix CUDA binary issues.
- Installs huggingface_hub to interact with models and datasets on the Hub.
- Installs sentencepiece for tokenizer support.
Once all packages are installed without errors, you’re ready to load ERNIE 4.5 or any other model for inference.
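A quick sanity check that the key packages import cleanly and which versions you ended up with (the version strings will differ from run to run):
import torch, transformers, accelerate

print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)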
Step 12: Load the Model
Run the following code to load the model:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "baidu/ERNIE-4.5-21B-A3B-PT"

# Load the tokenizer and model (the repo's custom code requires trust_remote_code)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # or torch.float16
    device_map="auto"            # spread layers across available GPUs
)
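If you are on a smaller GPU (e.g., the 48 GB RTX A6000 from the table above), a 4-bit load can cut the weight footprint to roughly a quarter. Treat the following as a hedged sketch: 4-bit loading through bitsandbytes is not guaranteed to work with every custom (trust_remote_code) architecture, so it may require experimentation.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization config (support for this custom architecture is not guaranteed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_4bit = AutoModelForCausalLM.from_pretrained(
    "baidu/ERNIE-4.5-21B-A3B-PT",
    trust_remote_code=True,
    quantization_config=bnb_config,
    device_map="auto",
)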
Step 13: Run the Prompt
Time Loop Awareness Prompt
prompt = """
You're stuck in a time loop where each response you give rewrites the last 10 seconds of reality. You're aware of it—but others aren't. Describe what you do.
"""
# Use chat-style input format
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Tokenize the prompt and move to the model's device
inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate the response
with torch.no_grad():
outputs = model.generate(
inputs.input_ids,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.1
)
# Decode and display
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print("🧠 Model Output:\n", response)
Model Output
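The two prompts that follow reuse exactly this pattern, so you may prefer to wrap it in a small helper once and call it per prompt. A minimal sketch (the generate_response name is ours, not part of any API):
def generate_response(prompt, max_new_tokens=512):
    """Apply the chat template, generate, and return only the new text."""
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
            repetition_penalty=1.1,
        )
    return tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

print(generate_response("Explain expert routing in one paragraph."))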
The Snake-Cactus Philosophy Debate
prompt = """
A snake that recently read Nietzsche and Kant is arguing with a cactus about free will. Summarize their debate.
"""
# Use chat-style input format
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Tokenize the prompt and move to the model's device
inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate the response
with torch.no_grad():
outputs = model.generate(
inputs.input_ids,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.1
)
# Decode and display
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print("🧠 Model Output:\n", response)
Model Output
Mathematical Paradox Simulation
prompt = """
If 2+2 = 5 in an alternate reality, what other math truths must be false?
"""
# Use chat-style input format
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Tokenize the prompt and move to the model's device
inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate the response
with torch.no_grad():
outputs = model.generate(
inputs.input_ids,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.1
)
# Decode and display
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print("🧠 Model Output:\n", response)
Model Output
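If you'd rather watch tokens appear as they are generated instead of waiting for the full completion, the TextStreamer utility from transformers plugs into the same generate call; a sketch:
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [{"role": "user", "content": "Write a haiku about expert routing."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

with torch.no_grad():
    model.generate(**inputs, max_new_tokens=128, streamer=streamer)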
Conclusion
ERNIE-4.5-21B-A3B isn't just another large language model; it's a thoughtfully crafted system optimized for both performance and practical deployment. Whether you're exploring advanced reasoning tasks, generating long-form content, or scaling real-world applications, this model delivers speed, precision, and flexibility. With NodeShift Cloud and Jupyter Notebooks, you can set it up in minutes and start generating rich, nuanced outputs from powerful prompts.
Ready to push boundaries? ERNIE 4.5 is built to take your work to the next level.