The LFM2-1.2B is a next-generation hybrid model developed by Liquid AI, designed specifically for edge AI and on-device deployment. With ~1.2 billion parameters, this model stands out for its speed, memory efficiency, and quality, making it ideal for lightweight applications like agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations.
Model details
Due to their small size, we recommend fine-tuning LFM2 models on narrow use cases to maximize performance. They are particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations. However, we do not recommend using them for tasks that are knowledge-intensive or require programming skills.
| Property | Value |
|---|---|
| Parameters | 1,170,340,608 |
| Layers | 16 (10 conv + 6 attn) |
| Context length | 32,768 tokens |
| Vocabulary size | 65,536 |
| Precision | bfloat16 |
| Training budget | 10 trillion tokens |
| License | LFM Open License v1.0 |
Key highlights:
- 3× faster training vs. prior generation
- 2× faster CPU decoding vs. Qwen3
- Hybrid architecture with multiplicative gates + short convolutions
- Supports CPU, GPU, and even NPU hardware (smartphones, laptops, vehicles)
- Multilingual: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish
It’s optimized to run efficiently even on limited hardware and supports both Transformers-based inference (Hugging Face) and GGUF-format inference (via llama.cpp), meaning you can deploy it flexibly depending on your system.
We successfully ran both versions of the LiquidAI LFM2-1.2B model: one using the GGUF format on the oobabooga Text Generation WebUI for an easy-to-use web interface, and the other directly through Python in a Jupyter Notebook environment provided by NodeShift, leveraging Hugging Face Transformers. This setup allowed us to test the model’s flexibility across interfaces — from web-based interactions to code-driven experimentation — showcasing its smooth deployment on both platforms.
Performance
| Model | MMLU | GPQA | IFEval | IFBench | GSM8K | MGSM | MMMLU |
|---|---|---|---|---|---|---|---|
| LFM2-350M | 43.43 | 27.46 | 65.12 | 16.41 | 30.1 | 29.52 | 37.99 |
| LFM2-700M | 49.9 | 28.48 | 72.23 | 20.56 | 46.4 | 45.36 | 43.28 |
| LFM2-1.2B | 55.23 | 31.47 | 74.89 | 20.7 | 58.3 | 55.04 | 46.73 |
| Qwen3-0.6B | 44.93 | 22.14 | 64.24 | 19.75 | 36.47 | 41.28 | 30.84 |
| Qwen3-1.7B | 59.11 | 27.72 | 73.98 | 21.27 | 51.4 | 66.56 | 46.51 |
| Llama-3.2-1B-Instruct | 46.6 | 28.84 | 52.39 | 16.86 | 35.71 | 29.12 | 38.15 |
| gemma-3-1b-it | 40.08 | 21.07 | 62.9 | 17.72 | 59.59 | 43.6 | 34.43 |
Recommended GPU Configuration for LFM2-1.2B
Minimum GPU:
- 1× RTX A6000 / RTX 3090 / RTX 4090 or better
- GPU VRAM (per GPU): ≥ 24 GB (bfloat16 loads efficiently; GGUF quantized versions can work on 12–16 GB GPUs)
For optimal speed and multi-turn tasks:
- 1× RTX A6000 (48 GB) or H100 (80 GB) for native bfloat16
- A GPU that supports flash_attention_2 can further speed up inference (check compatibility, e.g., A100/H100)
CPU-only (small batch / GGUF):
- 16–32 vCPUs, ≥64 GB RAM (for llama.cpp quantized GGUF version)
Disk & Bandwidth:
- Storage: ~10–15 GB for model + dependencies (Transformers) or ~5–6 GB for GGUF quantized
- Disk speed: ≥1000 MB/s recommended (NVMe SSD)
- Bandwidth: ≥500 Mbps if pulling from remote, or local load for faster startup
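To keep startup fast, you can also pre-download the weights to local disk before launching anything. Below is a minimal sketch using the huggingface_hub library (assuming huggingface_hub is installed; the repo and file names are the same ones used later in this guide):
from huggingface_hub import snapshot_download, hf_hub_download
# Transformers version: fetch the full LiquidAI/LFM2-1.2B repository into the local cache
snapshot_download(repo_id="LiquidAI/LFM2-1.2B")
# GGUF version: fetch only the single F16 GGUF file used with llama.cpp / the WebUI
hf_hub_download(repo_id="LiquidAI/LFM2-1.2B-GGUF", filename="LFM2-1.2B-F16.gguf")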
GPU Configuration Table for LFM2-1.2B-F16.gguf
| GPU Model | VRAM (GB) | Recommended gpu-layers | cache-type | ctx-size |
|---|---|---|---|---|
| RTX 3060 (12GB) | 12 GB | 32–40 | fp16 / q4_0 | 4096–8192 |
| RTX 3070 (8GB) | 8 GB | 24–32 | q4_0 / q5_k_m | 4096–8192 |
| RTX 3080 (10GB) | 10 GB | 32–48 | fp16 | 8192 |
| RTX 3090 (24GB) | 24 GB | 100–160 | fp16 | 8192–16384 |
| RTX 4070 (12GB) | 12 GB | 32–48 | fp16 | 8192 |
| RTX 4080 (16GB) | 16 GB | 64–128 | fp16 | 8192–16384 |
| RTX 4090 (24GB) | 24 GB | 128–200 | fp16 | 16384–32768 |
| RTX A6000 (48GB) | 48 GB | 256–320 | fp16 | 32768–131072 |
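If you would rather drive the GGUF file from Python instead of the WebUI, a minimal sketch using the llama-cpp-python bindings is shown below; the n_gpu_layers and n_ctx arguments correspond to the gpu-layers and ctx-size columns above (the path and values are illustrative, and llama-cpp-python must be built with CUDA support for GPU offload):
from llama_cpp import Llama
# Offload all layers to the GPU (-1) and use an 8K context window
llm = Llama(
    model_path="user_data/models/LFM2-1.2B-F16.gguf",  # illustrative path
    n_gpu_layers=-1,  # corresponds to gpu-layers in the WebUI
    n_ctx=8192,       # corresponds to ctx-size
)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what LFM2-1.2B is designed for in two sentences."}]
)
print(response["choices"][0]["message"]["content"])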
Step-by-Step Process to Install LiquidAI LFM2-1.2B Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option in the Dashboard, click the Create GPU Node button, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1× RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
In our previous blogs, we used pre-built images from the Templates tab when creating a Virtual Machine. However, for running LiquidAI LFM2-1.2B, we need a more customized environment with full CUDA development capabilities. That’s why, in this case, we switched to the Custom Image tab and selected a specific Docker image that meets all runtime and compatibility requirements.
We chose the following image:
nvidia/cuda:12.1.1-devel-ubuntu22.04
This image is essential because it includes:
- Full CUDA toolkit (including nvcc)
- Proper support for building and running GPU-based applications like LiquidAI LFM2-1.2B
- Compatibility with CUDA 12.1.1, required by certain model operations
Launch Mode
We selected:
Interactive shell server
This gives us SSH access and full control over terminal operations — perfect for installing dependencies, running benchmarks, and launching tools like LiquidAI LFM2-1.2B.
Docker Repository Authentication
We left all fields empty here.
Since the Docker image is publicly available on Docker Hub, no login credentials are required.
Identification
nvidia/cuda:12.1.1-devel-ubuntu22.04
CUDA and cuDNN images from gitlab.com/nvidia/cuda. Devel version contains full cuda toolkit with nvcc.
This setup ensures that the LiquidAI LFM2-1.2B runs in a GPU-enabled environment with proper CUDA access and high compute performance.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Check the Available Python Version and Install a New Version
Run the following command to check the available Python version:
python3 --version
By default, the system has Python 3.8.1 installed. To install a higher version of Python, you'll need to use the deadsnakes PPA.
Run the following commands to add the deadsnakes PPA:
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
Step 9: Install Python 3.11
Now, run the following command to install Python 3.11 or another desired version:
sudo apt install -y python3.11 python3.11-venv python3.11-dev
Step 10: Update the Default python3 Version
Now, run the following commands to link the new Python version as the default python3:
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo update-alternatives --config python3
Then, run the following command to verify that the new Python version is active:
python3 --version
Step 11: Install and Update Pip
Run the following commands to install and update pip:
curl -O https://bootstrap.pypa.io/get-pip.py
python3.11 get-pip.py
Then, run the following command to check the version of pip:
pip --version
Step 12: Clone the WebUI Repo
Run the following command to clone the webui repo:
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
Step 13: Run the One-Click Installer Script
Execute the following command to run the one-click installer script:
bash start_linux.sh
- It will automatically detect your GPU, install everything needed (Python, pip, CUDA toolkits, etc.)
- You’ll be prompted to select your GPU backend (choose CUDA / NVIDIA GPU).
- Wait for it to finish setting up the Python environment + dependencies.
Since our VM uses NVIDIA CUDA GPUs (e.g., A100, H100, A6000), choose option A. Just type A and hit Enter.
What Happens Next
Once you select option A, the script will:
- Install torch, vllm, and GPU-specific dependencies
- Prepare the web UI environment
- Prompt you to select or download a model (or you can do that manually)
- Launch the server on http://127.0.0.1:7860
Step 14: SSH Port Forward
On your local machine, run the following command for SSH port forwarding:
ssh -p 40880 -L 7860:127.0.0.1:7860 root@38.29.145.28
Then open: http://localhost:7860 in your browser.
Step 15: Download and Prepare the Model
Go to the Model tab in the WebUI
Open your browser at:
http://127.0.0.1:7860
On the left sidebar, click:
Model → Download
Enter the model name
In the right-hand “Download model or LoRA” section, enter:
LiquidAI/LFM2-1.2B-GGUF
LFM2-1.2B-F16.gguf
Click Download
Press the orange Download button.
You should see:
Model successfully saved to user_data/models/
This confirms the file was saved into:
~/text-generation-webui/user_data/models/LFM2-1.2B-F16.gguf
Select model to load
In the Model dropdown at the top:
Choose:
LFM2-1.2B-F16.gguf
Adjust main options
Set:
- gpu-layers: e.g., 256 (this depends on your GPU; an A6000 has plenty of VRAM)
- ctx-size: e.g., 8192
- cache-type: e.g., fp16
Click Load
Press the Load button.
If everything is set up (CUDA backend built, proper llama.cpp, etc.), the model will load.
Step 16: Test with Prompts
Up to this point, we have successfully set up and tested the GGUF quantized version of the LFM2-1.2B model on the Oobabooga WebUI, allowing us to interact with it through an easy-to-use web interface. Now, we will move on to setting up the standard Transformers version of this model on a Jupyter Notebook running inside a CUDA-enabled virtual machine, following the correct GPU configuration and setup steps to ensure smooth performance.
Step-by-Step Guide to Run LiquidAI LFM2-1.2B Model on Jupyter Notebook
For running the LiquidAI LFM2-1.2B model, we used two setups: the standard Transformers version was deployed on a Jupyter Notebook running inside a CUDA-enabled virtual machine, while the LFM2-1.2B GGUF quantized version was run using the Oobabooga WebUI for a lightweight web-based interface. If you want to set up the Jupyter Notebook yourself, we have a separate detailed guide you can follow — just make sure to choose the right GPU configuration as outlined above to ensure smooth performance and compatibility.
Link: https://nodeshift.com/blog/how-to-set-up-a-jupyter-notebook-server-in-the-cloud-in-5-minutes
Step 1: Install Required Libraries
!pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
!pip install "transformers @ git+https://github.com/huggingface/transformers.git@main"
!pip install git+https://github.com/huggingface/accelerate
!pip install huggingface_hub
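Before loading the model, it can help to run a quick sanity check confirming that the freshly installed libraries import cleanly and that PyTorch can see the GPU:
import torch
import transformers
# Verify the installation and GPU visibility
print("Transformers version:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))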
Step 2: Load the Model and Tokenizer
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "LiquidAI/LFM2-1.2B"

# Load the model in bfloat16 and let Accelerate place it on the available GPU
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="bfloat16",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
Step 3: Run the Prompts
prompt = """
You are a reasoning assistant. Please solve the following question step by step:
Question: If a train travels 120 km in 2 hours and 180 km in the next 3 hours, what is its average speed for the entire journey
"""
input_ids = tokenizer.apply_chat_template(
[{"role": "user", "content": prompt}],
add_generation_prompt=True,
return_tensors="pt",
tokenize=True,
).to(model.device)
output = model.generate(
input_ids,
do_sample=True,
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_new_tokens=512,
)
print(tokenizer.decode(output[0], skip_special_tokens=False))
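Since multi-turn conversation is one of the model’s recommended use cases, here is a minimal sketch of how a follow-up turn could be handled with the same model and tokenizer (the conversation content is illustrative; the calls are standard Transformers APIs):
messages = [{"role": "user", "content": "Give me a two-sentence summary of what RAG is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", tokenize=True
).to(model.device)
output = model.generate(input_ids, do_sample=True, temperature=0.3, min_p=0.15, max_new_tokens=256)

# Keep only the newly generated tokens and store them as the assistant's turn
reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
messages.append({"role": "assistant", "content": reply})

# Append the next user turn and repeat the same template + generate loop
messages.append({"role": "user", "content": "Now give one concrete example use case."})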
Conclusion
In conclusion, the LiquidAI LFM2-1.2B model offers an impressive blend of speed, efficiency, and flexibility, making it a practical choice for lightweight tasks like reasoning, creative writing, and multi-turn conversations across different platforms. By successfully setting it up both on Oobabooga WebUI (using the GGUF quantized version) and on a Jupyter Notebook with CUDA support, we demonstrated its adaptability for both no-code and code-driven environments. With the right GPU configuration and setup, this model runs smoothly and delivers reliable results, showing its strength as a versatile tool for a wide range of use cases.