SmallThinker, fine-tuned from Qwen2.5-3B-Instruct, is a lightweight, efficient model designed for edge deployment and for use as a draft model for the larger QwQ-32B-Preview, where it delivers roughly a 70% speedup in token generation. It excels at generating long chains of reasoning thanks to its training on the QWQ-LONGCOT-500K dataset, in which more than 75% of the samples exceed 8K tokens. Optimized for resource-constrained environments, SmallThinker delivers competitive performance across benchmarks, making it well suited both to standalone applications and as a precursor to more complex models.
Benchmark Table for SmallThinker Model
Model | AIME24 | AMC23 | GAOKAO2024_I | GAOKAO2024_II | MMLU_STEM | AMPS_Hard | math_comp |
---|---|---|---|---|---|---|---|
Qwen2.5-3B-Instruct | 6.67 | 45 | 50 | 35.8 | 59.8 | – | – |
SmallThinker | 16.667 | 57.5 | 64.2 | 57.1 | 68.2 | 70 | 46.8 |
GPT-4o | 9.3 | – | – | – | 64.2 | 57 | 50 |
Model Resource
Hugging Face
Link: https://huggingface.co/PowerInfer/SmallThinker-3B-Preview
Ollama
Link: https://ollama.com/library/smallthinker
Prerequisites for Installing SmallThinker Locally
- VRAM:
- Minimum: 16GB for 8-bit or 4-bit quantization.
- Recommended: 24GB for smoother inference and lightweight fine-tuning.
- Optimal: 48GB+ for full-precision training and inference.
- Disk Space:
- Minimum: 30GB for storing model weights and temporary files.
- Recommended: 100GB for additional datasets and fine-tuning outputs.
- RAM:
- Minimum: 16GB for running inference.
- Recommended: 32GB for smoother execution.
- CPU:
- Minimum: 8 cores.
- Recommended: 16–32 cores for faster data preprocessing and multitasking.
- Storage Type:
- Use SSD for faster read/write speeds, ensuring faster model loading and data handling.
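The VRAM figures above can be sanity-checked with simple arithmetic: the weights of a 3B-parameter model occupy roughly parameters × bytes-per-parameter, before counting activations and KV cache overhead. A quick sketch (the 3e9 parameter count is an approximation of the model's "3B" size):

```python
# Rough VRAM needed just to hold the weights of a ~3B-parameter model
# at different precisions. Activations and KV cache add overhead on top.
PARAMS = 3e9  # approximate parameter count for a "3B" model

def weight_gib(bits_per_param: float) -> float:
    """Approximate weight memory in GiB for a given precision."""
    return PARAMS * bits_per_param / 8 / 2**30

for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: ~{weight_gib(bits):.1f} GiB")
```

This is why 8-bit or 4-bit quantization fits comfortably on a 16GB card, while full-precision training needs far more headroom.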
Performance Summary
- RTX 3090/4090:
- Best for inference with 8-bit quantization.
- Handles inference smoothly for shorter context lengths.
- RTX A6000:
- Handles full-precision inference for context lengths up to 16K tokens.
- A100 40GB/80GB:
- Excels at fine-tuning and long-context inference.
- H100 80GB:
- Ideal for resource-heavy multi-step tasks, long-context inference, and high-throughput fine-tuning.
Multi-GPU Scaling
For faster inference or fine-tuning, consider multi-GPU setups:
- 2x RTX A6000 for efficient fine-tuning.
- 4x A100 40GB for large-scale tasks.
Step-by-Step Process to Install SmallThinker Model Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, and click the Create GPU Node button in the Dashboard to create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use a 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy SmallThinker on an NVIDIA CUDA Virtual Machine. This proprietary, closed-source parallel computing platform will allow you to install SmallThinker on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Ollama
After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Step 9: Serve Ollama
Run the following command to start the Ollama server so models can be accessed and used:
ollama serve
Once it starts, the terminal will confirm: “Ollama is running.”
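Once the server is up, you can also verify it programmatically instead of relying on the terminal message. The sketch below queries Ollama's /api/tags endpoint, which lists locally pulled models (the server listens on port 11434 by default); list_local_models is a hypothetical helper name:

```python
# Quick health check for a local Ollama server (default port 11434).
# Assumes `ollama serve` is running on the same machine.
import json
import urllib.request

OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"  # lists pulled models

def list_local_models(url: str = OLLAMA_TAGS_URL) -> list:
    """Return the names of models Ollama has pulled, or [] if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except OSError:  # connection refused, timeout, etc.
        return []

print(list_local_models())
```

An empty list means either no models have been pulled yet or the server is not reachable.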
Step 10: Select the SmallThinker Model
Link: https://ollama.com/library/smallthinker:3b
The SmallThinker model is available in only one size: 3B. We will run it on our GPU Virtual Machine.
Step 11: Connect with SSH
Now, open a new tab in the terminal and reconnect using SSH.
Step 12: Check Commands
Run the following command to see a list of available commands:
ollama
Step 13: Check Available Models
Run the following command to check which downloaded models are available:
ollama list
Step 14: Pull the SmallThinker Model
Run the following command to pull the SmallThinker model:
ollama pull smallthinker
Step 15: Run the SmallThinker Model
Now, you can run the model in the terminal using the following command and interact with your model:
ollama run smallthinker
Note: This is a step-by-step guide for interacting with your model. It covers the first method for installing SmallThinker locally using Ollama and running it in the terminal.
Option 1: Using Ollama (Terminal)
- Install Ollama: Download and install the Ollama tool from the official site.
- Pull the Model: Run the following command to download the desired model:
ollama pull smallthinker
- Run the Model: Start the model in the terminal:
ollama run smallthinker
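Beyond the interactive terminal, the same model can be queried over Ollama's HTTP API, which is useful for scripting. The sketch below posts to the /api/generate endpoint with streaming disabled; build_payload and ollama_generate are illustrative helper names, and the code assumes the server is running with smallthinker already pulled:

```python
# Scripted alternative to `ollama run`: call the Ollama HTTP API directly.
import json
import urllib.request

def build_payload(prompt: str, model: str = "smallthinker") -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a prompt and return the model's full reply as a string."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        return json.load(resp)["response"]

# With the server running:
# print(ollama_generate("What is the capital of France?"))
```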
Option 2: Using Open WebUI
- Set Up Open WebUI:
Follow our Open WebUI Setup Guide to configure the interface. Ensure all dependencies are installed and the environment is correctly set up.
- Refresh the Interface:
Confirm that the SmallThinker model has been downloaded and is visible in the list of available models on the Open WebUI.
- Select Your Model:
Choose the SmallThinker model from the list. This model is available in a single size.
- Start Interaction:
Begin using the model by entering your queries in the interface.
Option 3: Using Hugging Face and Jupyter Notebook
- Follow our Jupyter Notebook Setup Guide to configure your notebook environment. Ensure that all required dependencies are installed and that your Jupyter Notebook is set up correctly for optimal use.
When choosing an image for your Virtual Machine, select the Jupyter Notebook image. This open-source platform allows you to install and run the SmallThinker model on your GPU Node. Running the model in a Jupyter Notebook lets you avoid the terminal, simplifying the process and reducing setup time: the model can be configured in just a few steps and a few minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
Link: https://huggingface.co/PowerInfer/SmallThinker-3B-Preview
GPU Recommendations
GPU Model | VRAM | Use Case |
---|---|---|
RTX 3090 | 24GB | Suitable for inference with full precision or 8-bit quantization. |
RTX 4090 | 24GB | Excellent for inference and lightweight fine-tuning. |
RTX A6000 | 48GB | Ideal for full-precision tasks and longer context lengths. |
NVIDIA A100 (40GB) | 40GB | Optimal for both inference and training. |
NVIDIA A100 (80GB) | 80GB | Best for heavy fine-tuning or multi-step tasks with longer context windows. |
NVIDIA H100 (80GB) | 80GB | State-of-the-art performance for large-scale fine-tuning. |
System Configuration Summary
After setting up the VM and running your Jupyter Notebook, start installing the SmallThinker model.
1. Install Required Dependencies
Ensure you have Python 3.9+ installed. Install the necessary Python libraries:
pip install torch transformers safetensors deepspeed
2. Verify GPU Availability
Run the following in your Jupyter Notebook to check GPU compatibility:
import torch
print("CUDA available:", torch.cuda.is_available())
print("Device name:", torch.cuda.get_device_name(0))
3. Download the Model
1. Log in to Hugging Face using your token:
huggingface-cli login --token <your_hugging_face_token>
2. Download the SmallThinker model:
huggingface-cli download PowerInfer/SmallThinker-3B-Preview --include "original/*" --local-dir smallthinker-3b
4. Load the Model in Your Jupyter Notebook
Run the following code to load and test the model:
import torch  # required for torch.float16 below
from transformers import AutoTokenizer, AutoModelForCausalLM
# Define model path
model_id = "PowerInfer/SmallThinker-3B-Preview"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load model
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.float16 # Use float16 for GPU optimization
)
# Test input
input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda") # Move to GPU
# Generate response
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
5. Optimize for Memory
If you’re constrained by GPU memory, you can use quantization to reduce memory usage:
1. Install BitsAndBytes for quantization:
pip install bitsandbytes
2. Load the model with quantization:
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True) # Use 4-bit with `load_in_4bit=True`
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.float16,
quantization_config=quantization_config
)
6. Fine-Tuning and Advanced Configuration
If you want to further fine-tune the model using DeepSpeed:
1. Install DeepSpeed:
pip install deepspeed
2. Modify the Training Configuration: Use the SFT details provided in the model card with a deepspeed_config.json file.
3. Run the Fine-Tuning Script: Customize and execute the training script using the specified datasets and training configuration.
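As an illustration, a minimal deepspeed_config.json for mixed-precision ZeRO stage 2 training might look like the following. The values are placeholders, not the model card's actual settings; batch sizes and schedules should follow the SFT details published there:

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 4,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 },
  "gradient_clipping": 1.0
}
```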
7. Adjust Generation Parameters
To improve generation quality, adjust parameters such as repetition_penalty and temperature:
output = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,  # sampling must be enabled for temperature to take effect
    temperature=0.7,  # adjust for creativity
    repetition_penalty=1.2,  # penalize repeated tokens
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
You’re all set to use the SmallThinker model! 🚀
Conclusion
In conclusion, SmallThinker-3B-Preview is a compact and efficient model tailored for edge deployment and draft applications, offering impressive speed and performance. With detailed setup guides and compatibility across platforms like Ollama, Open WebUI, and Jupyter Notebook, it provides flexibility for various use cases. Its support for fine-tuning and memory optimization ensures seamless integration into resource-constrained environments while maintaining high-quality outputs, making it a valuable tool for developers and researchers alike.