Tiny-R1-32B-Preview is a high-performance reasoning model designed for mathematics, coding, and science tasks. Built on DeepSeek-R1-Distill-Qwen-32B, it combines multi-domain fine-tuning with model merging to deliver strong problem-solving abilities.
With a compact 32B-parameter size, this model achieves near-R1 performance on complex calculations, logical reasoning, and structured problem-solving, while outperforming the larger DeepSeek-R1-Distill-Llama-70B in specialized domains.
Key Features:
✅ Optimized for Math, Code, and Science – Fine-tuned to handle step-by-step reasoning and complex queries.
✅ Efficient Model Merging – Uses the Mergekit tool to combine multiple specialized models into a single, high-performance system.
✅ High Accuracy – Demonstrates competitive results on industry benchmarks such as AIME 2024, LiveCodeBench, and GPQA-Diamond.
✅ Compact Yet Powerful – Matches the performance of larger models while maintaining efficiency and scalability.
Designed for research, academic problem-solving, and advanced computational tasks, Tiny-R1-32B-Preview is an ideal choice for users looking for a lightweight yet capable model for structured reasoning and domain-specific applications.
Model Resource
Hugging Face
Link: https://huggingface.co/qihoo360/TinyR1-32B-Preview
1️⃣ Minimum Configuration (For Basic Inference)
🔹 Setup: Single GPU (May require model offloading to CPU)
🔹 GPU: 1x RTX 6000 Ada 48GB / NVIDIA A100 80GB
🔹 VRAM: 48GB+ (Offloading required for lower VRAM)
🔹 CPU: 16 vCPUs (Recommended)
🔹 RAM: 64GB+
🔹 Storage: 200GB+ NVMe SSD
🔹 Framework: PyTorch + Transformers (Accelerate enabled)
🔹 Torch Precision: bfloat16 / float16
🔹 Performance: Moderate speed, with some delay due to offloading (see the offloading sketch below)
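If you are on a single 48 GB card, keep in mind that 32B parameters in bf16 are roughly 64 GB of weights, so part of the model has to live in CPU RAM. The loading code shown in Step 9 below can be extended with explicit memory limits; this is a minimal sketch with illustrative values that you should tune to your own hardware:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "qihoo360/TinyR1-32B-Preview"

# Illustrative caps: keep some GPU headroom and spill the rest to CPU RAM,
# then to disk if both are exhausted. Adjust these numbers to your machine.
max_memory = {0: "44GiB", "cpu": "60GiB"}

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",           # Accelerate fills the GPU first, then offloads to CPU
    max_memory=max_memory,
    offload_folder="offload",    # Disk offload directory, used only if needed
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

Expect noticeably slower generation in this mode, since offloaded layers are copied to the GPU on demand.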
2️⃣ Recommended Configuration (For Fast Inference & Fine-Tuning)
🔹 Setup: Multi-GPU (Optimized for speed)
🔹 GPU: 2x NVIDIA A100 80GB / 2x H100 80GB
🔹 VRAM: 160GB (combined)
🔹 CPU: 32 vCPUs+
🔹 RAM: 128GB+
🔹 Storage: 1TB NVMe SSD
🔹 Parallelization: Fully Sharded Data Parallel (FSDP) / ZeRO Offloading
🔹 Performance: Fast inference, supports light fine-tuning
3️⃣ High-End Configuration (For Full Training & Large Batch Inference)
🔹 Setup: Enterprise / Research-Level Cluster
🔹 GPU: 4x H100 SXM 80GB
🔹 VRAM: 320GB (Combined)
🔹 CPU: 64 vCPUs+
🔹 RAM: 256GB+
🔹 Storage: 2TB+ NVMe SSD
🔹 Parallelization: FSDP, Tensor Parallelism, DeepSpeed
🔹 Performance: Best for large-scale inference, fine-tuning, and batch processing
💡 Key Notes:
✅ For inference, a single RTX 6000 Ada 48GB or A100 80GB can be used with offloading.
✅ For optimal performance, multi-GPU setups with A100s or H100s are recommended.
✅ For fine-tuning and full training, a multi-GPU H100 cluster with high RAM and NVMe storage is ideal.
Step-by-Step Process to Install TinyR1-32B-Preview Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.

Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.


Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button on the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.


We will use 1 x A100X GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.

Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy TinyR1-32B-Preview on a Jupyter Virtual Machine. This open-source platform lets you install and run TinyR1-32B-Preview on your GPU node. Running the model in a Jupyter Notebook avoids the terminal, simplifying the process and reducing setup time, so you can configure the model in just a few steps and a few minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.

After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.

Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.

Step 7: Connect to Jupyter Notebook
Once your GPU VM deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.

After clicking the ‘Connect’ button, you can view the Jupyter Notebook.

Now, open a Python 3 (ipykernel) notebook.

Next, if you want to check the GPU details, run the following command in a Jupyter Notebook cell:
!nvidia-smi

Step 8: Install Dependencies in Jupyter Notebook
Run the following commands in Jupyter Notebook to install dependencies:
!pip install torch torchvision torchaudio
!pip install transformers accelerate
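To confirm that the GPU build of PyTorch was installed, you can run a quick check in a new cell (an optional sanity check, not part of the original steps):

import torch
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

The bf16 weights of a 32B model are on the order of 65 GB, so the first load can take a while. If you prefer to download them ahead of time, you can use huggingface_hub, which is installed alongside transformers; this is optional:

from huggingface_hub import snapshot_download

# Downloads all model files into the local Hugging Face cache
snapshot_download("qihoo360/TinyR1-32B-Preview")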


Step 9: Load the Model
Run the following Python script to load the model:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Define model path
model_name = "qihoo360/TinyR1-32B-Preview"
# Check for GPU availability
device = "cuda" if torch.cuda.is_available() else "cpu"
# Load the model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # Use BF16 for efficiency
    device_map="auto"            # Automatically place layers on the available GPU(s)
)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
print("✅ Model and tokenizer loaded successfully!")

Expected Output:
Model and tokenizer loaded successfully!
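Optionally, you can check how much memory the loaded weights occupy and where the layers were placed; this is a quick sanity check, not part of the original steps:

# Approximate size of the loaded weights
print(f"Model footprint: {model.get_memory_footprint() / 1024**3:.1f} GB")

# Devices assigned by device_map="auto" (e.g. {0} for a single GPU, or GPU ids plus 'cpu' when offloading)
print(set(model.hf_device_map.values()))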

Step 10: Run Inference
Now, test the model with a prompt:
# Define a reasoning-based question
prompt = "Please reason step by step, and put your final answer within \\boxed{}. Solve the integral: \[I = \int \frac{x^2}{(x+1)^3} \,dx\]"
# Format input message
messages = [{"role": "user", "content": prompt}]
# Apply chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
# Convert text into tokens
inputs = tokenizer([text], return_tensors="pt").to(device)
# Generate response
with torch.no_grad():
    generated_ids = model.generate(
        **inputs,
        max_new_tokens=1000  # Adjust based on need
    )
# Decode and print output
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print("📝 Model Output:\n", response)
Optional: Run Gradio Chatbot
You can create a simple chatbot using Gradio. If Gradio is not already available on the image, install it first with !pip install gradio, then run:
import gradio as gr
def chat_with_model(user_input):
    messages = [{"role": "user", "content": user_input}]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(device)
    with torch.no_grad():
        generated_ids = model.generate(**inputs, max_new_tokens=1000)
    return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
# Launch Gradio interface
gr.Interface(
    fn=chat_with_model,
    inputs=gr.Textbox(lines=2, placeholder="Enter your question..."),
    outputs=gr.Textbox(label="Response"),
    title="🧠 TinyR1-32B Reasoning Model Chatbot",
    description="Ask me math, coding, or science questions!",
).launch()
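By default the interface is served inside the VM. If you want to open the chatbot from your own browser, you can replace the final .launch() call above with Gradio's standard launch options, for example:

.launch(server_name="0.0.0.0", share=True)  # share=True creates a temporary public gradio.live link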
Conclusion
Tiny-R1-32B-Preview is a high-performance reasoning model built to handle complex problem-solving in mathematics, coding, and science. With its efficient 32B parameter size, it delivers results that rival larger models while maintaining scalability and ease of deployment. The model’s fine-tuned architecture ensures high accuracy across industry benchmarks, making it an excellent choice for research, academic studies, and computational analysis.
This guide provided a step-by-step approach to installing and running Tiny-R1-32B-Preview on a GPU-powered virtual machine using NodeShift. Whether you need the model for inference, fine-tuning, or full-scale training, the recommended GPU configurations ensure optimal performance across different workloads.
By following this installation process, users can seamlessly integrate Tiny-R1-32B-Preview into their workflows, enabling structured reasoning and domain-specific applications with precision and efficiency.