OpenThinker-7B and OpenThinker-32B are advanced reasoning models fine-tuned on the OpenThoughts-114k dataset, designed to improve logical problem-solving, structured reasoning, and mathematical proficiency. Built upon Qwen2.5-7B and Qwen2.5-32B, these models leverage optimized training techniques and supervised fine-tuning on curated reasoning traces to achieve high accuracy in complex evaluations.
OpenThinker-7B offers a balance between efficiency and performance, making it suitable for research, academic problem-solving, and structured reasoning tasks. On the other hand, OpenThinker-32B provides higher precision and deeper contextual understanding, optimized for long-form reasoning, theorem proving, and knowledge-based inference.
Both models are fully open-source, with transparent weights, datasets, training methodologies, and evaluation tools, ensuring accessibility for researchers and developers to further enhance and adapt them to specific applications. Released under the Apache 2.0 License, OpenThinker models offer a scalable and reproducible solution for advanced computational reasoning.
| Model | AIME24 | MATH500 | GPQA-Diamond | LCBv2 Easy | LCBv2 Medium | LCBv2 Hard | LCBv2 All |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OpenThinker-7B | 31.3 | 83.0 | 42.4 | 75.3 | 28.6 | 6.5 | 39.9 |
| Bespoke-Stratos-7B | 22.7 | 79.6 | 38.9 | 71.4 | 25.2 | 0.8 | 35.8 |
| DeepSeek-R1-Distill-Qwen-7B | 60.0 | 88.2 | 46.9 | 79.7 | 45.1 | 14.6 | 50.1 |
| gpt-4o-0513 | 8.7 | 75.8 | 46.5 | 87.4 | 42.7 | 8.9 | 50.5 |
| o1-mini | 64.0 | 85.6 | 60.0 | 92.8 | 74.7 | 39.8 | 72.8 |
| Model | Open Weights | Open Data | Open Code |
| --- | --- | --- | --- |
| OpenThinker-7B | ✅ | ✅ | ✅ |
| Bespoke-Stratos-7B | ✅ | ✅ | ✅ |
| DeepSeek-R1-Distill-Qwen-7B | ✅ | ❌ | ❌ |
| gpt-4o-0513 | ❌ | ❌ | ❌ |
| o1-mini | ❌ | ❌ | ❌ |
Model Resources
Hugging Face
Link: https://huggingface.co/open-thoughts/OpenThinker-7B
Link: https://huggingface.co/open-thoughts/OpenThinker-32B
Ollama
Link: https://ollama.com/library/openthinker
Prerequisites for Running OpenThinker-7B & OpenThinker-32B Locally
VRAM:
- Minimum: 16GB for 8-bit or 4-bit quantization (for OpenThinker-7B).
- Recommended:
- 24GB+ for smooth inference and fine-tuning (for OpenThinker-7B).
- 48GB+ for OpenThinker-32B full-precision training & inference.
- Optimal: 80GB+ H100 for multi-step long-context tasks and fine-tuning.
Disk Space:
- Minimum: 50GB for model weights and temporary storage.
- Recommended: 200GB+ for datasets, logs, and fine-tuning outputs.
RAM:
- Minimum: 16GB for running inference.
- Recommended: 64GB for fine-tuning & larger context processing.
CPU:
- Minimum: 8 cores for inference.
- Recommended: 16–32 cores for faster preprocessing and fine-tuning.
Storage Type:
- Use SSD (NVMe preferred) for fast model loading and read/write speeds.
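The VRAM figures above follow from simple arithmetic: weight memory is roughly parameter count × bytes per parameter, plus headroom for activations and the KV cache. A rough back-of-envelope sketch (the 20% overhead factor here is an illustrative assumption, not a measured value):

```python
def weight_memory_gb(params_billion: float, bits: int, overhead: float = 0.2) -> float:
    """Approximate GPU memory (GB) needed to hold the model weights.

    params_billion: parameter count in billions (7 for OpenThinker-7B).
    bits: stored precision (16 = fp16, 8 or 4 = quantized via bitsandbytes).
    overhead: extra fraction for activations / KV cache (illustrative guess).
    """
    bytes_per_param = bits / 8
    return params_billion * bytes_per_param * (1 + overhead)

for name, params in [("OpenThinker-7B", 7), ("OpenThinker-32B", 32)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{weight_memory_gb(params, bits):.1f} GB")
```

This is why a 24GB card comfortably serves the 7B model in fp16 (~14 GB of weights) while the 32B model (~64 GB in fp16) needs an 80GB A100/H100 or quantization.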
Performance Summary
For OpenThinker-7B
✅ RTX 3090 / 4090 / A6000 → Best for inference with 8-bit quantization.
✅ A100 40GB / 80GB → Handles long-context inference up to 16K tokens.
✅ H100 80GB → Ideal for resource-heavy fine-tuning & multi-step workflows.
For OpenThinker-32B
✅ A100 80GB / H100 80GB+ → Required for full-precision fine-tuning & inference.
✅ Multiple GPUs (2–4x A100 80GB) → Needed for efficient large-scale training.
Step-by-Step Process to Install OpenThinker 7B and 32B Model Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side. Select the GPU Nodes option, create a GPU Node in the Dashboard, click the Create GPU Node button, and create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy OpenThinker 7B and 32B on an NVIDIA Cuda Virtual Machine. This proprietary, closed-source parallel computing platform will allow you to install OpenThinker 7B and 32B on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Ollama
After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Step 9: Serve Ollama
Run the following command to start the Ollama server so the models can be accessed:
ollama serve
Now, “Ollama is running.”
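Once the server is up, Ollama also exposes an HTTP API on localhost:11434. As a sketch, a request to its /api/generate endpoint can be built with the Python standard library alone (the model tag and prompt below are placeholders; sending the request requires a running Ollama server):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str,
                           host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,      # e.g. "openthinker:7b"
        "prompt": prompt,
        "stream": False,     # return one JSON object instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("openthinker:7b",
                             "What is the sum of angles in a triangle?")
# response = json.load(urllib.request.urlopen(req))  # needs a running server
```

This is handy when you want to script the model from another process instead of typing into the interactive `ollama run` prompt.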
Step 10: Select OpenThinker Model
Link: https://ollama.com/library/openthinker
The OpenThinker model is available in two sizes: 7B and 32B. We will run both on our GPU virtual machine.
Step 11: Connect with SSH
Now, open a new tab in the terminal and reconnect using SSH.
Step 12: Check Commands
Run the following command to see a list of available Ollama commands:
ollama --help
Step 13: Check Available Models
Run the following command to check which downloaded models are available:
ollama list
Step 14: Pull OpenThinker 7B and 32B Model
Run the following commands to pull the OpenThinker 7B and 32B models:
ollama pull openthinker:7b
ollama pull openthinker:32b
Step 15: Run OpenThinker 7B and 32B Model
Now, you can run the model in the terminal using the following commands and interact with your model:
ollama run openthinker:7b
ollama run openthinker:32b
Note: This is a step-by-step guide for interacting with your model. It covers the first method for installing OpenThinker 7B and 32B locally using Ollama and running it in the terminal.
Option 1: Using Ollama (Terminal)
- Install Ollama: Download and install the Ollama tool from the official site.
- Pull the Model: Run the following command to download the desired model:
ollama pull openthinker:7b
ollama pull openthinker:32b
- Run the Model: Start the model in the terminal:
ollama run openthinker:7b
ollama run openthinker:32b
Option 2: Using Open WebUI
- Set Up Open WebUI:
Follow our Open WebUI Setup Guide to configure the interface. Ensure all dependencies are installed and the environment is correctly set up.
- Refresh the Interface:
Confirm that the OpenThinker 7B or 32B model has been downloaded and is visible in the list of available models on the Open WebUI.
- Select Your Model:
Choose the OpenThinker 7B or 32B model from the list; it is available in two sizes.
- Start Interaction:
Begin using the model by entering your queries in the interface.
Option 3: Using Hugging Face and Jupyter Notebook
- Follow our Jupyter Notebook Setup Guide to configure your notebook environment. Ensure that all required dependencies are installed and that your Jupyter Notebook is set up correctly for optimal use.
When choosing an image for your Virtual Machine, select the Jupyter Notebook image. This open-source platform allows you to install and run the OpenThinker 7B and 32B on your GPU node. By running this model on a Jupyter Notebook, you can avoid using the terminal, simplifying the process and reducing setup time. This approach enables you to configure the model in just a few steps and minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
Link: https://huggingface.co/open-thoughts/OpenThinker-7B
Link: https://huggingface.co/open-thoughts/OpenThinker-32B
Step 1: Open Jupyter Notebook
- Start your Jupyter Notebook from your GPU VM.
- Open a new Python notebook.
Step 2: Install Required Dependencies
Run the following command in a Jupyter Notebook cell to install all necessary dependencies:
!pip install torch transformers safetensors accelerate huggingface_hub bitsandbytes
This installs:
- torch → For deep learning computations.
- transformers → To load and run OpenThinker models.
- safetensors → For optimized model loading.
- accelerate → For faster inference on GPUs.
- huggingface_hub → To download models.
- bitsandbytes → For 4-bit and 8-bit quantization.
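To confirm the install worked, a quick standard-library check can report which of these packages are visible in the environment (the package names match those passed to pip above):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str) -> str:
    """Return the installed version of a package, or 'not installed'."""
    try:
        return version(package)
    except PackageNotFoundError:
        return "not installed"

for pkg in ["torch", "transformers", "safetensors",
            "accelerate", "huggingface_hub", "bitsandbytes"]:
    print(f"{pkg}: {installed_version(pkg)}")
```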
Step 3: Import Required Libraries
After installing, import the necessary libraries:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
Step 4: Load the Model and Tokenizer
For OpenThinker-7B
model_name = "open-thoughts/OpenThinker-7B"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load model with optimized settings for GPU
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # Use mixed precision for performance
    device_map="auto"           # Automatically map model to available GPU
)
For OpenThinker-32B
model_name = "open-thoughts/OpenThinker-32B"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load model with optimized settings for GPU
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # Use mixed precision for performance
    device_map="auto"           # Automatically map model to available GPU
)
If you are running on a GPU with less than 24GB of VRAM, consider loading the model with 8-bit or 4-bit quantization using bitsandbytes:
from transformers import BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit; use load_in_4bit=True for 4-bit
    device_map="auto"
)
Step 5: Run Inference Using the Model
Now, you can generate text using the model. Run the following:
# Define the input prompt
prompt = "Solve the math problem: What is the sum of angles in a triangle?"
# Tokenize input
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# Generate output
output = model.generate(**inputs, max_new_tokens=100)
# Decode and print the response
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
Step 6: Using the Model with a Pipeline
For a simpler interface, use a pipeline:
# Load model pipeline (the model is already placed on GPU by device_map="auto",
# so no device argument is needed here)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
# Generate response
response = generator(prompt, max_new_tokens=100)
print(response[0]['generated_text'])
Step 7: Save and Export the Results
To save the model output in a text file, run:
with open("openthinker_output.txt", "w") as file:
    file.write(response[0]['generated_text'])
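For longer experiments it can help to keep a timestamped, machine-readable log alongside the plain text file. A minimal standard-library sketch (the filename and record fields are illustrative choices, not part of any OpenThinker tooling):

```python
import json
from datetime import datetime, timezone

def log_generation(path: str, prompt: str, completion: str) -> dict:
    """Append one prompt/completion record to a JSON Lines log file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "completion": completion,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_generation(
    "openthinker_log.jsonl",
    "What is the sum of angles in a triangle?",
    "The sum of the interior angles of a triangle is 180 degrees.",
)
```

JSON Lines keeps each run appendable and easy to load back into pandas or plain Python for later comparison of prompts.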
Step 8 (Optional): Fine-tuning or Further Training
If you want to fine-tune OpenThinker-7B/32B on your dataset, you need PEFT (Parameter Efficient Fine-Tuning) or LoRA. Run:
!pip install peft
from peft import get_peft_model, LoraConfig, TaskType
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the low-rank update matrices
    lora_alpha=32,    # scaling factor applied to the update
    lora_dropout=0.1  # dropout on the LoRA layers during training
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()
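LoRA's appeal is how few parameters it trains: each adapted weight matrix W of shape (d_out, d_in) is supplemented by low-rank factors of shapes (d_out, r) and (r, d_in), so only r × (d_in + d_out) new values are learned per matrix. A quick sketch of that arithmetic (the 4096 hidden size is a hypothetical example, not read from the real model config):

```python
def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds to one d_out x d_in weight matrix."""
    return r * (d_in + d_out)   # A: r x d_in, B: d_out x r

# Example: a hypothetical 4096 x 4096 attention projection with r=8
full = 4096 * 4096
lora = lora_trainable_params(4096, 4096, 8)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.4%}")
```

This is why LoRA fine-tuning of a 7B or 32B model fits on hardware that could never hold full-parameter optimizer states.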
You’re all set to use OpenThinker-7B or OpenThinker-32B on Jupyter Notebook!
Now, test different prompts, experiment with fine-tuning, or integrate the model into your research workflow.
Conclusion
OpenThinker-7B and OpenThinker-32B offer a powerful solution for structured reasoning, logical problem-solving, and advanced computations. Built on an open framework with full transparency, these models enable researchers and developers to push the boundaries of complex analysis and structured tasks. With scalable deployment options across different hardware configurations, OpenThinker models provide flexibility for real-world applications. Whether used for academic research, computational modeling, or long-form reasoning, these models serve as a reliable foundation for building advanced solutions. By making their resources open and accessible, OpenThinker-7B and OpenThinker-32B continue to foster innovation, knowledge sharing, and practical advancements in structured problem-solving.