QwQ-32B is a cutting-edge reasoning model designed to handle complex tasks with structured thinking and logical accuracy. Built with 32.5 billion parameters, it features a 131K token context length, making it ideal for solving intricate problems, multi-step reasoning, and precise text generation. With an advanced transformer-based architecture incorporating RoPE, SwiGLU, RMSNorm, and Attention QKV bias, QwQ-32B delivers superior performance in mathematical reasoning, structured responses, and multi-turn conversations. Optimized for real-world applications, it outperforms conventional instruction-tuned models, making it a powerful tool for research, enterprise solutions, and high-precision tasks.
Model Resource
Hugging Face
Link: https://huggingface.co/Qwen/QwQ-32B
Ollama
Link: https://ollama.com/library/qwq:32b
Prerequisites for Installing Qwen QWQ 32B Model Locally
Ensure you have the following setup before running the model:
- Ubuntu 22.04+ or Debian-based OS (for GPU VM)
- Python 3.10+
- NVIDIA GPU: A100 80GB, H100 80GB, or RTX A6000 (this tutorial uses 1x RTX A6000)
- Disk Space: At least 100 GB free
- RAM: At least 100 GB
- CPU: 48 cores
- CUDA 12.1+
- PyTorch 2.1+
- Transformers 4.40.1+
- Jupyter Notebook installed and running
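To quickly check that your VM meets these requirements, you can run a few standard commands in a terminal (or prefix them with `!` in a Jupyter cell):
nvidia-smi          # GPU model, driver, and CUDA version
python3 --version   # Should report Python 3.10 or newer
free -g             # Total and available RAM in GB
df -h /             # Free disk space
nproc               # Number of CPU cores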
Step-by-Step Process to Install Qwen QWQ 32B using Jupyter Notebook and Transformers
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side. Select the GPU Nodes option, create a GPU Node in the Dashboard, click the Create GPU Node button, and create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy the Qwen QWQ 32B Model on a Jupyter Virtual Machine. This open-source platform will allow you to install and run the Qwen QWQ 32B Model on your GPU node. By running the model in a Jupyter Notebook, we avoid using the terminal, which simplifies the process and reduces setup time. This allows you to configure the model in just a few steps and a few minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to Jupyter Notebook
Once your GPU VM deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ Button in the top right corner.
After clicking the ‘Connect’ button, you can view the Jupyter Notebook.
Now open a Python 3 (ipykernel) notebook.
Next, if you want to check the GPU details, run the following command in a Jupyter Notebook cell:
!nvidia-smi
Step 8: Install Dependencies in Jupyter Notebook
Run the following commands in Jupyter Notebook to install dependencies:
!pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
!pip install transformers accelerate
!pip install bitsandbytes
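Once the installs finish, you can optionally confirm the library versions meet the prerequisites (PyTorch 2.1+ and Transformers 4.40.1+):
import torch, transformers
print("PyTorch:", torch.__version__)
print("Transformers:", transformers.__version__)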
Step 9: Check GPU Availability
Run the following commands in Jupyter Notebook to ensure that CUDA and GPU drivers are properly installed:
import torch
print("CUDA Available:", torch.cuda.is_available())
print("Number of GPUs:", torch.cuda.device_count())
print("GPU Name:", torch.cuda.get_device_name(0))
Step 10: Load the Model
Run the following Python script inside Jupyter Notebook to load the model:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Set model path
model_name = "Qwen/QwQ-32B"

# Load model with automatic device allocation
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # Use bfloat16 for better memory efficiency
    device_map="auto"            # Auto-assigns to available GPUs
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

print("Model and tokenizer loaded successfully!")
Step 11: Try Different Prompts
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "Qwen/QwQ-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # Use BF16 for better efficiency
    device_map="auto"            # Auto-assigns GPU or CPU
).eval()

def generate_response(prompt):
    messages = [{"role": "user", "content": prompt}]
    # Format input using the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    # Tokenize input and send to the correct device
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    # Generate response
    with torch.no_grad():
        generated_ids = model.generate(**inputs, max_new_tokens=128)
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    return response

# Test the function
prompt = "If you double a number and then subtract 6, the result is 14. What was the original number?"
response = generate_response(prompt)
print(response)
Now, you can try more reasoning-based prompts:
print(generate_response("What comes next in the sequence: 3, 9, 27, 81, ?"))
print(generate_response("If a car travels 180 km in 3 hours, what is its average speed?"))
print(generate_response("Which number is larger: 2^10 or 10^2? Explain your answer."))
You’re All Set!
✅ Model is loaded
✅ Tokenizer is initialized
✅ Inference function is defined
✅ You can now chat with QwQ-32B.
Step 12: Install Gradio
Ensure the chatbot dependencies are installed. If you haven't installed Gradio and Transformers yet, run:
!pip install torch transformers gradio
Step 13: Run the Gradio Chatbot
Run the following Python script to start the chatbot:
import torch
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "Qwen/QwQ-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # Use BF16 for better efficiency
    device_map="auto"            # Auto-assigns GPU or CPU
).eval()

# Define the function to generate responses
def chat_with_qwq(message, history):
    messages = [{"role": "user", "content": message}]
    # Format input using the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    # Tokenize input and send to the correct device
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    # Generate response
    with torch.no_grad():
        generated_ids = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens so the reply doesn't echo the prompt
    new_tokens = generated_ids[:, inputs.input_ids.shape[1]:]
    response = tokenizer.batch_decode(new_tokens, skip_special_tokens=True)[0]
    # ChatInterface keeps track of the chat history, so just return the reply
    return response

# Create the Gradio interface
chatbot = gr.ChatInterface(
    fn=chat_with_qwq,
    title="QwQ-32B Chatbot",
    description="Chat with the QwQ-32B reasoning model.",
    theme="soft"  # "compact" is an older Gradio theme name; "soft" is a current built-in
)

# Launch the chatbot
chatbot.launch(share=True)  # Use `share=True` to get a public link
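If you don't need a public share link, `launch()` also accepts `server_name` and `server_port` arguments, so you can bind the app to the VM's network interfaces and reach it over the VM's IP or an SSH tunnel instead. This is optional and not required for this tutorial:
# Alternative: serve on the VM only, without a public Gradio link
chatbot.launch(server_name="0.0.0.0", server_port=7860, share=False)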
How This Works
- The script loads the QwQ-32B model and tokenizer.
- Uses Gradio's ChatInterface to create an interactive chatbot.
- Generates responses with the model, while ChatInterface keeps track of the chat history.
- Runs a Gradio Web UI where you can interact with the model.
Expected Output
After running this script, it will output a Gradio link, like:
Running on local URL: http://127.0.0.1:7860
Running on public URL: https://xyz.gradio.app
You can click the public URL to chat with the model!
Step 14: Access Chatbot
Access the Chatbot on:
Running on local URL: http://127.0.0.1:7860
Running on public URL: https://xyz.gradio.app
You can also access the chatbot directly from the Jupyter Notebook output.
Note: This is a step-by-step guide for interacting with your model. It covers the first method for installing Qwen QWQ 32B locally using Jupyter Notebook and Transformers.
Option 2: Step-by-Step Process to Install Qwen QWQ 32B using Ollama
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side. Select the GPU Nodes option, create a GPU Node in the Dashboard, click the Create GPU Node button, and create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Qwen QWQ 32B on an NVIDIA CUDA Virtual Machine. This image comes with NVIDIA's proprietary CUDA parallel computing platform, which is what you need to install and run Qwen QWQ 32B on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Ollama
After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Step 9: Serve Ollama
Run the following command to start the Ollama server so that models can be served and accessed:
ollama serve
You should now see confirmation that “Ollama is running.”
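To confirm the server is reachable, you can query Ollama's default endpoint (port 11434) from a second terminal; it replies with a short status message:
curl http://localhost:11434
# Expected reply: Ollama is running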
Step 10: Select Qwen QWQ 32B Model
Link: https://ollama.com/library/qwq:32b
The Qwen QWQ model is available in only one size: 32B. We will run it on our GPU Virtual Machine.
Step 11: Connect with SSH
Now, open a new tab in the terminal and reconnect using SSH.
Step 12: Check Commands
Run the following command to see a list of available commands:
ollama
Step 13: Check Available Models
Run the following command to check which downloaded models are available:
ollama list
Step 14: Pull Qwen QWQ 32B Model
Run the following command to pull the Qwen QWQ 32B model:
ollama pull qwq:32b
Step 15: Run Qwen QWQ 32B Model
Now, you can run the model in the terminal using the following command and interact with your model:
ollama run qwq:32b
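Besides the interactive terminal session, Ollama also exposes a local REST API on port 11434, so you can send prompts programmatically. A minimal example using the /api/generate endpoint from the same VM:
curl http://localhost:11434/api/generate -d '{
  "model": "qwq:32b",
  "prompt": "Which number is larger: 2^10 or 10^2?",
  "stream": false
}'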
Note: This is a step-by-step guide for interacting with your model. It covers the second method for installing Qwen QWQ 32B locally using Ollama and running it in the terminal.
Option 3: Using Open WebUI
- Set Up Open WebUI:
Follow our Open WebUI Setup Guide to configure the interface. Ensure all dependencies are installed and the environment is correctly set up.
- Refresh the Interface:
Confirm that the Qwen QWQ 32B model has been downloaded and is visible in the list of available models on the Open WebUI.
- Select Your Model:
Choose the Qwen QWQ 32B model from the list. This model is available in a single size.
- Start Interaction:
Begin using the model by entering your queries in the interface. Qwen QWQ 32B is designed for high-quality instruction-based interactions, so enter clear and detailed queries for the best results.
Conclusion
QwQ-32B is a powerful reasoning model designed to handle complex tasks with structured thinking and logical precision. With 32.5 billion parameters and a 131K token context length, it excels in multi-step reasoning, problem-solving, and structured text generation. Its transformer-based architecture, featuring RoPE, SwiGLU, RMSNorm, and Attention QKV bias, ensures high accuracy and efficiency across a variety of use cases.
By following this step-by-step guide, users can successfully install, run, and interact with QwQ-32B on NodeShift Cloud, Ollama, Open WebUI, or Jupyter Notebook, making it accessible for research, enterprise applications, and high-precision tasks. Whether for mathematical reasoning, structured responses, or general problem-solving, QwQ-32B offers a reliable and scalable solution for advanced language processing.