WebThinker-QwQ-32B is a large-scale reasoning model designed to mimic human research processes. With 32 billion parameters, it autonomously navigates the web, clicking links and interacting with pages to gather information. It can draft research reports while exploring, integrating real-time knowledge acquisition with writing. Trained using reinforcement learning techniques, it optimizes its performance through iterative feedback loops, making it ideal for complex problem-solving and open-ended tasks requiring external research.
WebThinker-QwQ-32B — GPU Configuration Table
| GPU Model | vCPUs | RAM (GB) | VRAM (GB) | Use Case | Recommended For |
|---|---|---|---|---|---|
| NVIDIA H100 SXM | 224 | 1024 | 80 | Full precision (fp16/bf16), high-throughput reasoning | ✅ Best performance, research, RL-based tasks |
| NVIDIA A100 80GB | 192 | 512 | 80 | Full context length + RL reward modeling | ✅ Efficient large-scale inference |
| NVIDIA A100 40GB | 96 | 256 | 40 | 4-bit or 8-bit quantized inference only | ⚠️ Needs quantization (e.g., bitsandbytes) |
| RTX 6000 Ada 48GB | 64 | 128 | 48 | 4-bit inference with small batch size | 🚫 Not recommended for full precision |
| RTX A6000 (48GB) | 48 | 96 | 48 | Low-batch 4-bit inference | ❌ Only for experimentation |
| Dual A100 (40GB + 40GB) | 96 | 384 | 80 (total) | Multi-GPU split via a gpu-split parameter | ⚠️ Setup complexity; not plug-and-play |
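To see why the table splits GPUs into full-precision and quantized tiers, a rough back-of-the-envelope VRAM estimate for a 32B-parameter model can be sketched in Python. This counts weights only and ignores KV cache, activations, and framework overhead, which add a significant margin in practice:

```python
def weights_vram_gb(num_params: float, bits_per_param: int) -> float:
    """Rough VRAM needed for the model weights alone, in GB.

    Ignores KV cache, activations, and runtime overhead, so real
    requirements are noticeably higher than this lower bound.
    """
    bytes_total = num_params * bits_per_param / 8
    return bytes_total / 1024**3

PARAMS_32B = 32e9

fp16_gb = weights_vram_gb(PARAMS_32B, 16)   # full-precision tier
int4_gb = weights_vram_gb(PARAMS_32B, 4)    # 4-bit quantized tier

print(f"fp16 weights: ~{fp16_gb:.0f} GB")   # ~60 GB: needs an 80 GB-class GPU
print(f"4-bit weights: ~{int4_gb:.0f} GB")  # ~15 GB: fits 40-48 GB GPUs
```

This is why fp16 inference is only recommended on 80 GB cards (H100, A100 80GB), while the 40-48 GB cards in the table are limited to quantized inference.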
Step-by-Step Process to Install WebThinker-QwQ-32B Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, and click the Create GPU Node button on the Dashboard to configure and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x H100 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy WebThinker-QwQ-32B on an NVIDIA CUDA Virtual Machine. This image ships with NVIDIA's CUDA parallel computing platform preinstalled, which is required to run WebThinker-QwQ-32B on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Check the Available Python Version and Install a Newer Version
Run the following command to check which Python version the system provides:
python3 --version
By default, the system ships with Python 3.8.1. To install a newer version of Python, you'll need to use the deadsnakes PPA.
Run the following commands to add the deadsnakes PPA:
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
Step 9: Install Python 3.11
Now, run the following command to install Python 3.11 or another desired version:
sudo apt install -y python3.11 python3.11-venv python3.11-dev
Step 10: Update the Default Python3 Version
Now, run the following commands to register the new Python version and make it the default python3:
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo update-alternatives --config python3
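The two `--install` commands register both interpreters with different priorities; in automatic mode, `update-alternatives` selects the alternative with the highest priority, which is why Python 3.11 (priority 2) becomes the default over Python 3.8 (priority 1). A minimal sketch of that selection rule:

```python
def auto_alternative(candidates: dict) -> str:
    """Return the path with the highest priority, mimicking
    update-alternatives' automatic mode."""
    return max(candidates, key=candidates.get)

# Priorities as registered by the two commands above
registered = {
    "/usr/bin/python3.8": 1,
    "/usr/bin/python3.11": 2,
}
print(auto_alternative(registered))  # /usr/bin/python3.11
```

The interactive `--config` step lets you override this automatic choice manually if you ever need to switch back.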
Then, run the following command to verify that the new Python version is active:
python3 --version
Step 11: Install and Update Pip
Run the following commands to install and update pip:
curl -O https://bootstrap.pypa.io/get-pip.py
python3.11 get-pip.py
Then, run the following command to check the version of pip:
pip --version
Step 12: Clone the WebUI Repo
Run the following command to clone the webui repo:
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
Step 13: Run the One-Click Installer Script
Execute the following command to run the one-click installer script:
bash start_linux.sh
- It will automatically detect your GPU and install everything needed (Python, pip, CUDA toolkits, etc.)
- You'll be prompted to select your GPU backend (choose CUDA / NVIDIA GPU).
- Wait for it to finish setting up the Python environment and dependencies.
Since our VM uses NVIDIA CUDA GPUs (e.g., A100, H100, A6000), choose option A: just type A and hit Enter.
What Happens Next
Once you select option A, the script will:
- Install torch, vllm, and other GPU-specific dependencies
- Prepare the web UI environment
- Prompt you to select or download a model (or you can do that manually)
- Launch the server on http://127.0.0.1:7860
Step 14: SSH Port Forward
On your local machine, run the following command to forward the web UI port over SSH:
ssh -L 7860:127.0.0.1:7860 root@<your_vm_ip>
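The `-L 7860:127.0.0.1:7860` flag maps a local port to a port on the remote VM, in the form local_port:remote_host:remote_port. As a hypothetical illustration (the `build_tunnel_cmd` helper below is not part of any tool, just a sketch), the command can be assembled like this:

```python
def build_tunnel_cmd(local_port: int, remote_port: int,
                     user: str, host: str,
                     remote_host: str = "127.0.0.1") -> str:
    """Assemble an ssh local port-forward command: traffic to
    localhost:local_port is tunneled to remote_host:remote_port
    as seen from inside the VM."""
    return (f"ssh -L {local_port}:{remote_host}:{remote_port} "
            f"{user}@{host}")

# 203.0.113.10 is a placeholder documentation IP; use your VM's IP
print(build_tunnel_cmd(7860, 7860, "root", "203.0.113.10"))
# ssh -L 7860:127.0.0.1:7860 root@203.0.113.10
```

Because the Gradio server binds to 127.0.0.1 on the VM, this tunnel is what makes it reachable from your own browser.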
Then open: http://localhost:7860 in your browser.
Step 15: Download the Model
Run the following command to download the model:
python3 download-model.py lixiaoxi45/WebThinker-QwQ-32B
Step 16: Go Back to the Web UI in Your Browser
- Go to the “Model” tab
- Find
lixiaoxi45/WebThinker-QwQ-32B
- Click Load
Step 17: Test with Prompts
Chain-of-Thought Reasoning
You are a reasoning assistant. Please solve the following question step by step:
Question: If a train travels 120 km in 2 hours and 180 km in the next 3 hours, what is its average speed for the entire journey?
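For reference, the answer the model should reason its way to can be checked in a few lines of Python:

```python
# Average speed = total distance / total time,
# not the mean of the two segment speeds
distances_km = [120, 180]
times_h = [2, 3]

avg_speed = sum(distances_km) / sum(times_h)
print(avg_speed)  # 60.0 (km/h)
```

A good chain-of-thought response should sum the distances (300 km) and the times (5 h) before dividing, rather than averaging 60 km/h and 60 km/h by coincidence of this example.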
Tabular Reasoning
Here is a table:
| Product | Price | Quantity |
|---------|-------|----------|
| Pen | 5 | 10 |
| Book | 20 | 4 |
| Bag | 100 | 2 |
Calculate the total cost and sort the items by their individual total value in descending order.
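The ground-truth answer for this prompt is easy to compute, which makes it a handy sanity check of the model's tabular reasoning:

```python
# Price and quantity for each product from the prompt's table
products = {"Pen": (5, 10), "Book": (20, 4), "Bag": (100, 2)}

# Per-item total value = price * quantity
totals = {name: price * qty for name, (price, qty) in products.items()}
grand_total = sum(totals.values())

# Sort items by total value, descending
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(grand_total)  # 330
print(ranked)       # [('Bag', 200), ('Book', 80), ('Pen', 50)]
```

The model's answer should match: total cost 330, with Bag (200) ahead of Book (80) and Pen (50).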
Conclusion
WebThinker-QwQ-32B isn’t just another large language model—it’s a thinking engine. With the ability to mimic real human research workflows by actively navigating information, clicking links, and drafting content while exploring, it represents a major step forward in open-ended reasoning and autonomous research assistance.
Thanks to platforms like NodeShift, deploying and running such powerful models locally or in the cloud is no longer a task reserved for experts. With just a few commands and a GPU-powered VM, you can bring WebThinker to life and start experimenting with deep, multi-hop reasoning tasks.
Whether you’re solving complex problems, generating structured research drafts, or exploring real-world data with precision, WebThinker-QwQ-32B is built to support you—step by step, click by click.