Cogito v1 Preview – Qwen-14B is a high-performance language model built for deep reasoning, instruction following, and multilingual understanding. With 14.8 billion parameters and support for context lengths up to 128k tokens, it handles complex tasks like code generation, scientific problem-solving, and tool-assisted responses with ease.
What sets it apart is its hybrid thinking approach — capable of responding directly or pausing to reflect before generating answers. This results in more thoughtful, accurate outputs across a wide range of domains including STEM, programming, and conversational tasks.
Whether you’re building tools, writing scripts, or exploring multi-turn interactions, Cogito v1 is a powerhouse designed to handle it all — clearly, efficiently, and at scale.
Model Resources
Hugging Face
Link: https://huggingface.co/deepcogito/cogito-v1-preview-qwen-14B
Ollama
Link: https://ollama.com/library/cogito
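If you prefer Ollama, the model can also be pulled and run from a terminal with a single command. The 14b tag below is assumed from the library page linked above; check that page for the exact tags available:
ollama run cogito:14b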
Evaluations
LiveBench Global Average (benchmark chart)
Recommended GPU Configuration for Cogito v1 – Qwen-14B
Resource | Recommended Specs
---|---
GPU | 1× H100 / A100 / RTX 8000 (≥ 40 GB VRAM)
vCPU | 32+ cores
RAM | 64 GB+
CUDA | 11.8+ (CUDA 12.x also works)
Disk | 80 GB+ (for the model weights)
Frameworks | PyTorch (with bfloat16 / fp16 support)
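Before downloading anything, it may help to confirm that your machine actually meets these specs. A minimal check, assuming PyTorch is already installed on the VM:
import torch

assert torch.cuda.is_available(), "No CUDA device found"
props = torch.cuda.get_device_properties(0)
print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GB")  # should report >= 40 GB
print(f"CUDA version (PyTorch build): {torch.version.cuda}")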
Step-by-Step Process to Install Cogito LLM Locally Using Hugging Face, Transformers, and Jupyter Notebook
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines on infrastructure that meets GDPR, SOC 2, and ISO 27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, and click the Create GPU Node button on the Dashboard to deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x H100 SXM GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Cogito LLM on a Jupyter Virtual Machine. This open-source platform will allow you to install and run the Cogito LLM on your GPU node. By running the model in a Jupyter Notebook, we avoid using the terminal, simplifying the process and reducing setup time. This allows you to configure the model in just a few steps and minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to Jupyter Notebook
Once your GPU VM deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
After clicking the ‘Connect’ button, you can view the Jupyter Notebook.
Now open a Python 3 (ipykernel) notebook.
Step 8: Authenticate Hugging Face and Install HuggingFace Hub
Run the following commands to install the Hugging Face Hub library and authenticate with Hugging Face:
!pip install huggingface_hub --upgrade
from huggingface_hub import login
login()
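login() opens an interactive token prompt in the notebook. If you prefer a non-interactive login, you can pass a token directly; the hf_... value below is a placeholder for your own access token from the Hugging Face settings page:
from huggingface_hub import login

login(token="hf_...")  # placeholder; paste your own access token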
Step 9: Install Required Libraries
Run the following commands to install the required libraries:
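The code in the next steps uses PyTorch and the Transformers pipeline with device_map="auto", which requires Accelerate, so the set below should cover it:
!pip install torch transformers accelerate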
Step 10: Load the Model
Run the following code to load the model:
import torch
from transformers import pipeline

model_id = "deepcogito/cogito-v1-preview-qwen-14B"

# Load the model as a chat-capable text-generation pipeline;
# bfloat16 halves memory use versus fp32, and device_map="auto"
# places the model weights on the available GPU(s)
pipe = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
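If you chose a GPU with less VRAM than the table recommends, a 4-bit quantized load is one way to shrink the memory footprint. This is a minimal sketch and an optional alternative to the bfloat16 load above, assuming bitsandbytes is installed (!pip install bitsandbytes):
import torch
from transformers import pipeline, BitsAndBytesConfig

# 4-bit NF4 quantization with bf16 compute (assumes bitsandbytes is installed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

pipe = pipeline(
    "text-generation",
    model="deepcogito/cogito-v1-preview-qwen-14B",
    model_kwargs={"quantization_config": bnb_config},
    device_map="auto",
)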
Step 11: Run a Test Prompt
Run the following prompt to generate a response:
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Give me a short introduction to LLMs."},
]

output = pipe(messages, max_new_tokens=512)
print(output[0]["generated_text"][-1])  # the last message is the assistant's reply
Step 12: Run with Deep Thinking Subroutine Enabled
Run the following prompt to generate a response with the deep thinking subroutine enabled:
messages = [
    {"role": "system", "content": "Enable deep thinking subroutine.\n\nYou are a thoughtful assistant who explains step-by-step."},
    {"role": "user", "content": "What is quantum entanglement in simple terms?"},
]

output = pipe(messages, max_new_tokens=512)
print(output[0]["generated_text"][-1])
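In both examples, the pipeline returns the whole conversation as a list of message dicts, so the print above shows a dict with role and content keys. To print only the reply text:
print(output[0]["generated_text"][-1]["content"])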
Step 13: Install Gradio
Run the following command to install Gradio:
!pip install gradio
Step 14: Gradio Chatbot Script
Run the following Gradio chatbot script:
import torch
import gradio as gr
from transformers import pipeline

# Load model
pipe = pipeline(
    "text-generation",
    model="deepcogito/cogito-v1-preview-qwen-14B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Chat function: gr.ChatInterface passes the new message and the
# (user, assistant) history pairs, and expects the reply string back
def chat(user_input, history):
    messages = [
        {"role": "system", "content": "Enable deep thinking subroutine.\n\nYou are a helpful assistant."},
    ]
    for pair in history:
        messages.append({"role": "user", "content": pair[0]})
        messages.append({"role": "assistant", "content": pair[1]})
    messages.append({"role": "user", "content": user_input})

    output = pipe(messages, max_new_tokens=512)
    return output[0]["generated_text"][-1]["content"]

# Launch Gradio app
gr.ChatInterface(
    fn=chat,
    title="Cogito v1 - 14B Chatbot",
    description="Talk to DeepCogito's Qwen-14B preview model. Includes deep thinking mode.",
).launch(share=True, server_name="0.0.0.0", server_port=7860)
Step 15: Access the Gradio Web App
Access the Gradio web app at the URLs printed by the script:
Running on local URL: http://0.0.0.0:7860
Running on public URL: https://362f6848616716ab6b.gradio.live
The public gradio.live link is temporary and unique to each run, so yours will differ.
Conclusion
Cogito v1 Preview – Qwen-14B combines scale, structure, and deep thinking to handle everything from multi-language conversations to complex STEM reasoning. Its extended context, hybrid response modes, and support for tool-assisted logic make it a solid choice for researchers, builders, and anyone pushing the limits of natural language interaction. With this step-by-step setup on a GPU-powered virtual machine, you’re ready to experience Cogito’s full capabilities—clearly, securely, and at your command.