If you’ve been exploring compact language models for research, chances are you’ve already come across Jan-Nano, a lightweight, high-performance model that recently gained popularity for its speed and versatility. One of its key limitations, however, was its relatively short context window, which often forced researchers and developers to chunk or truncate large documents. Because a long context window matters so much in areas like deep research, the Menlo Research team has launched Jan-Nano-128k, an upgrade that natively supports a 128,000-token context window. It is built from the ground up to handle long-form content without the performance degradation seen in traditional context-extension methods like YaRN. Whether you’re analyzing full-length research papers, synthesizing knowledge across multiple documents, or engaging in complex multi-turn conversations, Jan-Nano-128k lets you dive deeper with efficiency and precision. Its architecture is optimized not just for length but for performance at scale, maintaining coherent, high-quality responses across massive inputs. Fully compatible with Model Context Protocol (MCP) servers, it’s a dream tool for researchers, AI scientists, and enterprises building AI tools for deep research.
In this guide, we’ll walk you through the easiest way to install Jan-Nano-128k and get it running in a GPU-accelerated environment, so you can start building, exploring, and reasoning at an entirely new scale.
Prerequisites
The minimum system requirements for running this model are:
- GPU: 1x RTX A6000 or 1x A100
- Storage: 50GB (preferred)
- VRAM: at least 48GB
- Anaconda installed
Step-by-step process to install and run Jan-Nano-128k
For the purpose of this tutorial, we’ll use a GPU-powered Virtual Machine by NodeShift since it provides high compute Virtual Machines at a very affordable cost on a scale that meets GDPR, SOC2, and ISO27001 requirements. Also, it offers an intuitive and user-friendly interface, making it easier for beginners to get started with Cloud deployments. However, feel free to use any cloud provider of your choice and follow the same steps for the rest of the tutorial.
Step 1: Setting up a NodeShift Account
Visit app.nodeshift.com and create an account by filling in basic details, or continue signing up with your Google/GitHub account.
If you already have an account, login straight to your dashboard.
Step 2: Create a GPU Node
After accessing your account, you should see a dashboard (see image). Now:
- Navigate to the menu on the left side.
- Click on the GPU Nodes option.
- Click on Start to begin creating your first GPU node.
These GPU nodes are GPU-powered virtual machines provided by NodeShift. They are highly customizable and let you control different environment configurations for GPUs ranging from H100s to A100s, along with CPUs, RAM, and storage, according to your needs.
Step 3: Selecting configuration for GPU (model, region, storage)
- For this tutorial, we’ll be using a 1x RTX A6000 GPU; however, you can choose any GPU that meets the prerequisites.
- Similarly, we’ll opt for 200GB storage by sliding the bar. You can also select the region where you want your GPU to reside from the available ones.
Step 4: Choose GPU Configuration and Authentication method
1. After selecting your required configuration options, you’ll see the available GPU nodes in your region that match (or come close to) your configuration. In our case, we’ll choose a 1x RTX A6000 48GB GPU node with 64 vCPUs/63GB RAM/200GB SSD.
2. Next, you’ll need to select an authentication method. Two methods are available: Password and SSH Key. We recommend using SSH keys, as they are the more secure option. To create one, head over to our official documentation.
Step 5: Choose an Image
The final step is to choose an image for the VM, which in our case is Nvidia Cuda.
That’s it! You are now ready to deploy the node. Finalize the configuration summary, and if it looks good, click Create to deploy the node.
Step 6: Connect to active Compute Node using SSH
- As soon as you create the node, it will be deployed in a few seconds or a minute. Once deployed, you will see the status Running in green, meaning that your compute node is ready to use!
- Once your GPU shows this status, navigate to the three dots on the right, click on Connect with SSH, and copy the SSH details that appear.
Once you’ve copied the details, follow the steps below to connect to the running GPU VM via SSH:
1. Open your terminal, paste the SSH command, and run it.
2. In some cases, your terminal may ask for your consent before connecting. Enter ‘yes’.
3. A prompt will request a password. Type the SSH password, and you should be connected.
Output:
Next, if you want to check the GPU details, run the following command in the terminal:
nvidia-smi
Step 7: Set up the project environment with dependencies
1. Create a virtual environment using Anaconda.
conda create -n jann python=3.11 -y && conda activate jann
Output:
2. Once you’re inside the environment, run the following commands to install PyTorch and the other required packages.
pip install torch torchvision torchaudio einops timm pillow
pip install git+https://github.com/huggingface/transformers
pip install git+https://github.com/huggingface/accelerate
pip install git+https://github.com/huggingface/diffusers
pip install huggingface_hub
pip install sentencepiece bitsandbytes protobuf decord numpy
Output:
3. Run the following command to install vLLM along with any remaining packages needed to run it that aren’t already installed.
pip install --upgrade vllm
Output:
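With all the dependencies in place, you can run a quick sanity check from Python before moving on, confirming that PyTorch was built with CUDA support, that it can see the GPU, and that vLLM imports cleanly. This is just a sketch; run it with python inside the jann environment created above:
import torch
import vllm

# Verify the CUDA build of PyTorch detects the GPU and that vLLM is importable
print("PyTorch version:", torch.__version__)
print("vLLM version:", vllm.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", torch.cuda.get_device_name(0))
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")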
Step 8: Download and Run the model
1. Start the vLLM server with the following command, which will also download the model. Note that the YaRN rope-scaling factor of 3.2 applied to the base 40,960-token window gives 3.2 × 40,960 = 131,072 tokens, which is exactly the --max-model-len value passed below. Once the server is up, you can verify the endpoint with the short check shown after the output below.
vllm serve Menlo/Jan-nano-128k \
--host 0.0.0.0 \
--port 1234 \
--enable-auto-tool-choice \
--tool-call-parser hermes \
--rope-scaling '{"rope_type":"yarn","factor":3.2,"original_max_position_embeddings":40960}' \
--max-model-len 131072
Output:
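Once the server logs show it is listening on port 1234, you can confirm the OpenAI-compatible endpoint is live before writing any test code. This minimal check uses the requests package (installed as a dependency of the stack above) to list the models the server is serving:
import requests

# Ask the vLLM OpenAI-compatible server which models it is serving
resp = requests.get("http://localhost:1234/v1/models", timeout=10)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])  # should print Menlo/Jan-nano-128k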
2. Once all the model checkpoints are downloaded, we’ll connect our local VSCode editor to the remote server to write a code snippet to test the model during inference.
If you’re using a GPU through a remote server (e.g., NodeShift), you can connect it to your visual studio code editor by following the steps below:
a) Install the “Remote-SSH” Extension by Microsoft on VS Code.
b) Type “Remote-SSH: Connect to Host” in the Command Palette.
c) Click on “Add a new host”.
d) Enter the host details, such as username and SSH password, and you should be connected.
3. Inside VSCode, create a new project directory and a Python file inside it. Ensure you’re inside the virtual environment created earlier.
mkdir test-app
cd test-app
touch app.py
4. Copy and paste the code snippet below into the app.py file.
This is just a test script that exercises the model’s long-context capabilities on a very long input: the book “The Adventures of Sherlock Holmes” in the form of a text file. We then ask the model several deep questions drawn from different parts of the book.
import requests

def test_long_context():
    # Load the full text of the book from a local file
    try:
        with open("./sherlock-holmes.txt", 'r', encoding='utf-8') as file:
            full_text = file.read()
        print(f"Full text: {len(full_text)} characters, ~{len(full_text.split())} words")

        # Keep roughly 50k tokens of context (assuming ~4 characters per token)
        test_chars = 50000 * 4
        long_text = full_text[:test_chars]
        print(f"Using the first {len(long_text)} characters of the text....")
    except Exception as e:
        print(f"Error reading file: {e}")
        return

    prompt = f"""Here is a substantial portion of the Adventures of Sherlock Holmes, which is in the public domain:

{long_text}

---

Please analyze this portion of the novel:

1. How does Watson’s narration influence our perception of Holmes? Provide examples from the introduction or a specific story.
2. How is Holmes’s relationship with Watson portrayed across different stories? What strengths or tensions in the partnership emerge?
3. Holmes often remarks: “You see, but you do not observe.” How does this principle manifest in two different cases?
4. Examine the portrayal of official police (like Lestrade or Inspector Jones) versus Holmes. What does this say about authority and expertise?

Please reference specific scenes and quotes to show you've processed this long text."""

    # API request with a generous timeout, since long prompts take a while to process
    try:
        print("\nSending request to model... (this may take several minutes)")
        print("Processing the long context...")
        response = requests.post(
            "http://localhost:1234/v1/chat/completions",
            headers={"Content-Type": "application/json"},
            json={
                "model": "Menlo/Jan-nano-128k",
                "messages": [
                    {"role": "user", "content": prompt}
                ],
                "max_tokens": 2048,
            },
            timeout=1800,
        )

        if response.status_code == 200:
            result = response.json()
            print("\nModel response:")
            print(result["choices"][0]["message"]["content"])
        else:
            print(f"\nRequest failed with status code {response.status_code}")
            print(response.text)
    except Exception as e:
        print(f"Error sending request: {e}")

if __name__ == "__main__":
    test_long_context()
The file looks like this:
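The script above estimates tokens by assuming roughly 4 characters per token. If you’d like an exact count before sending a request, you can load the model’s tokenizer with the transformers library that’s already installed. Here’s a small sketch, assuming the Menlo/Jan-nano-128k tokenizer downloads without issues:
from transformers import AutoTokenizer

# Count tokens exactly with the model's own tokenizer
tokenizer = AutoTokenizer.from_pretrained("Menlo/Jan-nano-128k")
with open("./sherlock-holmes.txt", "r", encoding="utf-8") as f:
    text = f.read()
num_tokens = len(tokenizer.encode(text))
print(f"{len(text)} characters -> {num_tokens} tokens")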
5. Open another new terminal, connect it to the same remote server using SSH, and run the following command to execute the script. (Make sure the vLLM server is up and running in the other terminal on the same remote server.)
cd test-app
python app.py
Output:
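Because vLLM exposes an OpenAI-compatible API, you can also talk to the same endpoint with the official openai Python client instead of raw requests calls. A minimal sketch is shown below; it assumes you’ve installed the client separately with pip install openai (it isn’t part of the packages installed earlier):
from openai import OpenAI

# Point the OpenAI client at the local vLLM server; the API key is just a placeholder
client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Menlo/Jan-nano-128k",
    messages=[{"role": "user", "content": "Summarize 'A Scandal in Bohemia' in three sentences."}],
    max_tokens=512,
)
print(response.choices[0].message.content)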
Conclusion
Jan-Nano-128k is a major step forward for compact language models, enabling truly long-context reasoning across entire research papers, multi-document synthesis, and deeply contextual conversations, all without compromising performance. In this article, we covered what makes this model a standout evolution of its predecessor, how it addresses the limitations of traditional context-extension techniques like YaRN, and why its native 128k context window is a game-changer for research-grade applications. Powered by NodeShift Cloud’s GPU-accelerated infrastructure, installing and running Jan-Nano-128k becomes effortless, scalable, and production-ready, so you can focus on pushing the boundaries of deep language understanding without worrying about compute infrastructure headaches.