If you’ve ever wished your coding assistant could actually understand your messy codebase, navigate real GitHub issues, and propose working solutions, Devstral might be the endgame for you. Developed by Mistral AI, Devstral isn’t just another code completion model. It’s an agentic LLM fine-tuned from Mistral Small 3.1 specifically for real-world software engineering tasks, and it outperforms every open-source model on the SWE-Bench Verified benchmark with an impressive score of 46.8%. In tasks like debugging intricate logic, identifying cross-file dependencies, or tackling real GitHub issues through agentic scaffolds like OpenHands or SWE-Agent, Devstral thrives where others struggle. It even beats heavyweight models like DeepSeek-V3 (671B) and Qwen3-232B, and closes in on closed-source competitors, topping GPT-4.1-mini by over 20%. Plus, it’s lightning fast, open source, and runs efficiently on a single H100 or 2x RTX A6000.
In this guide, we’ll cover the step-by-step process to install Devstral, serve the model, run inference, and finally build a demo RGB Color Mixer app with Devstral from scratch.
Prerequisites
The minimum system requirements for this use case are:
- GPUs: 1x H100 or 2x RTX A6000
- Disk Space: 100 GB
- RAM: At least 80 GB.
- Anaconda set up
Note: The prerequisites for this are highly variable across use cases. A high-end configuration could be used for a large-scale deployment.
Step-by-step process to install and run Devstral
For the purpose of this tutorial, we’ll use a GPU-powered Virtual Machine by NodeShift since it provides high compute Virtual Machines at a very affordable cost on a scale that meets GDPR, SOC2, and ISO27001 requirements. Also, it offers an intuitive and user-friendly interface, making it easier for beginners to get started with Cloud deployments. However, feel free to use any cloud provider of your choice and follow the same steps for the rest of the tutorial.
Step 1: Setting up a NodeShift Account
Visit app.nodeshift.com and create an account by filling in basic details, or continue signing up with your Google/GitHub account.
If you already have an account, log in straight to your dashboard.
Step 2: Create a GPU Node
After accessing your account, you should see a dashboard. Now:
- Navigate to the menu on the left side.
- Click on the GPU Nodes option.
- Click on Start to begin creating your very first GPU node.
These GPU nodes are GPU-powered virtual machines by NodeShift. They are highly customizable and let you control different environment configurations, from GPUs (ranging from H100s to A100s) to CPUs, RAM, and storage, according to your needs.
Step 3: Selecting configuration for GPU (model, region, storage)
- For this tutorial, we’ll be using the H100 GPU; however, you can choose any GPU of your choice based on your needs.
- Similarly, we’ll opt for 100GB storage by sliding the bar. You can also select the region where you want your GPU to reside from the available ones.
Step 4: Choose GPU Configuration and Authentication method
1. After selecting your required configuration options, you’ll see the available GPU nodes in your region matching (or very close to) your configuration. In our case, we’ll choose a 1x H100 80GB GPU node with 64 vCPUs/126GB RAM/100GB SSD.
2. Next, you’ll need to select an authentication method. Two methods are available: Password and SSH Key. We recommend using SSH keys, as they are the more secure option. To create one, head over to our official documentation.
Step 5: Choose an Image
The final step is to choose an image for the VM, which in our case is Nvidia CUDA, where we’ll deploy and run the inference of our model.
That’s it! You are now ready to deploy the node. Finalize the configuration summary, and if it looks good, click Create to deploy the node.
Step 6: Connect to active Compute Node using SSH
- As soon as you create the node, it will be deployed within a few seconds or a minute. Once deployed, you will see the status Running in green, meaning that our compute node is ready to use!
- Once your GPU shows this status, navigate to the three dots on the right and click on Connect with SSH. This will open a pop-up box with the Host details. Copy and paste that in your local terminal to connect to the remote server via SSH.
Once you have copied the details, follow the steps below to connect to the running GPU VM via SSH:
1. Open your terminal, paste the SSH command, and run it.
2. In some cases, your terminal may ask for confirmation before connecting. Enter ‘yes’.
3. A prompt will request a password. Type the SSH password, and you should be connected.
Output:
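For reference, the SSH command from the pop-up generally takes a shape like the following (the port, username, and IP below are placeholders; use the exact host details NodeShift gives you):
ssh -p <port> <username>@<node-ip>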
Next, if you want to check the GPU details, run the following command in the terminal:
nvidia-smi
Step 7: Set up the project environment with dependencies
1. Create a virtual environment using Anaconda.
conda create -n devstral python=3.11 && conda activate devstral
Output:
2. Run this command to install vLLM and all the dependencies needed to run it.
pip install vllm --upgrade
Output:
3. Check that vLLM is correctly installed and has automatically installed the mistral_common package.
python -c "import mistral_common; print(mistral_common.__version__)"
Output:
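You can also confirm the installed vLLM version with a similar one-liner (vLLM exposes a standard __version__ attribute):
python -c "import vllm; print(vllm.__version__)"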
4. Spin up the vLLM server with the following command. It’ll start downloading the model checkpoints. (Replace the argument to --tensor-parallel-size with the number of GPUs you want to split the model across, e.g. ‘1’.)
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 1
Output:
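Once the server reports that it’s listening, you can sanity-check it before writing any app code. The snippet below is a minimal check that assumes the default host and port (localhost:8000); it hits vLLM’s OpenAI-compatible /v1/models endpoint and should list Devstral as the served model:

import requests

# Ask the local vLLM server which models it is serving
resp = requests.get("http://localhost:8000/v1/models")
resp.raise_for_status()
print(resp.json())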
Step 8: Create A Demo App with Devstral
1. After the model server is successfully up and running, we’ll run inference with a prompt asking the model to create an RGB Color Mixer app with HTML, CSS, and JS. We’ll create this app in the Visual Studio Code editor. For this, if you’re using a remote server (e.g. a NodeShift GPU), you’ll first need to connect your local VS Code editor to your remote server via SSH with the following steps:
a) Install the “Remote-SSH” extension by Microsoft in VS Code.
b) Type “Remote-SSH: Connect to Host” in the Command Palette.
c) Enter the host details, such as username and SSH password, and you should be connected.
2. Create a new project directory named devstral-app and create a new file named app.py inside it. Finally, paste the following code snippet into the file. (Replace the prompt with your own description if you want to build a different app.)
import requests
import json
from huggingface_hub import hf_hub_download

# vLLM's OpenAI-compatible endpoint started in Step 7
url = "http://localhost:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Devstral-Small-2505"

def load_system_prompt(repo_id: str, filename: str) -> str:
    # Fetch Devstral's recommended system prompt from its Hugging Face repo
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt

SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

prompt = '''
Create a minimal but complete HTML + JavaScript web application contained in a single index.html file. The app should have three sliders labeled Red, Green, and Blue, each allowing users to select values from 0 to 255 for the respective color components.
Each slider’s current numeric value should be displayed next to or above the slider and update live as the slider is moved.
Include a color preview box that dynamically updates its background color based on the RGB values selected by the sliders.
Below the color preview, display the currently selected color values in two formats: the RGB notation (for example, RGB(128, 64, 255)) and the equivalent hexadecimal color code (for example, #8040FF).
All styling and scripting should be included inline within the single index.html file, with no external CSS or JavaScript files.
Use simple, clean CSS to arrange the sliders vertically and center the content on the page, creating a user-friendly and visually balanced interface.
Ensure the app works responsively on common desktop and mobile browsers.
Please output the entire code for index.html.
'''

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": prompt,
            },
        ],
    },
]

# A low temperature keeps the generated code deterministic and focused
data = {"model": model, "messages": messages, "temperature": 0.15}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
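The script above simply prints the model’s reply to the terminal. If you’d rather capture the generated page automatically instead of copying it by hand, a small helper like the sketch below can be appended to app.py; it assumes the model wraps its answer in a fenced ```html block and falls back to saving the raw reply otherwise (save_generated_html is a hypothetical helper, not part of the Devstral API):

import re

def save_generated_html(reply: str, path: str = "index.html") -> None:
    # Pull the code out of a fenced ```html ... ``` block if one is present
    match = re.search(r"```(?:html)?\s*(.*?)```", reply, re.DOTALL)
    code = match.group(1) if match else reply  # fall back to the raw reply
    with open(path, "w") as f:
        f.write(code)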
3. Next, download the SYSTEM_PROMPT.txt file from Devstral’s official Hugging Face files section, and put this file inside the project root directory.
4. Run the model with the following command.
python app.py
Output:
As you can see, Devstral has generated the complete code for the RGB Color Mixer app successfully.
5. Now, to try this code, we’ll create an index.html file inside the project directory and paste the generated code above into it.
6. Finally, access the HTML web page on the browser to see the live app.
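If the project directory lives on the remote GPU VM rather than your local machine, the browser can’t open the file directly. One minimal way to view it (a sketch, assuming port 8080 is free on both ends; the username and IP are placeholders) is to serve the directory with Python’s built-in HTTP server on the VM, forward the port over SSH from your local machine, and then open http://localhost:8080 in your browser:

# On the remote VM, inside devstral-app:
python -m http.server 8080

# On your local machine:
ssh -L 8080:localhost:8080 <username>@<node-ip>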
As shown above, the code ran without throwing any errors, and all the requested functionality works smoothly.
Here’s the full live demo of the RGB Color Mixer App built by Devstral:
Try building more complex and creative apps by playing with the prompt (for example, a to-do list with local storage or a Markdown previewer) to experience Devstral’s efficient coding abilities.
Conclusion
The introduction of Devstral by Mistral as a coding agent practically bridges the gap between traditional LLMs and real-world software engineering needs. From installation to inference to app generation, this guide walked you through how to harness Devstral’s capabilities in a hands-on way. And with NodeShift, deploying and experimenting with powerful models like Devstral becomes seamless, giving developers access to GPU-accelerated infrastructure without the headaches of setup or scaling. Whether you’re prototyping locally or building production-ready agentic tools, NodeShift provides the perfect launchpad, both in the cloud and on-prem, to bring cutting-edge models like Devstral into your workflow.