Zed is a next-generation code editor built from the ground up in Rust for ultimate performance. Whether you’re working solo or collaborating with your team in real time, Zed delivers a buttery-smooth coding experience — from instant startup times to zero-lag typing.
With native support for Git, Jupyter, terminals, and remote development, it’s tailored for modern workflows. Zed also integrates deeply with the latest AI assistants, letting you generate, transform, and review code effortlessly through agentic editing and inline intelligence — all while keeping you in control.
What makes Zed stand out?
- Intelligent — Seamlessly connect your favorite models to edit, refactor, and debug faster.
- Ridiculously Fast — Built in Rust to leverage your machine’s full power, including GPU.
- Truly Collaborative — Code together, chat, and share context in real-time.
- Extensible — Hundreds of language extensions, themes, and integrations ready to go.
Zed just works — and it keeps getting better with weekly updates and a growing open-source ecosystem.
This project is open source under the Apache 2.0 License. If you’re an open-source contributor, you’re good to go — start exploring and contributing today!
Resources
Website
Link: https://zed.dev/
GitHub
Link: https://github.com/zed-industries/zed
Step-by-Step Process to Setup Zed + Ollama + LLMs
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides affordable Virtual Machines at scale, compliant with GDPR, SOC 2, and ISO 27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button in the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x RTXA6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
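If you want to generate a key pair locally before creating the VM, a standard OpenSSH key works. A minimal sketch (the key comment and default file path below are just illustrative placeholders):
ssh-keygen -t ed25519 -C "nodeshift-gpu-vm"   # generate a new Ed25519 key pair
cat ~/.ssh/id_ed25519.pub                     # print the public key to paste into the NodeShift dashboard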
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Ollama on an NVIDIA CUDA Virtual Machine. This proprietary, closed-source parallel computing platform will allow you to install and run GPU-accelerated models on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
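For a direct connection, the command usually looks like this (the user, IP, and port are placeholders; copy the exact values shown on your deployment page):
ssh root@<your-vm-ip> -p <your-ssh-port>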
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Ollama
After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Step 9: Serve Ollama
Run the following command to serve Ollama so that it can be accessed over the network:
OLLAMA_HOST=0.0.0.0:11434 ollama serve
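To confirm the server is up, you can open a second SSH session to the VM and query Ollama's version endpoint (a quick sanity check, assuming the default port 11434):
curl http://localhost:11434/api/version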
Step 10: Set Up SSH Port Forwarding (For Remote Models Like Ollama on a GPU VM)
If you’re running a model like Ollama on a remote GPU Virtual Machine (e.g. via NodeShift, AWS, or your own server), you’ll need to port forward the Ollama server to your local machine so Zed Editor can connect to it.
Here’s how to do it:
Example (Mac/Linux Terminal):
ssh -L 11434:localhost:11434 root@<your-vm-ip> -p <your-ssh-port>
Once connected, your local machine will treat http://localhost:11434 as if Ollama is running locally.
- Replace <your-vm-ip> with your VM’s IP address
- Replace <your-ssh-port> with the custom port (e.g. 26055)
On Windows: use a tool like PuTTY, or run ssh from WSL/PowerShell with similar port forwarding.
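On Windows 10/11, the built-in OpenSSH client accepts the same syntax, so a PowerShell session works as a drop-in (the IP and port are placeholders, as above):
ssh -L 11434:localhost:11434 root@<your-vm-ip> -p <your-ssh-port>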
If you’re running large language models (like Llama 3, DeepSeek, or Qwen) on a remote GPU Virtual Machine, you’ll want Zed Editor on your local machine to talk to that remote Ollama instance.
But since the model is running on the VM — not on your laptop — we need to bridge the gap.
That’s where SSH port forwarding comes in.
Why use a GPU VM?
Large models require serious compute power. Your laptop might struggle or overheat trying to run them. So we spin up a GPU-powered VM in the cloud — it gives us:
- Faster responses
- Support for large models (7B, 13B, even 70B!)
- More RAM + VRAM for smoother inference
Step 11: Run Your First Models in Ollama (Devstral + Qwen 2.5)
Now that Zed Editor is connected to Ollama via http://localhost:11434, let’s run our first models.
We’ll use two powerful open-source models:
Devstral by Mistral AI
A brand new model purpose-built for coding agents. Devstral isn’t just about code completion — it’s designed to handle real-world software engineering tasks, like resolving GitHub issues and working inside codebases.
To run it on Ollama:
ollama run devstral
Run this command on your GPU Virtual Machine, not your Mac.
Built by Mistral AI in collaboration with All Hands AI, Devstral is optimized for local use (even on a Mac with 32GB RAM or a single RTX 4090) and is fully open under the Apache 2.0 license.
If you want to dive deeper into Devstral, we’ve got a full step-by-step guide here:
Link: https://nodeshift.com/blog/a-step-by-step-to-install-devstral-mistrals-open-source-coding-agent
Qwen 2.5 VL by Qwen
Another great lightweight model you can try locally is Qwen 2.5 VL, specifically the 3B variant — perfect for fast inference and lower memory usage.
Run it with:
ollama run qwen2.5vl:3b
Again, run this command on your GPU Virtual Machine.
This is a solid pick for fast testing, multi-language reasoning, and creative coding tasks — without needing a huge GPU.
You can switch between models anytime by running a different ollama run <model> command in the background. Once a model is active, Zed Editor will automatically detect and use it.
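If you want to double-check which models are downloaded and which one is currently loaded, Ollama’s CLI has list and ps subcommands (run these on the GPU VM):
ollama list   # models pulled to disk
ollama ps     # models currently loaded in memory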
Step 12: Check Available Models via curl (From Your Mac)
Once your Ollama backend is running on the remote GPU VM and connected to your Mac via SSH port forwarding, you can use a simple curl command from your local machine to check which models are currently available.
First, Pull the Models (like Devstral or Qwen 2.5 VL)
Before you can list anything, you’ll need to pull the models you plan to use. For example:
ollama pull devstral
ollama pull qwen2.5vl:3b
These commands run on the VM and download the models for Ollama to use.
Then, run this command on your Mac:
curl http://localhost:11434/api/tags
This command connects to your forwarded Ollama server and shows a list of all the models you’ve pulled so far. It gives you a response like:
{
"models": [
{ "name": "devstral" },
{ "name": "qwen2.5vl:3b" }
]
}
Note: This command runs on your Mac, not on the VM — because we’ve already port-forwarded localhost:11434 to the remote GPU VM where Ollama is active.
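If you also want to sanity-check inference itself before opening Zed, you can send a one-off prompt through the forwarded port using Ollama’s generate endpoint (the prompt text here is just an example):
curl http://localhost:11434/api/generate -d '{
  "model": "devstral",
  "prompt": "Write a hello world in Python.",
  "stream": false
}'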
So in short:
- Ollama is running remotely on a GPU VM
- You’ve connected it to your Mac via SSH
- And now you can check, query, and chat with models — all from your local editor
Step 13: Download Zed Editor
Open Google or any web browser, search for “Zed Editor”, visit the official website (https://zed.dev/), and click the “Download” button to download it.
Step 14: Open Zed Editor
Once the download is complete, open the Zed Editor app from your Applications folder or Start Menu. It should launch with a clean, minimal interface ready for setup.
Step 15: Choose Your LLM Provider
Head over to Zed → Settings → LLMs.
Here, you’ll find a list of supported large language model providers — both free and paid:
- Google AI
- Mistral AI
- LM Studio
- OpenAI
- Ollama
For this setup, we’ll go with Ollama — it’s fast, flexible, and works perfectly with self-hosted models.
We’ll be running Ollama on a GPU-powered VM, because we’re planning to load and play with large models that need serious horsepower. This gives us much faster response times during code generation and edits.
Once Ollama is running on the GPU server, we’ll expose it to our Mac using SSH port forwarding — so we can interact with the models locally inside Zed, just like a native setup.
This lets you harness powerful cloud GPUs, while keeping your coding workflow smooth and private on your Mac.
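If Zed does not pick up the server automatically, you can point it at the forwarded port explicitly in Zed’s settings.json. At the time of writing, the Ollama provider accepts a block roughly like the one below; treat the exact keys as an assumption and check the current Zed docs, and note that the model entry is just an example:
{
  "language_models": {
    "ollama": {
      "api_url": "http://localhost:11434",
      "available_models": [
        {
          "name": "devstral",
          "display_name": "Devstral (GPU VM)",
          "max_tokens": 32768
        }
      ]
    }
  }
}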
Step 16: View Available Models Inside Zed Editor
Once your models (like devstral or qwen2.5vl:3b) are running in the background via Ollama on your GPU VM — and the port forwarding is active — Zed Editor will automatically detect them.
Where to Find Them?
Head to the “Models” section inside Zed Editor.
You’ll see a list of all available models that Ollama is currently serving.
Models like devstral, qwen2.5vl:3b, or any other you’ve pulled and started will show up here — ready to chat, code, or assist you inside the editor.
No need for any extra configuration — Zed listens to http://localhost:11434, detects the models, and makes them available in the dropdown or sidebar automatically.
You’re now all set to write, test, and build using real local models inside Zed — powered by your GPU VM!
Step 17: Select a Model and Start Running Prompts
You’re almost there — now it’s time to put everything into action!
How to Use:
- Go to the Models section inside Zed Editor.
- Select the model you want to use (e.g., devstral, gemma3:1b, etc.).
- Jump into any file or open a new tab.
- Use the built-in chat panel or prompt bar to ask questions, get suggestions, or generate code.
For example:
“How do I set up a Node.js server?”
“Refactor this function to use async/await.”
“Write a Python script to scrape a webpage.”
Whatever your task — Zed + Ollama + your remote GPU model is now fully connected and ready to respond.
Step 18: Use a Paid Provider (Like OpenAI) by Adding Your API Key
If you prefer to use OpenAI’s models (like GPT-4 or GPT-4o), Zed Editor also supports that — all you need is your OpenAI API key.
How to Set It Up:
- Open Zed Editor
- Go to the Settings panel
- Navigate to the Providers or API Keys section
- Choose OpenAI from the list
- Paste your OpenAI API key in the input field
Your key stays local and is only used within the editor — privacy is respected.
Once added, you can start using OpenAI’s models alongside your local ones. The same prompt bar and model selection flow applies — just choose OpenAI from the model menu and start coding, chatting, or writing.
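If you would rather not paste the key into the UI, Zed can typically also read it from the standard OPENAI_API_KEY environment variable; verify this against the current Zed docs. A minimal sketch (the key value is a placeholder, and the zed CLI must be installed from within the editor):
export OPENAI_API_KEY="<your-openai-api-key>"
zed .   # launch Zed from this shell so it inherits the variable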
Step 19: Select OpenAI Model and Run Prompts
Now that your OpenAI API key is added in Zed Editor, it’s time to put it to use.
How to Use:
- Head to the Models panel or dropdown inside Zed.
- From the provider list, select OpenAI.
- Choose the model you want — like gpt-4, gpt-4o, or gpt-3.5-turbo.
- Open any file or tab and start typing your question or prompt.
For example:
“Explain what this Python function does.”
“Generate TypeScript types from this JSON.”
“Write unit tests for this function.”
The response will come directly from OpenAI’s API — integrated neatly into your Zed workflow.
Whether you’re coding, debugging, or brainstorming — it just works.
Conclusion
That’s it — your dream dev setup is now live. Zed Editor running on your Mac, Ollama hosting powerful models like Devstral and Qwen 2.5 VL on a GPU-powered VM, and everything connected seamlessly via SSH.
You get the best of both worlds:
- A blazing-fast local code editor with smart inline LLM assistance
- Backed by scalable, VRAM-heavy cloud GPUs that can actually run 7B+ models without breaking a sweat
This setup doesn’t just help you write code — it understands your workflow, adapts to your stack, and gives you full control from start to finish.
Whether you’re debugging legacy code, building a side project, or brainstorming wild ideas at 2AM — Zed + Ollama + LLMs + NodeShift GPU = your new secret weapon.