Introducing Dia-1.6B: Open Dialogue-to-Speech with Realism & Expression
Dia is a fully open, 1.6 billion parameter text-to-speech model crafted by the small but mighty team at Nari Labs. Unlike traditional TTS tools, Dia doesn’t just read — it performs. With the ability to switch speakers, express emotions, and even insert non-verbal gestures like (laughs) or (coughs), Dia brings scripts to life with uncanny realism.
Plug in a simple transcript — optionally guided by an audio prompt — and Dia generates vivid, back-and-forth conversations. It’s a playground for storytellers, developers, and researchers who want full control over expressive speech without relying on closed platforms.
Built on PyTorch, optimized for GPU speed, and backed by Apache 2.0 licensing, Dia is here to empower voice-first experiences with full transparency and community collaboration.
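Before the full walkthrough, here is a minimal Python sketch of what a generation call looks like, modeled on the example in the Dia README. Treat the import path and method names (Dia.from_pretrained, model.generate) as indicative rather than definitive, since they may shift between releases:

```python
import soundfile as sf  # assumed available; install with pip if needed

from dia.model import Dia  # import path as used in the upstream repo

# Load the checkpoint from Hugging Face (API shape as shown in the Dia
# README; exact signatures may differ between releases).
model = Dia.from_pretrained("nari-labs/Dia-1.6B-0626")

# [S1]/[S2] switch speakers; parenthesized cues like (laughs) become
# non-verbal sounds in the generated audio.
script = (
    "[S1] Dia is an open weights text-to-dialogue model. "
    "[S2] You get full control over scripts and voices. "
    "[S1] Wow. Amazing. (laughs)"
)

audio = model.generate(script)

# Dia generates 44.1 kHz audio; save it as a WAV file.
sf.write("dialogue.wav", audio, 44100)
```

The same [S1]/[S2] tagging works in the Gradio UI we install below, so you can prototype scripts in either place.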
Recommended GPU Configuration Table for Dia-1.6B
| GPU Model | VRAM (GB) | CUDA Version | Inference Speed | Usage Notes |
|---|---|---|---|---|
| NVIDIA A100 | 40–80 | 12.1–12.6 | ⚡ Ultra Fast (~100+ tokens/s) | Ideal for real-time, production-grade inference |
| RTX A6000 | 48 | 12.1+ | ⚡ Fast (~80–100 tokens/s) | Well suited to the full model plus voice-cloning workflows |
| RTX 4090 | 24 | 12.1+ | 🚀 Fast (~70–90 tokens/s) | Great for local or research deployments |
| RTX 3090 | 24 | 12.0+ | 🚀 Moderate (~60–80 tokens/s) | Runs well; torch.compile provides a speed-up |
| RTX 3080 (10 GB) | 10 | 12.0+ | ✅ Supported (~30–45 tokens/s) | Minimum for the full Dia model; avoid multitasking |
| RTX A4000 | 16 | 12.0+ | ⚠️ Slower (~40 tokens/s) | Works, but best reserved for experimentation |
| T4 / A10G | 16 | 12.0+ | ⚠️ Limited (~20–35 tokens/s) | Fine for basic usage; not ideal for full voice cloning |
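The throughput figures above are indicative estimates rather than benchmarks; actual speed depends on script length, precision, and whether torch.compile is enabled.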
Resources
Link: https://huggingface.co/nari-labs/Dia-1.6B-0626
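GitHub: https://github.com/nari-labs/dia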
Step-by-Step Process to Install Nari Dia-1.6B-0626 Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button in the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use a 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
In our previous blogs, we used pre-built images from the Templates tab when creating a Virtual Machine. However, for running Nari Dia-1.6B-0626, we need a more customized environment with full CUDA development capabilities. That’s why, in this case, we switched to the Custom Image tab and selected a specific Docker image that meets all runtime and compatibility requirements.
We chose the following image:
nvidia/cuda:12.1.1-devel-ubuntu22.04
This image is essential because it includes:
- Full CUDA toolkit (including nvcc)
- Proper support for building and running GPU-based applications like Nari Dia-1.6B-0626
- Compatibility with CUDA 12.1.1 required by certain model operations
Launch Mode
We selected:
Interactive shell server
This gives us SSH access and full control over terminal operations — perfect for installing dependencies, running benchmarks, and launching tools like Nari Dia-1.6B-0626.
Docker Repository Authentication
We left all fields empty here.
Since the Docker image is publicly available on Docker Hub, no login credentials are required.
Identification
nvidia/cuda:12.1.1-devel-ubuntu22.04
CUDA and cuDNN images from gitlab.com/nvidia/cuda; the devel variant contains the full CUDA toolkit, including nvcc.
This setup ensures that the Nari Dia-1.6B-0626 runs in a GPU-enabled environment with proper CUDA access and high compute performance.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now, open your terminal and paste the proxy SSH or direct SSH command provided on the Connect page.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
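The output lists your GPU model, driver version, the highest CUDA version the driver supports, and available VRAM; confirm your GPU appears here before proceeding.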
Step 8: Update System Packages and Install Python
Run the following commands to update the system packages and install Python 3.10:
sudo apt update && sudo apt upgrade -y
sudo apt install -y git python3.10 python3.10-venv python3.10-dev build-essential ffmpeg
Step 9: Clone the Dia Repository
Run the following command to clone the dia repository:
git clone https://github.com/nari-labs/dia.git
cd dia
Step 10: Create and Activate a Virtual Environment
Run the following command to create and activate a virtual environment:
python3.10 -m venv .venv
source .venv/bin/activate
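Once activated, your shell prompt should be prefixed with (.venv), and all subsequent pip installs will stay inside this isolated environment.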
Step 11: Install PyTorch for CUDA 12.1
Run the following command to install PyTorch for CUDA 12.1:
pip install --upgrade pip
pip install torch==2.1.0+cu121 torchvision==0.16.0+cu121 torchaudio==2.1.0+cu121 --index-url https://download.pytorch.org/whl/cu121
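After installation, you can confirm that this PyTorch build actually sees the GPU with a short check (run python inside the activated virtual environment):

```python
import torch

# Expect something like "2.1.0+cu121" and True on a correctly set-up VM.
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```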
Step 12: Install Dia and its Dependencies
Run the following command to install dia and its dependencies:
pip install -e .
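As a quick sanity check of the editable install, try importing the model class from Python; the dia.model import path is the one used in the upstream repository:

```python
# Run from the cloned dia directory with the .venv activated.
from dia.model import Dia  # assumed import path from the upstream repo

print("Dia imported successfully:", Dia)
```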
Step 13: Run the Gradio Web UI
Execute the following command to run the Gradio web UI:
python app.py
This will launch a Gradio interface at:
http://127.0.0.1:7860
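Note that this URL is local to the VM itself; the next step forwards the port over SSH so you can open it from your own machine.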
Step 14: Run SSH Port Forwarding Command to access the Gradio Web App
Run the following command on your local machine to forward the Gradio port (or any other port from your VM) over SSH; replace the port and IP below with the values shown for your instance:
ssh -p 40466 -L 7860:127.0.0.1:7860 root@38.29.145.28
Step 15: Access the Gradio Web App
Access the Gradio Web App in your browser at:
http://localhost:7860
Generate Output
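In the Gradio interface, paste a tagged script into the text box and click Generate. Speaker tags [S1] and [S2] switch voices, and parenthesized cues such as (laughs) are rendered as non-verbal sounds. For example:

```
[S1] Welcome back to the show! [S2] Thanks, it's great to be here. (laughs)
[S1] So, tell us about your new project.
```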
Conclusion
Dia-1.6B isn’t just another voice model — it’s a canvas for creative storytelling, expressive speech, and realistic dialogue generation. Whether you’re building interactive audio apps, experimenting with new forms of narration, or just curious about open speech technology, Dia gives you the freedom to build, tweak, and perform — all from your terminal.
With full control over scripts, speaker cues, and even non-verbal gestures like (laughs) or (sighs), you’re not just generating audio — you’re directing a conversation.
Thanks to its open-source nature, permissive Apache 2.0 license, and compatibility with everyday GPU hardware, Dia is one of the most accessible and expressive text-to-dialogue tools available today.
So go ahead — launch your VM, open the Gradio UI, and give your scripts a voice of their own.