What if your text-to-speech model could actually sound human, not just in clarity but in emotion, rhythm, and personality? That’s exactly what Orpheus TTS delivers. Built on a fine-tuned LLaMA backbone, this open-source model takes speech generation to a whole new level. It doesn’t just read text; it speaks it like a real person, with expressive tone, natural pacing, and guided emotional depth. Whether you want to clone voices without training data, add emotion tags for dramatic delivery, use it in real-time apps with very low latency, or adapt it to your own voice, Orpheus has you covered. It even outperforms many closed-source tools in realism and control, making it a powerful option for developers, creators, and AI enthusiasts.
In this guide, we’ll show you how to get Orpheus TTS running locally or on GPU-powered resources in just a few simple steps.
Prerequisites
The minimum system requirements for this use case are:
- GPUs: RTX 4090 or RTX A6000
- Disk Space: 100 GB
- RAM: At least 8 GB
- Anaconda installed
Note: The prerequisites for this are highly variable across use cases. A high-end configuration could be used for a large-scale deployment.
Step-by-step process to install and run Orpheus TTS locally
For the purpose of this tutorial, we’ll use a GPU-powered Virtual Machine by NodeShift since it provides high compute Virtual Machines at a very affordable cost on a scale that meets GDPR, SOC2, and ISO27001 requirements. Also, it offers an intuitive and user-friendly interface, making it easier for beginners to get started with Cloud deployments. However, feel free to use any cloud provider of your choice and follow the same steps for the rest of the tutorial.
Step 1: Setting up a NodeShift Account
Visit app.nodeshift.com and create an account by filling in basic details, or continue signing up with your Google/GitHub account.
If you already have an account, log in and head straight to your dashboard.
Step 2: Create a GPU Node
After accessing your account, you should see a dashboard (see image). Now:
- Navigate to the menu on the left side.
- Click on the GPU Nodes option.
- Click on Start to start creating your very first GPU node.
These GPU nodes are GPU-powered virtual machines by NodeShift. They are highly customizable and let you control different configuration options, such as GPUs (ranging from H100s to A100s), CPUs, RAM, and storage, according to your needs.
Step 3: Selecting configuration for GPU (model, region, storage)
- For this tutorial, we’ll be using the RTX 4090 GPU; however, you can choose any GPU based on your needs.
- Similarly, we’ll opt for 200 GB storage by sliding the bar. You can also select the region where you want your GPU to reside from the available options.
Step 4: Choose GPU Configuration and Authentication method
1. After selecting your required configuration options, you’ll see the available GPU nodes in your region that match (or come close to) your configuration. In our case, we’ll choose a 1x RTX A6000 48GB GPU node with 64 vCPUs, 63 GB RAM, and a 200 GB SSD.
2. Next, you’ll need to select an authentication method. Two methods are available: Password and SSH Key. We recommend using SSH keys, as they are the more secure option. To create one, head over to our official documentation.
Step 5: Choose an Image
The final step is to choose an image for the VM, which in our case is Nvidia CUDA, on which we’ll deploy and run inference for our model.
That’s it! You are now ready to deploy the node. Finalize the configuration summary, and if it looks good, click Create to deploy the node.
Step 6: Connect to active Compute Node using SSH
- As soon as you create the node, it will be deployed in a few seconds or a minute. Once deployed, you will see a status Running in green, meaning that our Compute node is ready to use!
- Once your GPU shows this status, navigate to the three dots on the right and click on Connect with SSH. This will open a pop-up box with the Host details. Copy and paste that in your local terminal to connect to the remote server via SSH.
Once you’ve copied the details, follow the steps below to connect to the running GPU VM via SSH (a sample command format is shown after the output):
1. Open your terminal, paste the SSH command, and run it.
2. In some cases, your terminal may ask for your consent before connecting. Enter ‘yes’.
3. A prompt will request a password. Type the SSH password, and you should be connected.
Output:
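For reference, the SSH command copied from the dashboard generally has a form like the one below; the username, IP address, and port here are placeholders, so use the exact values from your node’s Connect with SSH pop-up:
ssh root@<node-ip> -p <port>
If you chose SSH key authentication, you can also pass your private key explicitly with -i ~/.ssh/<your-key>.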
Next, if you want to check the GPU details, run the following command in the terminal:
nvidia-smi
Output:
Step 7: Set up the project environment with dependencies
1. Create a virtual environment using Anaconda.
conda create -n tts python=3.11 && conda activate tts
Output:
2. Once you’re inside the environment, install the project dependencies as listed below (a quick GPU sanity check is shown after the installation output).
pip install torch torchvision torchaudio einops timm pillow
pip install git+https://github.com/huggingface/transformers
pip install git+https://github.com/huggingface/accelerate
pip install git+https://github.com/huggingface/diffusers
pip install huggingface_hub
pip install sentencepiece bitsandbytes protobuf decord
pip install librosa peft numpy
Output:
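Optionally, before moving on, you can run a quick sanity check to confirm that PyTorch was installed with CUDA support and can see the GPU. This one-liner is just an illustrative check, not part of the official setup:
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
If it prints True, the GPU-enabled build of PyTorch is working; if it prints False, revisit the torch installation before continuing.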
3. Clone the official repository of Orpheus TTS and move inside the project directory.
git clone https://github.com/canopyai/Orpheus-TTS.git && cd Orpheus-TTS
Output:
4. Install Orpheus’s Python package (a quick import check is shown after the output).
pip install orpheus-speech
Output:
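To quickly confirm that the package was installed correctly, you can try importing the class we’ll use in the next step. This is just a convenience check, not an official step:
python3 -c "from orpheus_tts import OrpheusModel; print('orpheus-speech import OK')"
If the import fails, re-run the installation commands above before moving on.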
Step 8: Download and Run the model
1. Since this is a gated model, we’ll first need to log in to huggingface-cli with our access token. Replace <YOUR_HF_TOKEN> with your HF READ token. (If you prefer logging in from Python instead, see the alternative after the output.)
huggingface-cli login --token=<YOUR_HF_TOKEN>
Output:
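If you’d rather not use the CLI, the same login can be done with the huggingface_hub package installed earlier. This one-liner is just a sketch of an equivalent approach; replace the placeholder token as before:
python3 -c "from huggingface_hub import login; login(token='<YOUR_HF_TOKEN>')"
Either way, the token is saved locally so the gated checkpoints can be downloaded in the next step.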
2. To download the model checkpoints and run inference in one go, we’ll write the starter code in the project.
For this, if you’re using a remote server (e.g., a NodeShift GPU node), you’ll first need to connect your local VS Code editor to the remote server via SSH with the following steps (an example SSH config entry is shown after the list):
a) Install the “Remote-SSH” Extension by Microsoft on VS Code.
b) Type “Remote-SSH: Connect to Host” in the Command Palette.
c) Enter the host details, such as username and SSH password, and you should be connected.
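Alternatively, instead of typing the host details each time, you can add an entry to your local ~/.ssh/config file so the node shows up as a named host in the Remote-SSH prompt. The values below are placeholders; substitute your own node’s details:
Host nodeshift-orpheus
    HostName <node-ip>
    User root
    Port <port>
After saving this, selecting “nodeshift-orpheus” in “Remote-SSH: Connect to Host” connects you directly to the VM.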
3. Create a file named app.py in the root of the project and paste the below code snippet in it.
from orpheus_tts import OrpheusModel
import wave
import time

model = OrpheusModel(model_name="canopylabs/orpheus-tts-0.1-finetune-prod")
prompt = '''Man, the way social media has, um, completely changed how we interact is just wild, right? Like, we’re all connected 24/7 but somehow people feel more alone than ever. And don’t even get me started on how it’s messing with kids’ self-esteem and mental health and whatnot.'''

start_time = time.monotonic()
syn_tokens = model.generate_speech(
    prompt=prompt,
    voice="tara",
)

with wave.open("output.wav", "wb") as wf:
    wf.setnchannels(1)      # mono
    wf.setsampwidth(2)      # 16-bit samples
    wf.setframerate(24000)  # 24 kHz sample rate

    total_frames = 0
    chunk_counter = 0
    for audio_chunk in syn_tokens:  # output streaming
        chunk_counter += 1
        frame_count = len(audio_chunk) // (wf.getsampwidth() * wf.getnchannels())
        total_frames += frame_count
        wf.writeframes(audio_chunk)

    duration = total_frames / wf.getframerate()
    end_time = time.monotonic()
    print(f"It took {end_time - start_time} seconds to generate {duration:.2f} seconds of audio")
The file looks like this:
4. Run the app.py file in the terminal.
python3 app.py
Now, when you run the above file, you might encounter an error like this:
The official repository has acknowledged this known bug and suggests running the following command before running the code:
pip install vllm==0.7.3
Output:
Once you install the above-mentioned version of vllm, run python3 app.py again, and it should now successfully start downloading the model checkpoints:
Once the model has finished, you’ll find the output file named “output.wav” inside your project directory in VS Code. You can click and play the audio file to listen to the generated speech.
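From here, you can also try the guided emotion control mentioned at the start. The snippet below is a small illustrative variation of app.py; the exact emotion tags and voice names that are supported (for example <sigh>, <laugh>, or the “leo” voice used here) are listed on the model card for canopylabs/orpheus-tts-0.1-finetune-prod, so check that page and adjust accordingly:
from orpheus_tts import OrpheusModel
import wave

model = OrpheusModel(model_name="canopylabs/orpheus-tts-0.1-finetune-prod")

# Emotion tags are written inline in the prompt text; the tag and voice names
# below are examples and should be checked against the model card.
prompt = "So, I finally finished that report <sigh> and honestly, I thought it would never end <laugh>."

syn_tokens = model.generate_speech(prompt=prompt, voice="leo")

with wave.open("output_emotive.wav", "wb") as wf:
    wf.setnchannels(1)      # mono
    wf.setsampwidth(2)      # 16-bit samples
    wf.setframerate(24000)  # 24 kHz, same as the earlier script
    for audio_chunk in syn_tokens:
        wf.writeframes(audio_chunk)
Save it (for example as emotive.py), run it with python3 just like before, and compare output_emotive.wav with the earlier output.wav to hear how the tags change the delivery.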
Conclusion
Orpheus TTS opens up a new world of possibilities for anyone looking to generate realistic, expressive speech, from developers building voice interfaces to creators experimenting with storytelling and voice cloning. In this guide, we walked through its straightforward installation process, after which you can explore its powerful capabilities such as zero-shot voice cloning, guided emotion control, and real-time streaming. If you’re looking to go beyond local experimentation and scale with ease, NodeShift Cloud offers a seamless way to run Orpheus TTS on high-performance GPU instances: no infrastructure setup headaches, just plug and play in a few clicks.