Fish Speech V1.5 is an advanced text-to-speech system designed to deliver natural and accurate vocal synthesis across multiple languages. Trained on over a million hours of audio data, it supports languages like English, Chinese, Japanese, German, and more, ensuring versatility and precision. With features like voice cloning from short samples, seamless multilingual support, and no reliance on phonetic scripts, Fish Speech excels in usability and performance. It is optimized for speed and accuracy, achieving remarkably low error rates. Users can interact with it through a web-based interface or a graphical desktop application, making it highly accessible for various deployment needs.
Model Resources
Hugging Face
Link: https://huggingface.co/fishaudio/fish-speech-1.5
GitHub
Link: https://github.com/fishaudio/fish-speech
Prerequisites for Installing Fish Speech 1.5 Locally
Make sure you have the following:
- GPU: 1x RTX A6000 (for smooth execution).
- Disk Space: 40 GB free.
- RAM: 48 GB (24 GB also works, but we use 48 GB for smooth execution).
- CPU: 48 cores (24 cores also work, but we use 48 for smooth execution).
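Once the VM is up, the minimums above can be sanity-checked from a shell (a sketch for a Linux machine; the nvidia-smi query flags are standard, and the GPU check is simply skipped when no NVIDIA driver is present):

```shell
# Sanity-check the prerequisites on a Linux VM (sketch).
nproc                                              # CPU core count
free -g | awk '/^Mem:/ {print $2 " GB RAM"}'       # total RAM
df -h / | tail -1 | awk '{print $4 " free on /"}'  # free disk space
# GPU model and VRAM -- only if the NVIDIA driver is present
command -v nvidia-smi >/dev/null \
  && nvidia-smi --query-gpu=name,memory.total --format=csv,noheader \
  || true
```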
Step-by-Step Process to Install Fish Speech 1.5 Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side. Select the GPU Nodes option, create a GPU Node in the Dashboard, click the Create GPU Node button, and create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Fish Speech 1.5 on an NVIDIA CUDA Virtual Machine. CUDA, NVIDIA's proprietary parallel computing platform, will allow you to install Fish Speech 1.5 on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH or direct SSH command to connect.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Miniconda & Packages
After completing the steps above, install Miniconda.
Miniconda is a free minimal installer for conda. It allows the management and installation of Python packages.
Anaconda has over 1,500 pre-installed packages, making it a comprehensive solution for data science projects. On the other hand, Miniconda allows you to install only the packages you need, reducing unnecessary clutter in your environment.
We highly recommend installing Python using Miniconda. Miniconda comes with Python and a small number of essential packages. Additional packages can be installed using the package management systems Mamba or Conda.
For Linux/macOS:
Download the Miniconda installer script:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
Run the installer script:
bash Miniconda3-latest-Linux-x86_64.sh
For Windows:
- Download the Windows Miniconda installer from the official website.
- Run the installer and follow the installation prompts.
After installing Miniconda, you will see the following message:
Thank you for installing Miniconda3!
This means Miniconda is installed in your working directory or on your operating system.
Step 9: Activate Conda and Perform a Version Check
After the installation process, activate Conda using the following command:
conda init
source ~/.bashrc
Also, check the version of Conda using the following command:
conda --version
Step 10: Create and Activate Your Environment
Create a Conda Environment using the following command:
conda create -n fish python=3.11 -y
- conda create: This command creates a new Conda environment.
- -n fish: The -n flag specifies the name of the environment; here, fish is the name of the environment you're creating. You can name it anything you like.
- python=3.11: This specifies the version of Python to install in the environment, in this case Python 3.11.
- -y: This flag automatically answers "yes" to all prompts during the creation process, so the environment is created without asking for further confirmation.
Activating the Conda Environment
Run the following command to activate the Conda Environment:
conda activate fish
conda activate fish: This command activates the environment you just created. Once activated, any Python-related commands or installations will be isolated to this environment and won't affect other environments or your global Python installation.
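A quick way to confirm the activation took effect is to check which interpreter is now on the PATH (a sketch; inside the fish environment, python should resolve to the env's own Python 3.11):

```shell
# With the fish environment active, `python` should resolve inside it,
# e.g. ~/miniconda3/envs/fish/bin/python reporting Python 3.11.x.
which python || which python3
python --version 2>/dev/null || python3 --version
```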
Step 11: Clone the Repository
Run the following command in the terminal to clone the repository:
git clone https://github.com/fishaudio/fish-speech.git
Then, run the following command in terminal to navigate to the main project directory:
cd fish-speech
Step 12: Login Using Your Hugging Face API Token
Use the huggingface-cli (part of the huggingface_hub package) to log in directly from the terminal.
Run the following command to log in:
huggingface-cli login
Then paste your token and press Enter. Note that the token will not be visible as you type or paste it; press Enter to submit it.
After entering the token, you will see the following output:
Login Successful.
The current active token is (your_token_name).
Check the screenshot below for reference.
How to Generate a Hugging Face Token
- Create an Account: Go to the Hugging Face website and sign up for an account if you don’t already have one.
- Access Settings: After logging in, click on your profile photo in the top right corner and select “Settings.”
- Navigate to Access Tokens: In the settings menu, find and click on the “Access Tokens” tab.
- Generate a New Token: Click the "New token" button, provide a name for your token, and choose a role (either read or write).
- Generate and Copy Token: Click the “Generate a token” button. Your new token will appear; click “Show” to view it and copy it for use in your applications.
- Secure Your Token: Ensure you keep your token secure and do not expose it in public code repositories.
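If you script your setup, the interactive prompt can be skipped: huggingface-cli login also accepts a --token flag. This is a sketch, and the HF_TOKEN value is a placeholder you must export yourself:

```shell
# Non-interactive login sketch. Export your real token first, e.g.:
#   export HF_TOKEN=hf_your_token_here   # placeholder, not a real token
if [ -n "${HF_TOKEN:-}" ] && command -v huggingface-cli >/dev/null; then
  huggingface-cli login --token "${HF_TOKEN}"
else
  echo "Set HF_TOKEN and install huggingface_hub first"
fi
```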
Step 13: Download Model and Dataset
Run the following command to download the model and dataset:
huggingface-cli download fishaudio/fish-speech-1.5 --local-dir checkpoints/fish-speech-1.5/
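When the download finishes, it is worth confirming the files landed where the server command in Step 22 expects them (a sketch; the generator .pth filename is the one passed to --decoder-checkpoint-path later):

```shell
# Verify the checkpoint directory that run_webui.py will read in Step 22.
CKPT=checkpoints/fish-speech-1.5
if [ -f "$CKPT/firefly-gan-vq-fsq-8x1024-21hz-generator.pth" ]; then
  echo "decoder checkpoint present"
  ls -lh "$CKPT"
else
  echo "checkpoints missing -- re-run the huggingface-cli download"
fi
```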
Step 14: Install Libportaudio2
Run the following command to install libportaudio2:
sudo apt-get install libportaudio2
Step 15: Install Package in a Conda Environment
Run the following command to install the package in a Conda environment:
conda install -c anaconda pyaudio
Step 16: Install Torch and Other Libraries
Run the following command to install torch and other libraries:
pip install torch torchaudio einops timm pillow
Step 17: Install Transformers
Run the following command to install transformers from source:
pip install git+https://github.com/huggingface/transformers
Step 18: Install Accelerate
Run the following command to install accelerate from source:
pip install git+https://github.com/huggingface/accelerate
Step 19: Install Diffusers
Run the following command to install diffusers from source:
pip install git+https://github.com/huggingface/diffusers
Step 20: Install Huggingface Hub
Run the following command to install huggingface_hub:
pip install huggingface_hub
Step 21: Install Other Libraries
Run the following command to install the remaining libraries:
pip install sentencepiece bitsandbytes protobuf decord
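With all the installs above done, a quick import check catches any broken wheel before you start the server (a sketch to run inside the fish environment; it only reports status and never raises):

```shell
# Report which of the key libraries import cleanly in this environment.
python3 - <<'EOF'
import importlib

for mod in ("torch", "torchaudio", "transformers",
            "accelerate", "diffusers", "huggingface_hub"):
    try:
        importlib.import_module(mod)
        print(f"{mod}: ok")
    except ImportError as exc:
        print(f"{mod}: MISSING ({exc})")
EOF
```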
Step 22: Start the Server
Run the following command to start the server:
python3 tools/run_webui.py \
--llama-checkpoint-path checkpoints/fish-speech-1.5 \
--decoder-checkpoint-path checkpoints/fish-speech-1.5/firefly-gan-vq-fsq-8x1024-21hz-generator.pth \
--compile
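The first start can take a while because the --compile flag triggers model compilation. Once the server logs that it is running, you can probe it from another terminal on the VM (a sketch; 7860 is Gradio's default port, which run_webui.py is assumed to use here):

```shell
# Probe the Web UI once the server reports it is running.
curl -sf http://127.0.0.1:7860 >/dev/null \
  && echo "Web UI is up" \
  || echo "Web UI not reachable yet -- the server may still be starting"
```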
Step 23: SSH port forwarding
To forward local port 7860 on your Windows machine to port 7860 on the VM, run the following command in Command Prompt or PowerShell:
ssh -i C:\Users\Acer\.ssh\id_rsa -L 7860:127.0.0.1:7860 root@38.29.145.24 -p 40674
After running this command, you can access Fish Speech in your local browser at http://127.0.0.1:7860.
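The key path, IP address, and SSH port in the command above are specific to this tutorial's machine. A generic template looks like the sketch below; all values are placeholders for your own deployment, and it prints the command so you can inspect it before running it:

```shell
# Placeholders -- substitute your own key path, VM IP, and SSH port.
KEY_PATH="$HOME/.ssh/id_rsa"
VM_IP="203.0.113.10"   # placeholder address
SSH_PORT="40674"       # the SSH port shown on your instance page
LOCAL_PORT="7860"
echo ssh -i "$KEY_PATH" \
  -L "${LOCAL_PORT}:127.0.0.1:${LOCAL_PORT}" \
  "root@${VM_IP}" -p "$SSH_PORT"
```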
Conclusion
Fish Speech 1.5 is a groundbreaking open-source model from fishaudio that brings state-of-the-art text-to-speech capabilities to developers and researchers. Following this guide, you can quickly deploy Fish Speech 1.5 on a GPU-powered Virtual Machine with NodeShift and harness its full potential. NodeShift provides an accessible, secure, and affordable platform to run your AI models efficiently, making it an excellent choice for those experimenting with Fish Speech 1.5 and other cutting-edge AI tools.