Wan2.1 is a cutting-edge video generation model designed to push the boundaries of creativity and efficiency in multimedia production. Built with advanced techniques in visual storytelling, this model enables the seamless transformation of text into high-quality video content. With support for text-to-video, image-to-video, and video editing, Wan2.1 delivers exceptional results while remaining optimized for a wide range of computing environments. It is crafted to run efficiently on consumer-grade hardware, allowing individuals and teams to produce professional-level videos without requiring extensive resources. Whether used for content creation, animation, or research, Wan2.1 offers an innovative approach to video generation, making high-quality production more accessible than ever.
Model | Dimension | Input Dimension | Output Dimension | Feedforward Dimension | Frequency Dimension | Number of Heads | Number of Layers
---|---|---|---|---|---|---|---
1.3B | 1536 | 16 | 16 | 8960 | 256 | 12 | 30
14B | 5120 | 16 | 16 | 13824 | 256 | 40 | 40
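As a quick consistency check on the table above, the per-head dimension implied by each variant can be computed (a minimal sketch; all figures come straight from the table):

```python
# Transformer hyperparameters taken from the table above.
configs = {
    "1.3B": {"dim": 1536, "ffn_dim": 8960, "num_heads": 12, "num_layers": 30},
    "14B": {"dim": 5120, "ffn_dim": 13824, "num_heads": 40, "num_layers": 40},
}

for name, cfg in configs.items():
    head_dim = cfg["dim"] // cfg["num_heads"]
    print(name, "head_dim =", head_dim)  # both variants use 128-dim heads
```

Both model sizes keep the same 128-dimensional attention heads; the larger model simply scales up the number of heads and layers.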
Model Resources
- Hugging Face: https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B
- GitHub: https://github.com/Wan-Video/Wan2.1
Prerequisites for Installing Wan2.1-T2V-1.3B Model Locally
GPU
- Memory (VRAM):
- Minimum: 24GB (with memory-saving optimizations such as model offloading).
- Recommended: 48GB for faster, higher-resolution generation.
- Optimal: 80GB for high-resolution, longer videos.
- Type: NVIDIA GPUs with Tensor Cores (e.g., RTX 4090, A6000, A100, H100).
Disk Space
- Minimum: 100GB free SSD storage.
- Recommended: 200GB SSD for storing model checkpoints and generated videos.
RAM
- Minimum: 32GB.
- Recommended: 64GB for smooth processing.
CPU
- Minimum: 16 cores.
- Recommended: 32-64 cores for fast generation.
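To put the VRAM figures in context, here is a back-of-the-envelope sketch (not an official sizing guide): the model weights alone in fp16 occupy only a few gigabytes, and the much larger totals above account for activations, the VAE, the text encoder, and high-resolution frame buffers.

```python
# Back-of-the-envelope VRAM estimate for model weights only.
# Assumes fp16 storage (2 bytes per parameter); real usage is far
# higher because of activations, the VAE, and the text encoder.
def weight_vram_gb(num_params, bytes_per_param=2):
    return num_params * bytes_per_param / 1e9

print(round(weight_vram_gb(1.3e9), 1))  # ~2.6 GB for the 1.3B model
print(round(weight_vram_gb(14e9), 1))   # ~28.0 GB for the 14B model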
Step-by-Step Process to Install Wan2.1-T2V-1.3B Model Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side. Select the GPU Nodes option, create a GPU Node in the Dashboard, click the Create GPU Node button, and create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Wan2.1-T2V-1.3B on an NVIDIA CUDA Virtual Machine. NVIDIA's CUDA parallel computing platform provides the drivers and toolkit needed to install and run Wan2.1-T2V-1.3B on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Check the Available Python Version and Install a Newer Version
Run the following command to check the available Python version:
python3 --version
The system has Python 3.8.1 available by default. To install a newer version of Python, you'll need to use the deadsnakes PPA.
Run the following commands to add the deadsnakes PPA:
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
Step 9: Install Python 3.11
Now, run the following command to install Python 3.11 or another desired version:
sudo apt install -y python3.11 python3.11-distutils python3.11-venv
Step 10: Update the Default Python3 Version
Now, run the following commands to link the new Python version as the default python3:
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo update-alternatives --config python3
Then, run the following command to verify that the new Python version is active:
python3 --version
Step 11: Install and Update Pip
Run the following commands to install and upgrade pip:
python3 -m ensurepip --upgrade
python3 -m pip install --upgrade pip
Then, run the following command to check the version of pip:
pip --version
Step 12: Clone the Repository
Run the following command to clone the WAN2.1 repository:
git clone https://github.com/Wan-Video/Wan2.1.git
cd Wan2.1
Step 13: Install PyTorch
Run the following command to install PyTorch:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
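After installation, you can optionally confirm that PyTorch was installed and can see the GPU. The sketch below probes for the package first, so it degrades gracefully on a machine where the install has not completed yet:

```python
import importlib.util

# Probe for PyTorch before importing it, so the check also works
# on machines where the install has not finished.
if importlib.util.find_spec("torch") is None:
    print("PyTorch is not installed")
else:
    import torch
    print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
```

On the GPU Node, `CUDA available: True` confirms that the wheels match the installed driver; if it prints `False`, recheck the CUDA wheel index used in the pip command above.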
Step 14: Install Dependencies
Run the following command to install the dependencies:
pip install -r requirements.txt
Step 15: Install HuggingFace Hub
Run the following command to install the huggingface_hub:
pip install huggingface_hub
Step 16: Login Using Your Hugging Face API Token
Use the huggingface_hub CLI to log in directly from the terminal.
Run the following command to log in with huggingface-cli:
huggingface-cli login
Then, paste your token and press the Enter key. Note that the token will not be visible in the terminal as you type or paste it, so be sure to press Enter afterward to submit it.
After entering the token, you will see the following output:
Login Successful.
The current active token is (your_token_name).
Check the screenshot below for reference.
How to Generate a Hugging Face Token
- Create an Account: Go to the Hugging Face website and sign up for an account if you don’t already have one.
- Access Settings: After logging in, click on your profile photo in the top right corner and select “Settings.”
- Navigate to Access Tokens: In the settings menu, find and click on the “Access Tokens” tab.
- Generate a New Token: Click the “New token” button, provide a name for your token, and choose a role (either read or write).
- Generate and Copy Token: Click the “Generate a token” button. Your new token will appear; click “Show” to view it and copy it for use in your applications.
- Secure Your Token: Ensure you keep your token secure and do not expose it in public code repositories.
Step 17: Download Model Using Huggingface-CLI
Run the following command to download the model using huggingface-cli:
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir ./Wan2.1-T2V-1.3B
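Once the download finishes, a quick way to confirm the checkpoint directory is populated is to list its contents (a hypothetical helper; no assumptions are made here about the exact file names inside the checkpoint):

```python
from pathlib import Path

def list_checkpoint_files(ckpt_dir):
    """Return the sorted names of all files under a checkpoint directory."""
    return sorted(p.name for p in Path(ckpt_dir).rglob("*") if p.is_file())

# e.g. list_checkpoint_files("./Wan2.1-T2V-1.3B")
```

An empty list would indicate the download did not complete and should be re-run.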
Step 18: Navigate to the gradio Directory
Run the following command to change into the gradio folder inside the Wan2.1 directory:
cd gradio
Step 19: Start Gradio for the 1.3B Model
Since you are using the T2V-1.3B model, you need to point the Gradio demo to the correct checkpoint directory.
Run the following command:
python3 t2v_1.3B_singleGPU.py --ckpt_dir ../Wan2.1-T2V-1.3B
This will start a Gradio UI where you can input text prompts and generate videos.
Step 20: Access the Gradio Interface
After running the command, you should see an output like:
Running on local URL: http://127.0.0.1:7860
- You should see the Wan2.1 Gradio Interface.
- Enter a text prompt and start generating videos.
Step 21: SSH Port Forwarding
To forward local port 7860 on your Windows machine to port 7860 on the VM, run the following command in Command Prompt or PowerShell:
ssh -i C:\Users\Acer\.ssh\id_rsa -L 7860:localhost:7860 -p 22007 root@192.165.134.27
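With the tunnel open, you can verify from another local terminal that the forwarded port is reachable (a minimal sketch; 7860 is the Gradio default port used above):

```python
import socket

def port_open(host="127.0.0.1", port=7860, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. print(port_open())  # True once the tunnel and Gradio are both up
```

If this returns False, check that the SSH session is still running and that the Gradio demo on the VM reports `Running on local URL: http://127.0.0.1:7860`.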
Step 22: Access the Gradio Web UI Locally
After running the SSH command, open your local browser and visit:
http://127.0.0.1:7860
Step 23: Generate Video
Generated Video Link: https://drive.google.com/file/d/1TEq1IwLLeVc0iNeyuUgW7To-bACkZpZp/view?usp=sharing
Conclusion
Wan2.1-T2V-1.3B is a groundbreaking advancement in video generation, offering an efficient and accessible solution for transforming text into high-quality motion visuals. With its optimized performance for consumer-grade hardware, it enables a wide range of creative professionals, researchers, and developers to explore new possibilities in content production. The step-by-step setup ensures a seamless installation process, making it easy to deploy on various computing environments. By providing a structured workflow for generating videos through Gradio, this model simplifies video creation while maintaining high-resolution output. As an open and scalable framework, Wan2.1 sets a new benchmark in video synthesis, empowering users to craft dynamic visuals with precision and ease.