UI-TARS is a next-generation framework designed to automate interactions with graphical user interfaces across desktop, mobile, and web environments. By integrating perception, reasoning, action, and memory into a single vision-language model, it enables seamless execution of complex tasks without predefined workflows or manual intervention. UI-TARS is equipped with advanced multimodal processing, real-time interaction tracking, and a unified action framework, making it highly efficient for automation. Released under the Apache License 2.0, it is an open-source project that encourages contributions from developers looking to enhance automation capabilities in user interface interactions.
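To make the "unified action framework" idea concrete, here is a minimal sketch of parsing a function-call style action string into a structured action. This is illustrative only; the actual action names and grammar are defined in the UI-TARS repository, and the parser below is a simplified stand-in.

```python
import re
from dataclasses import dataclass, field

@dataclass
class UIAction:
    """One GUI action emitted by the agent, e.g. a click or a keystroke."""
    name: str                                 # action type, e.g. "click", "type"
    args: dict = field(default_factory=dict)  # parameters, e.g. target coordinates

def parse_action(text: str) -> UIAction:
    """Parse a call-style string such as "click(start_box='(100,200)')"
    into a UIAction. The grammar here is a simplified stand-in."""
    match = re.match(r"(\w+)\((.*)\)\s*$", text.strip())
    if not match:
        raise ValueError(f"Unrecognized action: {text!r}")
    name, arg_str = match.groups()
    # Collect key='value' pairs into a plain dict of string arguments.
    args = dict(re.findall(r"(\w+)='([^']*)'", arg_str))
    return UIAction(name=name, args=args)

action = parse_action("click(start_box='(100,200)')")
print(action.name, action.args)  # click {'start_box': '(100,200)'}
```

A production agent would go one step further and map each parsed action onto a real input event (mouse, keyboard, touch) for the target platform.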
Perception Capability Evaluation
Model | VisualWebBench | WebSRC | ScreenQA-short |
---|---|---|---|
Qwen2-VL-7B | 73.3 | 81.8 | 84.9 |
Qwen-VL-Max | 74.1 | 91.1 | 78.6 |
Gemini-1.5-Pro | 75.4 | 88.9 | 82.2 |
UIX-Qwen2-7B | 75.9 | 82.9 | 78.8 |
Claude-3.5-Sonnet | 78.2 | 90.4 | 83.1 |
GPT-4o | 78.5 | 87.7 | 82.3 |
UI-TARS-2B | 72.9 | 89.2 | 86.4 |
UI-TARS-7B | 79.7 | 93.6 | 87.7 |
UI-TARS-72B | 82.8 | 89.3 | 88.6 |
Grounding Capability Evaluation (ScreenSpot Pro)
Agent Model | Dev-Text | Dev-Icon | Dev-Avg | Creative-Text | Creative-Icon | Creative-Avg | CAD-Text | CAD-Icon | CAD-Avg | Scientific-Text | Scientific-Icon | Scientific-Avg | Office-Text | Office-Icon | Office-Avg | OS-Text | OS-Icon | OS-Avg | Avg-Text | Avg-Icon | Avg |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
QwenVL-7B | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7 | 0.0 | 0.4 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.1 |
GPT-4o | 1.3 | 0.0 | 0.7 | 1.0 | 0.0 | 0.6 | 2.0 | 0.0 | 1.5 | 2.1 | 0.0 | 1.2 | 1.1 | 0.0 | 0.9 | 0.0 | 0.0 | 0.0 | 1.3 | 0.0 | 0.8 |
SeeClick | 0.6 | 0.0 | 0.3 | 1.0 | 0.0 | 0.6 | 2.5 | 0.0 | 1.9 | 3.5 | 0.0 | 2.0 | 1.1 | 0.0 | 0.9 | 2.8 | 0.0 | 1.5 | 1.8 | 0.0 | 1.1 |
Qwen2-VL-7B | 2.6 | 0.0 | 1.3 | 1.5 | 0.0 | 0.9 | 0.5 | 0.0 | 0.4 | 6.3 | 0.0 | 3.5 | 3.4 | 1.9 | 3.0 | 0.9 | 0.0 | 0.5 | 2.5 | 0.2 | 1.6 |
OS-Atlas-4B | 7.1 | 0.0 | 3.7 | 3.0 | 1.4 | 2.3 | 2.0 | 0.0 | 1.5 | 9.0 | 5.5 | 7.5 | 5.1 | 3.8 | 4.8 | 5.6 | 0.0 | 3.1 | 5.0 | 1.7 | 3.7 |
ShowUI-2B | 16.9 | 1.4 | 9.4 | 9.1 | 0.0 | 5.3 | 2.5 | 0.0 | 1.9 | 13.2 | 7.3 | 10.6 | 15.3 | 7.5 | 13.5 | 10.3 | 2.2 | 6.6 | 10.8 | 2.6 | 7.7 |
CogAgent-18B | 14.9 | 0.7 | 8.0 | 9.6 | 0.0 | 5.6 | 7.1 | 3.1 | 6.1 | 22.2 | 1.8 | 13.4 | 13.0 | 0.0 | 10.0 | 5.6 | 0.0 | 3.1 | 12.0 | 0.8 | 7.7 |
Aria-UI | 16.2 | 0.0 | 8.4 | 23.7 | 2.1 | 14.7 | 7.6 | 1.6 | 6.1 | 27.1 | 6.4 | 18.1 | 20.3 | 1.9 | 16.1 | 4.7 | 0.0 | 2.6 | 17.1 | 2.0 | 11.3 |
UGround-7B | 26.6 | 2.1 | 14.7 | 27.3 | 2.8 | 17.0 | 14.2 | 1.6 | 11.1 | 31.9 | 2.7 | 19.3 | 31.6 | 11.3 | 27.0 | 17.8 | 0.0 | 9.7 | 25.0 | 2.8 | 16.5 |
Claude Computer Use | 22.0 | 3.9 | 12.6 | 25.9 | 3.4 | 16.8 | 14.5 | 3.7 | 11.9 | 33.9 | 15.8 | 25.8 | 30.1 | 16.3 | 26.9 | 11.0 | 4.5 | 8.1 | 23.4 | 7.1 | 17.1 |
OS-Atlas-7B | 33.1 | 1.4 | 17.7 | 28.8 | 2.8 | 17.9 | 12.2 | 4.7 | 10.3 | 37.5 | 7.3 | 24.4 | 33.9 | 5.7 | 27.4 | 27.1 | 4.5 | 16.8 | 28.1 | 4.0 | 18.9 |
UGround-V1-7B | – | – | 35.5 | – | – | 27.8 | – | – | 13.5 | – | – | 38.8 | – | – | 48.8 | – | – | 26.1 | – | – | 31.1 |
UI-TARS-2B | 47.4 | 4.1 | 26.4 | 42.9 | 6.3 | 27.6 | 17.8 | 4.7 | 14.6 | 56.9 | 17.3 | 39.8 | 50.3 | 17.0 | 42.6 | 21.5 | 5.6 | 14.3 | 39.6 | 8.4 | 27.7 |
UI-TARS-7B | 58.4 | 12.4 | 36.1 | 50.0 | 9.1 | 32.8 | 20.8 | 9.4 | 18.0 | 63.9 | 31.8 | 50.0 | 63.3 | 20.8 | 53.5 | 30.8 | 16.9 | 24.5 | 47.8 | 16.2 | 35.7 |
UI-TARS-72B | 63.0 | 17.3 | 40.8 | 57.1 | 15.4 | 39.6 | 18.8 | 12.5 | 17.2 | 64.6 | 20.9 | 45.7 | 63.3 | 26.4 | 54.8 | 42.1 | 15.7 | 30.1 | 50.9 | 17.5 | 38.1 |
ScreenSpot
Method | Mobile-Text | Mobile-Icon/Widget | Desktop-Text | Desktop-Icon/Widget | Web-Text | Web-Icon/Widget | Avg |
---|---|---|---|---|---|---|---|
Agent Framework | | | | | | | |
GPT-4 (SeeClick) | 76.6 | 55.5 | 68.0 | 28.6 | 40.9 | 23.3 | 48.8 |
GPT-4 (OmniParser) | 93.9 | 57.0 | 91.3 | 63.6 | 81.3 | 51.0 | 73.0 |
GPT-4 (UGround-7B) | 90.1 | 70.3 | 87.1 | 55.7 | 85.7 | 64.6 | 75.6 |
GPT-4o (SeeClick) | 81.0 | 59.8 | 69.6 | 33.6 | 43.9 | 26.2 | 52.3 |
GPT-4o (UGround-7B) | 93.4 | 76.9 | 92.8 | 67.9 | 88.7 | 68.9 | 81.4 |
Agent Model | | | | | | | |
GPT-4 | 22.6 | 24.5 | 20.2 | 11.8 | 9.2 | 8.8 | 16.2 |
GPT-4o | 20.2 | 24.9 | 21.1 | 23.6 | 12.2 | 7.8 | 18.3 |
CogAgent | 67.0 | 24.0 | 74.2 | 20.0 | 70.4 | 28.6 | 47.4 |
SeeClick | 78.0 | 52.0 | 72.2 | 30.0 | 55.7 | 32.5 | 53.4 |
Qwen2-VL | 75.5 | 60.7 | 76.3 | 54.3 | 35.2 | 25.7 | 55.3 |
UGround-7B | 82.8 | 60.3 | 82.5 | 63.6 | 80.4 | 70.4 | 73.3 |
Aguvis-G-7B | 88.3 | 78.2 | 88.1 | 70.7 | 85.7 | 74.8 | 81.8 |
OS-Atlas-7B | 93.0 | 72.9 | 91.8 | 62.9 | 90.9 | 74.3 | 82.5 |
Claude Computer Use | – | – | – | – | – | – | 83.0 |
Gemini 2.0 (Project Mariner) | – | – | – | – | – | – | 84.0 |
Aguvis-7B | 95.6 | 77.7 | 93.8 | 67.1 | 88.3 | 75.2 | 84.4 |
Aguvis-72B | 94.5 | 85.2 | 95.4 | 77.9 | 91.3 | 85.9 | 89.2 |
Our Model | | | | | | | |
UI-TARS-2B | 93.0 | 75.5 | 90.7 | 68.6 | 84.3 | 74.8 | 82.3 |
UI-TARS-7B | 94.5 | 85.2 | 95.9 | 85.7 | 90.0 | 83.5 | 89.5 |
UI-TARS-72B | 94.9 | 82.5 | 89.7 | 88.6 | 88.7 | 85.0 | 88.4 |
ScreenSpot v2
Method | Mobile-Text | Mobile-Icon/Widget | Desktop-Text | Desktop-Icon/Widget | Web-Text | Web-Icon/Widget | Avg |
---|---|---|---|---|---|---|---|
Agent Framework | | | | | | | |
GPT-4o (SeeClick) | 85.2 | 58.8 | 79.9 | 37.1 | 72.7 | 30.1 | 63.6 |
GPT-4o (OS-Atlas-4B) | 95.5 | 75.8 | 79.4 | 49.3 | 90.2 | 66.5 | 79.1 |
GPT-4o (OS-Atlas-7B) | 96.2 | 83.4 | 89.7 | 69.3 | 94.0 | 79.8 | 87.1 |
Agent Model | | | | | | | |
SeeClick | 78.4 | 50.7 | 70.1 | 29.3 | 55.2 | 32.5 | 55.1 |
OS-Atlas-4B | 87.2 | 59.7 | 72.7 | 46.4 | 85.9 | 63.1 | 71.9 |
OS-Atlas-7B | 95.2 | 75.8 | 90.7 | 63.6 | 90.6 | 77.3 | 84.1 |
Our Model | | | | | | | |
UI-TARS-2B | 95.2 | 79.1 | 90.7 | 68.6 | 87.2 | 78.3 | 84.7 |
UI-TARS-7B | 96.9 | 89.1 | 95.4 | 85.0 | 93.6 | 85.2 | 91.6 |
UI-TARS-72B | 94.8 | 86.3 | 91.2 | 87.9 | 91.5 | 87.7 | 90.3 |
Multimodal Mind2Web
Method | Cross-Task Ele.Acc | Cross-Task Op.F1 | Cross-Task Step SR | Cross-Website Ele.Acc | Cross-Website Op.F1 | Cross-Website Step SR | Cross-Domain Ele.Acc | Cross-Domain Op.F1 | Cross-Domain Step SR |
---|---|---|---|---|---|---|---|---|---|
Agent Framework | | | | | | | | | |
GPT-4o (SeeClick) | 32.1 | – | – | 33.1 | – | – | 33.5 | – | – |
GPT-4o (UGround) | 47.7 | – | – | 46.0 | – | – | 46.6 | – | – |
GPT-4o (Aria-UI) | 57.6 | – | – | 57.7 | – | – | 61.4 | – | – |
GPT-4V (OmniParser) | 42.4 | 87.6 | 39.4 | 41.0 | 84.8 | 36.5 | 45.5 | 85.7 | 42.0 |
Agent Model | | | | | | | | | |
GPT-4o | 5.7 | 77.2 | 4.3 | 5.7 | 79.0 | 3.9 | 5.5 | 86.4 | 4.5 |
GPT-4 (SOM) | 29.6 | – | 20.3 | 20.1 | – | 13.9 | 27.0 | – | 23.7 |
GPT-3.5 (Text-only) | 19.4 | 59.2 | 16.8 | 14.9 | 56.5 | 14.1 | 25.2 | 57.9 | 24.1 |
GPT-4 (Text-only) | 40.8 | 63.1 | 32.3 | 30.2 | 61.0 | 27.0 | 35.4 | 61.9 | 29.7 |
Claude | 62.7 | 84.7 | 53.5 | 59.5 | 79.6 | 47.7 | 64.5 | 85.4 | 56.4 |
Aguvis-7B | 64.2 | 89.8 | 60.4 | 60.7 | 88.1 | 54.6 | 60.4 | 89.2 | 56.6 |
CogAgent | – | – | 62.3 | – | – | 54.0 | – | – | 59.4 |
Aguvis-72B | 69.5 | 90.8 | 64.0 | 62.6 | 88.6 | 56.5 | 63.5 | 88.5 | 58.2 |
Our Model | | | | | | | | | |
UI-TARS-2B | 62.3 | 90.0 | 56.3 | 58.5 | 87.2 | 50.8 | 58.8 | 89.6 | 52.3 |
UI-TARS-7B | 73.1 | 92.2 | 67.1 | 68.2 | 90.9 | 61.7 | 66.6 | 90.9 | 60.5 |
UI-TARS-72B | 74.7 | 92.5 | 68.6 | 72.4 | 91.2 | 63.5 | 68.9 | 91.8 | 62.1 |
Android Control and GUI Odyssey
Agent Models | AndroidControl-Low Type | AndroidControl-Low Grounding | AndroidControl-Low SR | AndroidControl-High Type | AndroidControl-High Grounding | AndroidControl-High SR | GUIOdyssey Type | GUIOdyssey Grounding | GUIOdyssey SR |
---|---|---|---|---|---|---|---|---|---|
Claude | 74.3 | 0.0 | 19.4 | 63.7 | 0.0 | 12.5 | 60.9 | 0.0 | 3.1 |
GPT-4o | 74.3 | 0.0 | 19.4 | 66.3 | 0.0 | 20.8 | 34.3 | 0.0 | 3.3 |
SeeClick | 93.0 | 73.4 | 75.0 | 82.9 | 62.9 | 59.1 | 71.0 | 52.4 | 53.9 |
InternVL-2-4B | 90.9 | 84.1 | 80.1 | 84.1 | 72.7 | 66.7 | 82.1 | 55.5 | 51.5 |
Qwen2-VL-7B | 91.9 | 86.5 | 82.6 | 83.8 | 77.7 | 69.7 | 83.5 | 65.9 | 60.2 |
Aria-UI | — | 87.7 | 67.3 | — | 43.2 | 10.2 | — | 86.8 | 36.5 |
OS-Atlas-4B | 91.9 | 83.8 | 80.6 | 84.7 | 73.8 | 67.5 | 83.5 | 61.4 | 56.4 |
OS-Atlas-7B | 93.6 | 88.0 | 85.2 | 85.2 | 78.5 | 71.2 | 84.5 | 67.8 | 62.0 |
Aguvis-7B | — | — | 80.5 | — | — | 61.5 | — | — | — |
Aguvis-72B | — | — | 84.4 | — | — | 66.4 | — | — | — |
UI-TARS-2B | 98.1 | 87.3 | 89.3 | 81.2 | 78.4 | 68.9 | 93.9 | 86.8 | 83.4 |
UI-TARS-7B | 98.0 | 89.3 | 90.8 | 83.7 | 80.5 | 72.5 | 94.6 | 90.1 | 87.0 |
UI-TARS-72B | 98.1 | 89.9 | 91.3 | 85.2 | 81.5 | 74.7 | 95.4 | 91.4 | 88.6 |
Online Agent Capability Evaluation
Method | OSWorld (Online) | AndroidWorld (Online) |
---|---|---|
Agent Framework | | |
GPT-4o (UGround) | – | 32.8 |
GPT-4o (Aria-UI) | 15.2 | 44.8 |
GPT-4o (Aguvis-7B) | 14.8 | 37.1 |
GPT-4o (Aguvis-72B) | 17.0 | – |
GPT-4o (OS-Atlas-7B) | 14.6 | – |
Agent Model | | |
GPT-4o | 5.0 | 34.5 (SoM) |
Gemini-Pro-1.5 | 5.4 | 22.8 (SoM) |
Aguvis-72B | 10.3 | 26.1 |
Claude Computer-Use | 14.9 (15 steps) | 27.9 |
Claude Computer-Use | 22.0 (50 steps) | – |
Our Model | | |
UI-TARS-7B-SFT | 17.7 (15 steps) | 33.0 |
UI-TARS-7B-DPO | 18.7 (15 steps) | – |
UI-TARS-72B-SFT | 18.8 (15 steps) | 46.6 |
UI-TARS-72B-DPO | 22.7 (15 steps) | – |
UI-TARS-72B-DPO | 24.6 (50 steps) | – |
Model Resources
Hugging Face
Link: https://huggingface.co/bytedance-research/UI-TARS-7B-DPO
GitHub
Link: https://github.com/bytedance/UI-TARS
Prerequisites for Installing ByteDance UI-TARS 7B DPO – GUI Agent Model Locally
Make sure you have the following:
- GPU: 1x RTX A6000 (for smooth execution)
- Disk Space: 200 GB free
- RAM: 48 GB
- CPU: 48 cores
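The list above can be checked programmatically before installing anything. This stdlib-only pre-flight sketch is illustrative (the thresholds are copied from the list; the GPU check is omitted since it would need `nvidia-smi` or torch):

```python
import os
import shutil

# Pre-flight check against the recommended specs above (illustrative thresholds).
free_gb = shutil.disk_usage("/").free / 1e9
cores = os.cpu_count() or 0

print(f"Free disk: {free_gb:.0f} GB (recommended: 200 GB)")
print(f"CPU cores: {cores} (recommended: 48)")
if free_gb < 200 or cores < 48:
    print("Warning: below the recommended specs; expect slower execution.")
```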
Step-by-Step Process to Install ByteDance UI-TARS 7B DPO – GUI Agent Model Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, and click the Create GPU Node button in the Dashboard to create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy ByteDance UI-TARS 7B DPO – GUI Agent Model on an NVIDIA CUDA Virtual Machine, which ships with the GPU drivers and CUDA toolkit required to install and run the model on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Check the Available Python Version and Install a Newer Version
Run the following command to check the Python version currently available:
python3 --version
The system has Python 3.8.1 available by default. To install a newer version of Python, you'll need to use the deadsnakes PPA.
Run the following commands to add the deadsnakes PPA:
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
Step 9: Install Python 3.11
Now, run the following command to install Python 3.11 or another desired version:
sudo apt install -y python3.11 python3.11-distutils python3.11-venv
Step 10: Update the Default Python3 Version
Now, run the following commands to register the new Python version and select it as the default python3:
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo update-alternatives --config python3
Then, run the following command to verify that the new Python version is active:
python3 --version
Step 11: Install and Update Pip
Run the following commands to install and upgrade pip:
python3 -m ensurepip --upgrade
python3 -m pip install --upgrade pip
Then, run the following command to check the version of pip:
pip --version
Step 12: Clone the Repository
Run the following commands to clone the UI-TARS repository from GitHub and enter it:
git clone https://github.com/bytedance/UI-TARS.git
cd UI-TARS
Step 13: Create and Activate a Virtual Environment
It is recommended to use a virtual environment to manage dependencies. Run the following commands to create and activate one:
python3 -m venv uitars_env
source uitars_env/bin/activate
Step 14: Install Torch and Other Libraries
Run the following command to install PyTorch and other core libraries:
pip install torch torchaudio einops timm pillow
Step 15: Install Transformers
Run the following command to install Transformers from source:
pip install git+https://github.com/huggingface/transformers
Step 16: Install Accelerate
Run the following command to install Accelerate from source:
pip install git+https://github.com/huggingface/accelerate
Step 17: Install Diffusers
Run the following command to install Diffusers from source:
pip install git+https://github.com/huggingface/diffusers
Step 18: Install Huggingface Hub
Run the following command to install the Hugging Face Hub client:
pip install huggingface_hub
Step 19: Install Other Libraries
Run the following command to install the remaining libraries:
pip install sentencepiece bitsandbytes protobuf decord
Step 20: Create the Script
Run the following command to create the inference script:
cat > inference_pipeline.py <<'EOF'
from transformers import pipeline

# Create a text-generation pipeline with the UI-TARS-7B-DPO model.
generator = pipeline(
    "text-generation",
    model="bytedance-research/UI-TARS-7B-DPO",
    trust_remote_code=True,  # Use the custom code provided by the model repo
    device=0,                # Use GPU 0
)

prompt = "Hello, please describe the GUI and its controls."
response = generator(prompt, max_new_tokens=100)

print("Model Response:")
print(response)
EOF
Step 21: Run the Script and Generate the Model Response
Execute the following command to run the script and generate the model response:
CUDA_VISIBLE_DEVICES=0 python3 inference_pipeline.py
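The script prints the raw pipeline output. For the standard transformers text-generation pipeline, that output is a list of dicts whose `generated_text` field echoes the prompt followed by the model's continuation. This sketch, using a hard-coded sample rather than a real model call, shows how to keep only the newly generated text:

```python
# Shape of a text-generation pipeline result (sample data, not a real response):
prompt = "Hello, please describe the GUI and its controls."
response = [{"generated_text": prompt + " The screen shows a toolbar and a text field."}]

# generated_text echoes the prompt, so slice it off to keep the continuation.
continuation = response[0]["generated_text"][len(prompt):].strip()
print(continuation)  # The screen shows a toolbar and a text field.
```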
This is the step-by-step guide for interacting with the model on the terminal. Our next step is to interact with the model on a browser locally using the Gradio app.
Step 22: Install Gradio
Run the following command to install Gradio:
pip install gradio==3.48.0
Step 23: Create the Gradio Script
Run the following command to create the Gradio script:
cat > gradio_demo.py <<'EOF'
import gradio as gr
from transformers import pipeline

# Create a text-generation pipeline with the UI-TARS-7B-DPO model.
# trust_remote_code=True is required for custom model code.
generator = pipeline(
    "text-generation",
    model="bytedance-research/UI-TARS-7B-DPO",
    trust_remote_code=True,
    device=0,  # Use GPU 0
)

def generate_response(prompt):
    # Generate a response using the pipeline; adjust max_new_tokens as desired.
    outputs = generator(prompt, max_new_tokens=100)
    # The pipeline returns a list of dictionaries; extract the generated text.
    return outputs[0]['generated_text']

# Build a simple Gradio interface with a text input and text output.
iface = gr.Interface(
    fn=generate_response,
    inputs=gr.components.Textbox(label="Prompt", placeholder="Enter your prompt here..."),
    outputs=gr.components.Textbox(label="Generated Response"),
    title="UI-TARS-7B-DPO Demo",
    description="A Gradio demo for the UI-TARS-7B-DPO model by bytedance-research.",
)

if __name__ == "__main__":
    # Launch the interface on a specified port (e.g., 7860); set share=True to allow external access if needed.
    iface.launch(server_port=7860, share=False)
EOF
Step 24: Run the Gradio Script
Execute the following command to run the Gradio script:
CUDA_VISIBLE_DEVICES=0 python3 gradio_demo.py
Step 25: SSH Port Forwarding
To forward local port 7860 on your Windows machine to port 7860 on the VM, use the following command in Command Prompt or PowerShell:
ssh -L 7860:127.0.0.1:7860 -p 40109 -i C:\Users\Acer\.ssh\id_rsa root@38.29.145.28
After running this command, you can access the ByteDance UI-TARS 7B DPO demo in your local browser at http://127.0.0.1:7860.
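Once the tunnel is up, you can confirm from your local machine that the forwarded port is reachable. This small stdlib sketch (illustrative) attempts a TCP connection to the Gradio port:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the SSH tunnel running, the forwarded Gradio port should be reachable.
print(port_open("127.0.0.1", 7860))
```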
Conclusion
ByteDance UI-TARS 7B DPO offers a powerful solution for automating graphical user interface interactions with high efficiency and accuracy. Its advanced capabilities in perception, reasoning, and task execution make it a versatile tool for various applications across desktop, mobile, and web environments. By following the step-by-step installation guide, users can set up and deploy the model locally, enabling seamless interaction through both terminal and browser-based interfaces. As an open-source project under the Apache License 2.0, it encourages developers to contribute and enhance its functionality. With its structured approach and scalable deployment options, UI-TARS 7B DPO sets a new standard for GUI automation.