Langtrace is an open-source observability tool licensed under AGPL-3.0 and freely available for users and the community. It captures, debugs, and analyzes traces and metrics from all your applications leveraging LLM APIs, vector databases, and LLM-based frameworks.
Langtrace enables seamless tracing with OpenTelemetry support, real-time monitoring, performance insights, detailed analytics, and effective debugging tools. It also offers a self-hosting option for full control over deployment.
Prerequisites
- A Virtual Machine (such as the ones provided by NodeShift) with at least:
- 8 vCPUs
- 16 GB RAM
- 150 GB SSD
- Ubuntu 22.04 VM
- Access to your server via SSH
Step-by-Step Process to Install LangTrace Locally
For the purpose of this tutorial, we will use a CPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
However, if you prefer to use a GPU-powered Virtual Machine, you can still follow this guide. LangTrace works on GPU-based VMs as well, and performance is generally better and faster than on a CPU VM. The installation process remains largely the same, allowing you to achieve similar functionality on a GPU-powered machine. NodeShift’s infrastructure is versatile, enabling you to choose between GPU or CPU configurations based on your specific needs and budget.
Let’s dive into the setup and installation steps to get LangTrace running efficiently on your chosen virtual machine.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
- Visit the NodeShift Platform and create an account.
- Once you have signed up, log into your account.
- Follow the account setup process and provide the necessary details and information.
Step 2: Create a Compute Node (CPU Virtual Machine)
NodeShift Compute Nodes offer flexible and scalable on-demand resources, such as NodeShift Virtual Machines, which are easy to deploy and come in general-purpose, CPU-powered, or storage-optimized configurations.
- Navigate to the menu on the left side.
- Select the Compute Nodes option.
- Click the Create Compute Nodes button in the Dashboard to create your first deployment.
Step 3: Select Virtual Machine Uptime Guarantee
- Choose the Virtual Machine Uptime Guarantee option based on your needs. NodeShift offers an uptime SLA of 99.99% for high reliability.
- Click “Show reliability info” to review detailed SLA and reliability options.
Step 4: Select a Region
In the “Compute Nodes” tab, select a geographical region where you want to launch the Virtual Machine (e.g., the United States).
Step 5: Choose VM Configuration
- NodeShift provides two options for VM configuration:
- Manual Configuration: Adjust the CPU, RAM, and Storage to your specific requirements.
- Select the number of CPUs (1–96).
- Choose the amount of RAM (1 GB–768 GB).
- Specify the storage size (20 GB–4 TB).
- Predefined Configuration: Choose from predefined configurations optimized for General Purpose, CPU-Powered, or Storage-Optimized nodes.
- If you prefer custom specifications, manually configure the CPU, RAM, and Storage. Otherwise, select a predefined VM configuration suitable for your workload.
Step 6: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy the VM on Ubuntu, but you can choose according to your preference. Other options, such as CentOS and Debian, are also available and can run LangTrace.
Step 7: Choose the Billing Cycle & Authentication Method
- Select the billing cycle that best suits your needs. Two options are available: Hourly, ideal for short-term usage and pay-as-you-go flexibility, or Monthly, perfect for long-term projects with a consistent usage rate and potentially lower overall cost.
- Select the authentication method. There are two options: Password and SSH Key. SSH keys are a more secure option. To create them, refer to our official documentation.
Step 8: Additional Details & Complete Deployment
- The ‘Finalize Details’ section allows users to configure the final aspects of the Virtual Machine.
- After finalizing the details, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 9: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 10: Connect via SSH
- Open your terminal
- Run the SSH command. For example, if your username is root, the command would be:
ssh root@<your-vm-ip>
- If SSH keys are set up, the terminal will authenticate using them automatically.
- If prompted for a password, enter the password associated with the username on the VM.
- You should now be connected to your VM!
Step 11: Clone the Repository
Run the following command to clone the repository:
git clone https://github.com/Scale3-Labs/langtrace.git
Then, run the following command to navigate to the main project directory:
cd langtrace
Step 12: Install Dependencies
Before we install Docker, we need to install some required dependencies.
Run the following command to update the Ubuntu package source list for the latest versions and security updates:
sudo apt update
Then, run the following command to install the dependency packages:
sudo apt install apt-transport-https ca-certificates curl software-properties-common
Step 13: Add the GPG key for the Docker Repository
We use curl to download and add the GPG key for the Docker repository. Run the following command:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
Then, run the following command to add the Docker APT repository to the system’s sources list:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Step 14: Install Docker
Run the following command to update the package source list again:
sudo apt update
Then, run the following command to install Docker:
sudo apt install docker-ce -y
Step 15: Verify the Docker Installation
Run the following command to verify the Docker installation:
sudo systemctl status docker
Step 16: Working with Docker Images and Check Version
Open a new terminal and use the SSH command to connect to the VM again.
Let’s run a simple test container named hello-world as a warm-up to see if Docker is running correctly:
docker run hello-world
Check the output below in the screenshot.
Then, run the following command to check the Docker version:
docker --version
Step 17: Starting the servers
First, run the following command to change the permissions of the Docker socket file (/var/run/docker.sock) so that it is readable and writable by all users on the system. Note that this is a quick workaround; adding your user to the docker group is the more secure long-term approach.
sudo chmod 666 /var/run/docker.sock
Then, run the following command to start the server with docker compose:
docker compose up -d
Next, run the following command to confirm that Docker is working correctly, that the daemon is running, and that your user has the appropriate permissions; it lists the running containers:
docker ps
Step 18: Access the VM with Port Forwarding and Tunneling
To forward the LangTrace port from your CPU VM to your local machine, use this SSH port forwarding command:
ssh -i "C:\Users\Acer\.ssh\id_rsa" -L 3000:localhost:3000 root@188.227.106.10
Explanation:
- -i "C:\Users\Acer\.ssh\id_rsa": Specifies the path to your private SSH key.
- -L 3000:localhost:3000: Forwards local port 3000 to port 3000 on the VM.
- root@188.227.106.10: Connects to your VM as the root user at the IP 188.227.106.10.
After running this command, you can access LangTrace in your local browser at localhost:3000.
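Optionally, you can confirm that the tunnel is working before opening a browser. The following is a minimal Python sketch that requests the forwarded port from your local machine; the exact HTTP status may vary depending on how the app redirects:
import urllib.request

# Request the forwarded port on the local machine; a successful response
# (typically 200 after any redirects) means the SSH tunnel is up
response = urllib.request.urlopen("http://localhost:3000")
print(response.status)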
Step 19: Access LangTrace UI in Browser
Open any local browser and navigate to http://localhost:3000 to access the LangTrace UI.
Once the interface loads, click on the Admin Login button.
Step 20: Logging In with Admin Credentials
You can find the admin credentials, such as the username and password, in the .env file on GitHub.
Repo Link: https://github.com/Scale3-Labs/langtrace/blob/main/.env
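For reference, the relevant entries in that file look roughly like the following. The variable names and default values shown here are assumptions based on the repository’s .env and may have changed, so always check the linked file for the current values:
ADMIN_EMAIL="admin@langtrace.ai"        # assumed default admin login email; verify in the linked .env
ADMIN_PASSWORD="langtraceadminpw"       # assumed default admin password; verify in the linked .env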
Next, enter the username and password, then click on the Sign in with Credentials button.
Step 21: Check the User Interface
Now check the user interface on localhost.
Step 22: Install the Langtrace Python SDK and OpenAI Package
Run the following command to install the Langtrace Python SDK along with the OpenAI package:
pip3 install langtrace-python-sdk openai
Then, run the following command to upgrade the openai package:
pip install --upgrade openai
Next, run the following command to pin openai to version 0.28. This is required because the example script in Step 25 uses the legacy openai.ChatCompletion API, which was removed in openai 1.0 and later:
pip install openai==0.28
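To confirm that the pinned version is the one Python will import, you can run a quick optional check. The following is a minimal sketch; the exact patch number may differ:
from importlib.metadata import version

# Should print a 0.28.x version if the pin above took effect
print(version("openai"))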
Step 23: Create OpenAI API Key
To use the OpenAI API, you need to create an API key. This key will allow you to securely access OpenAI’s services. Follow these steps to generate your API key:
Visit the OpenAI platform and log in to your account. If you do not have an account, you will need to sign up.
Once logged in, navigate to the top right corner of the page where your profile icon is located. Click on it and select API from the dropdown menu. Alternatively, you can directly access the API section by clicking on API in the main dashboard.
In the API section, look for an option that says Create new secret key or View API Key. Click on this option.
After clicking on create, a new API key will be generated for you. Make sure to copy this key immediately as it will only be shown once.
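Rather than hardcoding the key into your scripts, you can export it as an environment variable and read it in Python. The following is a minimal sketch, assuming you have set OPENAI_API_KEY in your shell (for example, with export OPENAI_API_KEY=...):
import os
import openai

# Read the API key from the environment instead of embedding it in the source
openai.api_key = os.environ["OPENAI_API_KEY"]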
Step 24: Update the System and Install Vim
What is Vim?
Vim is a terminal-based text editor. The last line of the editor window is used to enter commands and displays status information.
Note: If an error occurs stating that Vim is not a recognized internal or external command, install Vim using the steps below.
Step 1: Update the package list
Before installing any software, we will update the package list using the following command in your terminal:
sudo apt update
Step 2: Install Vim
To install Vim, enter the following command:
sudo apt install vim -y
This command will retrieve and install Vim and its necessary components.
Step 25: Add Code to the Test Script
Run the following command in the terminal to create and open the test.py script:
vi test.py
Entering editing mode in Vim:
Follow the steps below to enter editing mode in Vim.
Step 1: Open a File in Vim
Step 2: Switch to Insert Mode
When you open a file in Vim, you start in command mode, which is used to navigate, save, and manipulate text. To type or paste code, press i to switch to insert mode; press Esc at any time to return to command mode.
Add the following code to the file:
from langtrace_python_sdk import langtrace
from langtrace_python_sdk.utils.with_root_span import with_langtrace_root_span
import openai

# Initialize Langtrace with your Langtrace project API key (generated in the Langtrace UI)
# and the API host of your self-hosted instance
langtrace.init(
    api_key="your langtrace api key",
    api_host="http://localhost:3000/api/trace",
)

@with_langtrace_root_span()
def example():
    # Set the OpenAI API key
    openai.api_key = "your openai api key"
    # Create a chat completion request (legacy API, requires openai==0.28)
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant.",
            },
            {
                "role": "user",
                "content": "What is the capital of India?",
            },
        ],
    )
    # Print the response content
    print(response["choices"][0]["message"]["content"])

# Call the example function
example()
When you have finished adding the code, press Esc, then type :wq and press Enter to save the file and exit Vim.
Step 26: Run the Script
Finally, execute the following command to run the script:
python3 test.py
After running the script, you will get the following output. In the code, I am asking, “What is the capital of India?” The output will be, “The capital of India is Delhi,” which means both your script and server are working fine. Refer to the above code screenshot for clarification.
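If you would rather not pin openai to 0.28, a rough equivalent of the test script using the current OpenAI Python client (openai >= 1.0) is sketched below. This assumes the Langtrace SDK also instruments the v1 client, which its documentation indicates; install the latest openai package instead of 0.28 before trying it:
from langtrace_python_sdk import langtrace
from openai import OpenAI

# Initialize Langtrace against the self-hosted instance
langtrace.init(
    api_key="your langtrace api key",
    api_host="http://localhost:3000/api/trace",
)

# Create a client with your OpenAI API key
client = OpenAI(api_key="your openai api key")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of India?"},
    ],
)

# Print the assistant's reply
print(response.choices[0].message.content)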
Step 27: Check the Metrics, Traces and Additional Settings
After running the script, your LangTrace UI will be running on localhost:3000. You can open the UI in a browser to check usage, metrics, prompts, and other settings. Refer to the screenshots below for more details.
Conclusion
In this guide, we explained LangTrace AI, an open-source, OpenTelemetry-based, end-to-end observability tool for LLM applications that provides real-time tracing, evaluations, and metrics for popular LLMs, LLM frameworks, vector databases, and more, and we walked through a step-by-step tutorial on installing LangTrace AI locally on a NodeShift virtual machine. You learned how to install the required software and set up essential tools such as Docker and Vim.