With the rising wave of interest in AI agents, building your own has never been easier. AI agents are systems that autonomously perform tasks for users by perceiving their environment and making decisions, simulating human behavior. They use technologies like natural language processing and machine learning to interact intelligently and adapt over time. With Eliza, a framework developed by ai16z, you can create one in minutes. Eliza simplifies the process by providing a platform for developing intelligent agents capable of performing a wide range of tasks, from holding conversations to automating workflows. By leveraging Eliza, even those with minimal coding experience can harness the power of AI to optimize an existing product, or build an entirely new one, in a few lines of code.
In this guide, we will walk you through a step-by-step process for building your own AI agent using the Eliza framework. We have curated this guide so that even non-coders can replicate the steps to build their agents. You’ll learn how to set up your environment, customize your agent’s personality, integrate it across social platforms like Twitter, and deploy it on a cloud server to make it publicly accessible for your users.
Prerequisites
- A Virtual Machine (such as the ones provided by NodeShift) with at least:
  - 4 vCPUs
  - 8 GB RAM
  - 50 GB SSD
- Ubuntu 22.04
Note: The prerequisites vary widely across use cases; a large-scale deployment would call for a higher-end configuration.
Step-by-step process to build an AI agent with Eliza
For this tutorial, we’ll use a CPU-powered Virtual Machine by NodeShift, which provides high-compute Virtual Machines at a very affordable cost on a scale that meets GDPR, SOC2, and ISO27001 requirements. It also offers an intuitive and user-friendly interface, making it easier for beginners to get started with Cloud deployments. However, feel free to use any cloud provider you choose and follow the same steps for the rest of the tutorial.
Step 1: Setting up a NodeShift Account
Visit app.nodeshift.com and create an account by filling in basic details, or continue signing up with your Google/GitHub account.
If you already have an account, log in and go straight to your dashboard.
Step 2: Create a Compute Node (CPU Virtual Machine)
After accessing your account, you should see a dashboard (see the image below). Now:
- Navigate to the menu on the left side.
- Click on the Compute Nodes option.
- Click on Start to start creating your very first compute node.
These Compute nodes are CPU-powered virtual machines by NodeShift. These nodes are highly customizable and let you control different environmental configurations, such as vCPUs, RAM, and storage, according to your needs.
Step 3: Select configuration for VM
- The first option you see is the Reliability dropdown. This option lets you choose the uptime guarantee level you seek for your VM (e.g., 99.9%).
- Next, select a geographical region from the Region dropdown where you want to launch your VM (e.g., United States).
- Most importantly, select the correct specifications for your VM according to your workload requirements by sliding the bars for each option.
Step 4: Choose VM Configuration and Image
- After selecting your required configuration options, you’ll see the VMs available in your region that match (or come close to) your configuration. In our case, we’ll choose a ‘4vCPUs/8GB/160GB SSD’ machine as the closest match to the “Prerequisites”.
- Next, you’ll need to choose an image for your Virtual Machine. For the scope of this tutorial, we’ll select Ubuntu, as we will deploy the agent on the Ubuntu server.
Step 5: Choose the Billing cycle and Authentication Method
- Two billing cycle options are available: Hourly, ideal for short-term usage, offering pay-as-you-go flexibility, and Monthly for long-term projects with a consistent usage rate and potentially lower cost.
- Next, you’ll need to select an authentication method. Two methods are available: Password and SSH Key. We recommend using SSH keys, as they are a more secure option. To create one, head over to our official documentation.
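If you’d like to generate a key pair from the terminal, here is a minimal sketch (the key filename and comment are our own choices; NodeShift’s documentation covers adding the public key to your account):

```shell
# Generate a modern ed25519 key pair (no passphrase here for brevity -
# use one in practice). The filename is an illustrative choice.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/nodeshift_key -N "" -C "nodeshift-vm"

# Print the public key so it can be pasted into the dashboard.
cat ~/.ssh/nodeshift_key.pub
```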
Step 6: Finalize Details and Create Deployment
Finally, you can also add a VPC (Virtual Private Cloud), which provides an isolated section to launch your cloud resources (Virtual machine, storage, etc.) in a secure, private environment. We’re keeping this option as the default for now, but feel free to create a VPC according to your needs.
Also, you can deploy multiple nodes at once using the Quantity option.
That’s it! You are now ready to deploy the node. Review the configuration summary; if it looks good, go ahead and click Create to deploy the node.
Step 7: Connect to active Compute Node using SSH
The node will be deployed within a few seconds to a minute of creation. Once its status turns to Running (shown in green), the compute node is ready to use!
Once your node shows this status, follow the steps below to connect to the running VM via SSH:
1. Open your terminal and run the SSH command below, replacing `root` with your username and `ip` with your VM’s IP address (copied from the dashboard):

ssh root@ip
2. In some cases, your terminal may ask for your consent before connecting. Enter ‘yes’.
3. A prompt will request a password. Type the SSH password, and you should be connected.
Output:
Step 8: Install and set up dependencies
First, we will prepare the environment by installing the dependencies required to build and run the agent. Follow the below instructions to install packages one by one:
1. Install the latest version of Node.js (version 23+) by following the instructions given on the website.

a) Start by installing `nvm` (Node Version Manager).

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
Output:
b) Download and install NodeJS (restart the terminal before running).
nvm install 23
Output:
c) Confirm the installations using the commands below.
node -v
npm -v
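Eliza requires Node 23+. As a quick sanity check beyond eyeballing the version strings, a small shell helper can compare the major version (a sketch — the helper name and the sample version string are ours; in practice pass `"$(node -v)"`):

```shell
# Extract the major version number from a Node.js version string like "v23.3.0".
node_major() {
  v=${1#v}        # strip the leading "v"
  echo "${v%%.*}" # keep only the major number
}

if [ "$(node_major "v23.3.0")" -ge 23 ]; then
  echo "Node.js is new enough"
else
  echo "Node.js is too old - run 'nvm install 23'" >&2
fi
```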
2. Install pnpm (version 9+)
npm install -g pnpm
Output:
3. Install Git for version control and for cloning the repository. Run the following commands one by one.
apt update
apt install git-all
Output:
4. Also, to use VS Code to work with the AI model on the Ubuntu VM, you need to connect your local VS Code editor to your Ubuntu server via SSH with the following steps:
a) Install the “Remote-SSH” Extension by Microsoft on VS Code.
b) Type “Remote-SSH: Connect to Host” in the Command Palette.
c) Enter the host details, such as username and SSH password, and you should be connected.
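Optionally, an entry in your local ~/.ssh/config lets both plain `ssh` and the Remote-SSH extension connect by a memorable name. The host alias, IP placeholder, and key path below are illustrative assumptions — substitute your own:

```shell
# Append a host alias for the VM to the local SSH config (illustrative values).
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host nodeshift-vm
    HostName <your-vm-ip>
    User root
    IdentityFile ~/.ssh/nodeshift_key
EOF
```

With this in place, `ssh nodeshift-vm` (or picking `nodeshift-vm` in Remote-SSH) connects without retyping the IP.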
Step 9: Install Eliza Framework
Eliza, developed by ai16z, is a framework for building AI agents. It saves time, cost, and effort: instead of reinventing the wheel, you can build exceptional agents by extending and customizing the capabilities Eliza already offers.
1. Clone the Eliza git repository.
git clone https://github.com/ai16z/eliza.git
Output:
2. Install the latest stable version.

Move inside the project directory with `cd eliza` and run the following command to check out the latest stable version of Eliza.
git checkout $(git describe --tags --abbrev=0)
Output:
Once you hit the above command, the branch will be switched to the latest version. Proceed by installing this version with the following command:
pnpm install
Output:
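As an aside, the `git describe --tags --abbrev=0` expression used in the checkout step simply resolves to the most recent tag reachable from HEAD. A throwaway-repo sketch (the repo and tag names here are ours, not Eliza’s) shows the behavior:

```shell
# Build a scratch repo with two tagged commits and ask git to describe HEAD.
demo=$(mktemp -d) && cd "$demo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"
git tag v0.1.0
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "second"
git tag v0.1.5

# The most recent reachable tag is printed - the ref the checkout command uses.
git describe --tags --abbrev=0   # prints v0.1.5
```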
3. Build the local environment.
pnpm build
Output:
Step 10: Configure environment variables
After completing the above installations, you can access the project directory and modules by opening the project folder in VS Code connected to the Ubuntu server via SSH (refer to Step 8).
Moving forward, we’ll add the environment variables in the project directory to allow Eliza to access AI models that will be used to feed our agent’s brain.
1. First, copy the `.env.example` file to a `.env` file.
cp .env.example .env
2. Next, add the API key(s) for the AI model(s) you want to use as your AI agent’s brain.
There are several models you can choose from; here are some of them:
- Claude (by Anthropic) – Human-like interaction with contextual awareness.
- Grok (by xAI) – Conversational AI with real-time access to X (formerly Twitter).
- LLaMA – For customizing and fine-tuning the agent for specific use cases.
- OpenAI – Generative and creative capabilities, but can be a little costly.
- Gaianet – A public node serving several AI models; no API key needed.
For example, if you want to add Anthropic as your AI feed, add its API key next to the corresponding name in the `.env` file.
We’ll use the `Qwen72B` model from Gaianet, a public directory of AI models that doesn’t require an API key. However, feel free to experiment with models of your choice.
Let’s copy the configuration details of the chosen model from the website and paste them into the corresponding section of the `.env` file.
3. Create an X (formerly Twitter) account for the agent
As you can see in the image below, we have created our agent’s account and named it “Just A Chill Agent”. Also, make sure to mark it as automated via your main account; otherwise, the agent may not be able to log in to its account automatically.
Next, add the agent’s X account details to the .env file corresponding to the “Twitter” section, as shown below.
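Put together, the relevant `.env` entries look roughly like this. The variable names follow eliza’s `.env.example` at the time of writing — confirm them against your own copy. The values are placeholders; never commit real credentials:

```shell
# Illustrative .env entries (dotenv files use shell-style KEY=value lines).
# Gaianet model configuration - copy the real values from the Gaianet site.
GAIANET_MODEL="qwen72b"
GAIANET_SERVER_URL="<node-url-from-the-gaianet-site>"

# The agent's X (Twitter) account credentials.
TWITTER_USERNAME="justachillagent"
TWITTER_PASSWORD="<your-account-password>"
TWITTER_EMAIL="<your-account-email>"
```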
Step 11: Build Agent Characterfile
The simplest way to customize an agent is to take the contents of the `defaultCharacter.ts` file and overwrite the attributes you want to customize.
Another option is to just code the whole agent’s character from scratch, which can be a bit daunting but is doable. However, for this tutorial, we’ll just tweak and overwrite the default Eliza character file to customize it for our use case.
1. Move inside the `agent/src` directory and create an `agentCharacter.ts` file.

Write the configuration code for your agent’s character; it may look similar to this:
import { Character, ModelProviderName, defaultCharacter, Clients } from "@ai16z/eliza";

export const agentCharacter: Character = {
    ...defaultCharacter,
    clients: [Clients.TWITTER],
    modelProvider: ModelProviderName.GAIANET,
    name: "justachillagent",
    system: `<PUT YOUR CHARACTER DESCRIPTION HERE>`,
};
This is what the `agentCharacter.ts` file looks like:
2. Import `agentCharacter.ts` into the `index.ts` file.

Once the above configuration is done, import the `agentCharacter.ts` file inside `index.ts` and replace all instances of `defaultCharacter` with your newly created agent characterfile.
import { agentCharacter } from "./agentCharacter.ts"
Step 12: Interact with the Agent
Finally, we are now ready to start the agent and begin interacting with it.
1. Run the following command inside the `eliza` directory in the terminal to start the agent (we use pnpm for consistency with the earlier install and build steps):

pnpm start
Output:
As you can see above, it has started posting tweets. To confirm a post went through, open the agent’s X account and check that the tweet appears.
For instance, this post is automatically done by @justachillagent.
2. Replying to the tweet.
Additionally, we can check whether the agent interacts live when we reply to one of its posts.
When we wrote a reply to one of its posts, here’s what happened.
The terminal output above shows that the agent detected an interaction with its post, saved the interactor’s details in its memory, and is now composing and posting a reply to that interaction.
As you can see in the image above, it successfully replies to our interactions within the post!
Conclusion
In this article, we covered a step-by-step approach to building a basic social AI agent (a Twitter agent). You can build far more capable agents by fine-tuning and customizing Eliza for your own use case. By walking through the steps to configure and tweak Eliza, you’ve unlocked the ability to create something uniquely yours, whether for fun, education, or specialized workflow tasks. But deploying an AI agent isn’t just about code; it’s about reliability, scalability, and keeping the agent up and running so it can interact with your audience or perform critical tasks at all times. With NodeShift, you can bring your AI agent to life and keep it running smoothly in production, letting you focus on refining its personality and features without worrying about backend and under-the-hood deployment tasks.