AnythingLLM is a powerful, open-source full-stack application designed to transform any document, resource, or content into a context-ready reference for language models. It supports both closed and open-source models as well as various vector databases, providing unmatched flexibility. Users can deploy it locally or in the cloud, create workspaces to organize documents with clean contexts, and enjoy features like multi-user support, permissions, and multi-modal capabilities. It also includes custom AI agents, embeddable chat widgets, and robust document management tools. Released under the MIT License, AnythingLLM invites open-source contributors to enhance and expand its capabilities.
Supported LLMs, Embedder Models, Speech models, and Vector Databases
Large Language Models (LLMs):
Embedder models:
Audio Transcription models:
TTS (text-to-speech) support:
STT (speech-to-text) support:
- Native Browser Built-in (default)
Vector Databases:
Resource
GitHub: https://github.com/Mintplex-Labs/anything-llm
Prerequisites
- A Virtual Machine (such as the ones provided by NodeShift) with at least:
- 16 vCPUs
- 64GB RAM
- 250GB SSD
- Ubuntu 22.04 VM
- Access to your server via SSH
Note: We chose this configuration for smooth execution. You can use a lower configuration, but performance will be slower.
Step-by-Step Process to Install Anything LLM
For the purpose of this tutorial, we will use a CPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
However, if you prefer a GPU-powered Virtual Machine, you can still follow this guide. AnythingLLM runs on GPU-based VMs as well, where performance is noticeably faster than on a CPU VM. The installation process remains largely the same, so you can achieve the same functionality on a GPU-powered machine. NodeShift’s infrastructure is versatile, letting you choose between GPU and CPU configurations based on your specific needs and budget.
Let’s dive into the setup and installation steps to get AnythingLLM running efficiently on your chosen virtual machine.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a Compute Node (CPU Virtual Machine)
NodeShift Compute Nodes offer flexible, scalable, on-demand resources. NodeShift Virtual Machines are easy to deploy and come as general-purpose, CPU-powered, or storage-optimized nodes.
- Navigate to the menu on the left side.
- Select the Compute Nodes option.
- Click the Create Compute Nodes button in the Dashboard to create your first deployment.
Step 3: Select Virtual Machine Uptime Guarantee
- Choose the Virtual Machine Uptime Guarantee option based on your needs. NodeShift offers an uptime SLA of 99.99% for high reliability.
- Click “Show reliability info” to review detailed SLA and reliability options.
Step 4: Select a Region
In the “Compute Nodes” tab, select a geographical region where you want to launch the Virtual Machine (e.g., the United States).
Step 5: Choose VM Configuration
- NodeShift provides two options for VM configuration:
- Manual Configuration: Adjust the CPU, RAM, and Storage to your specific requirements.
- Select the number of CPUs (1–96).
- Choose the amount of RAM (1 GB–768 GB).
- Specify the storage size (20 GB–4 TB).
- Predefined Configuration: Choose from predefined configurations optimized for General Purpose, CPU-Powered, or Storage-Optimized nodes.
- If you prefer custom specifications, manually configure the CPU, RAM, and Storage. Otherwise, select a predefined VM configuration suitable for your workload.
Step 6: Choose an Image
Next, choose an image for your Virtual Machine. We will deploy the VM on Ubuntu, but you can choose according to your preference; other options such as CentOS and Debian are also available for installing AnythingLLM.
Step 7: Choose the Billing Cycle & Authentication Method
- Select the billing cycle that best suits your needs. Two options are available: Hourly, ideal for short-term usage and pay-as-you-go flexibility, or Monthly, perfect for long-term projects with a consistent usage rate and potentially lower overall cost.
- Select the authentication method. There are two options: Password and SSH Key. SSH keys are a more secure option. To create them, refer to our official documentation.
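If you don’t already have a key pair, a typical way to generate one locally looks like this (the Ed25519 key type and the file name nodeshift_ed25519 are example choices, not requirements):

```shell
# Create the .ssh directory if it does not exist, with safe permissions
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
# Generate an Ed25519 key pair with no passphrase (-N "")
ssh-keygen -t ed25519 -C "nodeshift-vm" -f "$HOME/.ssh/nodeshift_ed25519" -N ""
# Print the public key; this is the part you paste into the NodeShift dashboard
cat "$HOME/.ssh/nodeshift_ed25519.pub"
```

The private key stays on your machine; only the .pub file is uploaded to the provider.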
Step 8: Additional Details & Complete Deployment
- The ‘Finalize Details’ section allows users to configure the final aspects of the Virtual Machine.
- After finalizing the details, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 9: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 10: Connect via SSH
- Open your terminal
- Run the SSH command. For example, if your username is root, the command would be:
ssh root@<your-vm-ip>
Replace <your-vm-ip> with the public IP address of your VM.
- If SSH keys are set up, the terminal will authenticate using them automatically.
- If prompted for a password, enter the password associated with the username on the VM.
- You should now be connected to your VM!
Step 11: Install Docker
Add Docker GPG Key and Repository
Run the following commands to add Docker’s official GPG key and set up the stable repository:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Update the Package Index
Update the local package database to include the Docker packages:
sudo apt update
Install Docker
Install Docker Engine and related components:
sudo apt install docker-ce docker-ce-cli containerd.io -y
Verify Docker Installation
Check Docker’s service status and version to ensure it’s installed successfully:
sudo systemctl status docker
docker --version
Start Docker (If Not Running)
If Docker is inactive, start the service:
sudo systemctl start docker
sudo systemctl enable docker
Step 12: Install Node.js
Add NodeSource Repository
Run the following command to set up Node.js 22.x repository:
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
Install Node.js
Install Node.js using apt:
sudo apt install -y nodejs
Verify Node.js and npm Installation
Check the installed versions of Node.js and npm:
node -v
npm -v
Step 13: Install Yarn
Add Yarn Repository
Add the Yarn GPG key and repository:
curl -fsSL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo gpg --dearmor -o /usr/share/keyrings/yarn-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/yarn-archive-keyring.gpg] https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
Update Package List
Refresh the package index:
sudo apt update
Install Yarn
Install Yarn using apt:
sudo apt install -y yarn
Verify Yarn Installation
Check the Yarn version:
yarn -v
Step 14: Pull the AnythingLLM Image
Run the following command to pull the AnythingLLM Docker image:
docker pull mintplexlabs/anythingllm
Step 15: Run the Container
Prepare the storage directory and .env file, then run the container:
mkdir -p $HOME/anythingllm && touch $HOME/anythingllm/.env # Pre-create .env so Docker mounts a file rather than creating a directory
docker rm $(docker ps -aq) # Remove any stopped containers (errors harmlessly if none exist)
docker run -d -p 3001:3001 \
--cap-add SYS_ADMIN \
-v $HOME/anythingllm:/app/server/storage \
-v $HOME/anythingllm/.env:/app/server/.env \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm
Verify the Container
Check that the container is running:
docker ps
If it is not listed, inspect its output with docker logs <container-id> to diagnose startup errors.
Step 16: Access the Application
If the container is running, try accessing the application at:
- Localhost:
http://localhost:3001
- Public IP:
http://<your-vm-ip>:3001 (replace <your-vm-ip> with your VM’s public IP address)
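As a quick sketch, you can poll the application from the VM until it starts responding. The URL below assumes the default port mapping from the docker run command above:

```shell
# Poll a URL until it responds or the retry budget runs out
wait_for_app() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -f: fail on HTTP errors; -sS: quiet, but still print real errors
    if curl -fsS -o /dev/null "$url"; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```

For example, wait_for_app http://localhost:3001 returns once AnythingLLM answers on port 3001, which is handy right after the container starts and is still initializing.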
Step 17: Choose LLM Preference
AnythingLLM works with many LLM providers; choose one according to your needs.
We use the OpenAI LLM for this demo.
Step 18: Create an OpenAI API Key
To use the OpenAI API, you need to create an API key. This key will allow you to securely access OpenAI’s services. Follow these steps to generate your API key:
Visit the OpenAI platform and log in to your account. If you do not have an account, you will need to sign up.
Once logged in, navigate to the top right corner of the page where your profile icon is located. Click on it and select API from the dropdown menu. Alternatively, you can directly access the API section by clicking on API in the main dashboard.
In the API section, look for an option that says Create new secret key or View API Key. Click on this option.
After you click create, a new API key will be generated. Copy it immediately, as it is shown only once.
Step 19: Add API Key and Select Model
- Generate API Key: Go to OpenAI’s website, sign in, and create an API key from your account settings.
- Add API Key: Copy the API key and paste it into the API Key option.
- Select Model: Choose your desired model (e.g., GPT-4) by specifying it in your application’s settings or configuration.
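Alternatively, these settings can live in the .env file mounted into the container earlier. The variable names below follow AnythingLLM’s .env conventions, but verify them against the .env.example file in the GitHub repository before relying on them; the values shown are placeholders, not real credentials:

```shell
# $HOME/anythingllm/.env -- example values only
LLM_PROVIDER='openai'               # use OpenAI as the chat provider
OPEN_AI_KEY='sk-your-key-here'      # paste your real API key
OPEN_MODEL_PREF='gpt-4o'            # model to use for chat
STORAGE_DIR='/app/server/storage'   # matches the -e flag in the docker run command
```

Settings entered in the web UI are persisted to this same file, so either approach works.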
Step 20: Configure Your User Settings
Choose User Type: select either:
- “Just me” if you are the only user.
- “My team” if multiple people will access this instance.
For “Just me”: proceed with the default settings; no additional configuration is required.
For “My team”: add team members by entering their email addresses and set up roles or permissions as needed.
Step 21: User Setup for the “Just me” Option
Select “Just me”: Choose the “Just me” option to indicate that only you will use the instance.
Set up a Password:
- If “Yes”: Enter and confirm a secure password to protect your instance.
- If “No”: Proceed without setting a password (not recommended for public-facing instances).
Step 22: Create Your First Workspace
Create your first workspace and give it a name.
Step 23: Play with AnythingLLM
Now ask AnythingLLM your questions and explore its features.
Conclusion
Anything LLM stands out as a comprehensive, open-source solution for seamlessly integrating various language models and vector databases into a unified, user-friendly application. By following the detailed installation steps, you can deploy it locally or in the cloud, creating a powerful tool for managing and interacting with your documents intelligently. Its robust features, including workspace management, multi-user support, and extensive customization, make it a versatile platform for both individuals and teams. Released under the MIT License, it is open for contributions, inviting developers to enhance and expand its capabilities further. Whether for personal use or collaborative projects, Anything LLM offers a reliable, efficient, and secure way to maximize the potential of modern language technologies.