MilimoChat is a locally-run, privacy-focused chat application designed for users who want full control over their AI interactions. Built with Python and Streamlit, it offers a seamless chat experience, allowing users to engage with advanced language models while maintaining complete data privacy. With customizable personality presets, memory retention features, chat history analytics, and export options, MilimoChat goes beyond a standard chatbot—it becomes a personal AI assistant that learns, adapts, and improves over time.
Features
- Chat Interface: Simple and intuitive interface for seamless interaction with the chatbot.
- Customization Panel: Adjust chatbot personality, tone, and appearance to match user preferences.
- History Analytics: View chat history with insights, visualizations, and key metrics.
- Memory Dashboard: Manage short-term and long-term memory for context-aware responses.
- Export Service: Save chat history and settings in multiple formats (CSV, JSON, PDF).
Resource
GitHub
Link: https://github.com/Milimo-Quantum/milimochat
Prerequisites for Installing MilimoChat with Ollama Locally
Make sure you have the following:
- GPUs: 1x RTX A6000 (for smooth execution).
- Disk Space: 50 GB free.
- RAM: 48 GB (24 GB also works, but we use 48 GB for smooth execution).
- CPU: 48 cores (24 also works, but we use 48 for smooth execution).
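Before starting, it can help to confirm the machine actually meets these numbers. The snippet below is a minimal sketch using standard Linux tools (`nproc`, `/proc/meminfo`, `df`); the exact thresholds are the tutorial's recommendations, not hard requirements.

```shell
# Report CPU cores, total RAM and free disk on the root filesystem.
# Assumes a Linux VM (as used in this tutorial).
CORES=$(nproc)
RAM_GB=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') / 1024 / 1024 ))
DISK_GB=$(df -k / | awk 'NR==2 {print int($4/1024/1024)}')
echo "CPU cores: ${CORES}, RAM: ${RAM_GB} GB, free disk: ${DISK_GB} GB"
```

Compare the printed values against the prerequisites above before deploying.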
Step-by-Step Process to Install MilimoChat with Ollama Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side. Select the GPU Nodes option, create a GPU Node in the Dashboard, click the Create GPU Node button, and create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy MilimoChat on an NVIDIA Cuda Virtual Machine. This proprietary, closed-source parallel computing platform will allow you to install MilimoChat on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
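If you only want a quick summary rather than the full `nvidia-smi` table, the tool's `--query-gpu` option can print just the fields you care about. The sketch below guards against running it on a machine without NVIDIA drivers.

```shell
# Print just the GPU name, total VRAM and driver version in CSV form.
# On a machine without NVIDIA drivers this prints a notice instead of failing.
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_INFO=$(nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv,noheader)
else
  GPU_INFO="nvidia-smi not found (no NVIDIA driver on this machine)"
fi
echo "$GPU_INFO"
```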
Step 8: Install Ollama
After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
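Once the install script finishes, it is worth confirming that the `ollama` binary actually landed on your `PATH` before moving on. A small sanity check:

```shell
# Verify the Ollama binary is installed and resolvable on PATH.
if command -v ollama >/dev/null 2>&1; then
  OLLAMA_BIN="$(command -v ollama)"
else
  OLLAMA_BIN="not installed"
fi
echo "ollama binary: $OLLAMA_BIN"
```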
Step 9: Serve Ollama
Run the following command to start the Ollama server so that models can be served locally:
ollama serve
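By default, `ollama serve` listens on port 11434 and its API root responds with "Ollama is running". The probe below (a sketch, assuming the default port) lets you confirm the server is up from another terminal:

```shell
# Probe the default Ollama endpoint; report reachability without aborting
# the script if the server is not running on this machine.
if curl -fsS --max-time 2 http://localhost:11434/ >/dev/null 2>&1; then
  OLLAMA_STATUS="running"
else
  OLLAMA_STATUS="not reachable"
fi
echo "Ollama server: $OLLAMA_STATUS"
```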
Step 10: Connect with SSH
Now, open a new tab in the terminal and reconnect using SSH.
Step 11: Pull Two or Three Models
Run the following command for each model you want to pull from Ollama:
ollama pull <model name>
We will use the LLaVA, Llama 3.2, and nomic-embed-text models from Ollama.
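The three models used in this tutorial can be pulled in one pass with a small loop. The tags below (`llava`, `llama3.2`, `nomic-embed-text`) are the models' names in the Ollama library; the actual download is gated behind a flag so you can flip it on once you are on the GPU node.

```shell
# Pull the tutorial's three models. Set DRY_RUN=0 on the GPU node to
# actually download them; with DRY_RUN=1 the loop only lists the queue.
DRY_RUN=1
PULLED=""
for model in llava llama3.2 nomic-embed-text; do
  if [ "$DRY_RUN" = "0" ]; then
    ollama pull "$model"
  fi
  PULLED="$PULLED $model"
done
echo "Models queued:$PULLED"
```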
Step 12: Check Available Models
Run the following command to check that the downloaded models are available:
ollama list
Step 13: Check the Available Python Version and Install a Newer One
Run the following command to check the available Python version:
python3 --version
The system has Python 3.8.1 available by default. To install a higher version of Python, you'll need to use the deadsnakes PPA. Run the following commands to add the deadsnakes PPA:
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
Step 14: Install Python 3.11
Now, run the following command to install Python 3.11 or another desired version:
sudo apt install -y python3.11 python3.11-distutils python3.11-venv
Step 15: Update the Default Python3 Version
Now, run the following commands to link the new Python version as the default python3:
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo update-alternatives --config python3
Then, run the following command to verify that the new Python version is active:
python3 --version
Step 16: Install and Update Pip
Run the following commands to install and upgrade pip:
python3 -m ensurepip --upgrade
python3 -m pip install --upgrade pip
Then, run the following command to check the version of pip:
pip --version
Step 17: Clone the Repository
Run the following command to clone the Milimochat repository from GitHub:
git clone https://github.com/Milimo-Quantum/milimochat.git
cd milimochat
Step 18: Install the Required Dependencies
Run the following command to install the required dependencies:
pip install -r requirements.txt
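Installing into the system Python works, but a virtual environment keeps MilimoChat's dependencies isolated. This is an optional sketch, assuming the `python3.11-venv` package from Step 14 is installed:

```shell
# Optionally create an isolated environment for MilimoChat's dependencies
# instead of installing them into the system Python.
python3 -m venv .venv 2>/dev/null && VENV_OK=1 || VENV_OK=0
if [ "$VENV_OK" = "1" ]; then
  # Inside the cloned milimochat directory you would then run:
  #   . .venv/bin/activate
  #   pip install -r requirements.txt
  :
fi
echo "venv created: $VENV_OK"
```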
Step 19: Run the Application
Execute the following command to run the application:
streamlit run main.py
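Streamlit picks port 8501 by default, but you can pin the port and address explicitly with its real `--server.*` flags so the SSH tunnel in the next step is guaranteed to find the app. The snippet only builds and prints the command; run it on the VM inside the `milimochat` directory.

```shell
# Build an explicit launch command: fixed port, local-only bind, and
# headless mode (no browser auto-open on a remote VM).
STREAMLIT_CMD="streamlit run main.py --server.port 8501 --server.address 127.0.0.1 --server.headless true"
echo "$STREAMLIT_CMD"
# On the VM, execute the printed command inside the milimochat directory.
```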
Step 20: SSH port forwarding
To forward local port 8501 on your Windows machine to port 8501 on the VM, use the following command in Command Prompt or PowerShell:
ssh -L 8501:localhost:8501 -p 30676 -i C:\Users\Acer\.ssh\id_rsa root@86.122.133.229
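The values in the command above (SSH port, key path, IP) are specific to this tutorial's VM. The sketch below shows the generic shape of the tunnel; the `<your-node-ip>` host and the port/key values are placeholders you should replace with what your own GPU node's Connect dialog shows.

```shell
# Generic SSH local port forward: local 8501 -> VM's 8501.
# SSH_PORT, SSH_KEY and HOST are placeholders for your node's values.
LOCAL_PORT=8501
REMOTE_PORT=8501
SSH_PORT=30676
SSH_KEY="$HOME/.ssh/id_rsa"
HOST="root@<your-node-ip>"
TUNNEL_CMD="ssh -L ${LOCAL_PORT}:localhost:${REMOTE_PORT} -p ${SSH_PORT} -i ${SSH_KEY} ${HOST}"
echo "$TUNNEL_CMD"
```

The `-L local:host:remote` form tells SSH to listen on your local port and relay traffic through the tunnel to the VM's port.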
Step 21: Access the Application
After doing SSH port forwarding, you can access the application in your local browser at http://localhost:8501/.
Step 22: Select a Model and Play with MilimoChat
Select one of the models you pulled earlier and start chatting.
Step 23: Explore the Other Features
Try out the customization panel, memory dashboard, history analytics, and export options described above.
Conclusion
Setting up MilimoChat locally ensures a private, customizable, and efficient AI chat experience without relying on cloud-based solutions. By following this step-by-step guide, you can easily deploy the application on a GPU-powered virtual machine, install Ollama for model management, and configure the required Python environment. With features like chat memory, history analytics, and export options, MilimoChat provides a secure and adaptive chatbot solution tailored to user preferences. Whether for personal use, research, or development, this setup allows for a powerful, locally-controlled AI assistant with full customization and data security.