Automated-AI-Web-Researcher-Ollama is an open-source Python tool, freely available under the MIT license, that turns an LLM running on Ollama into an automated researcher. Given a single query, it determines focus areas to investigate, performs web searches, scrapes content from relevant websites, and carries out the research entirely on its own.
It organizes all findings and source links into a comprehensive document, generates clear summaries, and supports interactive Q&A so you can explore the insights further. The tool refines its searches as it goes, prints rich console updates, and synthesizes information into clear answers. Its research conversation mode makes it easy to explore the collected knowledge.
Prerequisites
- A Virtual Machine (such as the ones provided by NodeShift) with at least:
- 24 vCPUs
- 64GB RAM
- 250GB SSD
- Ubuntu 22.04 VM
- Access to your server via SSH
“We chose this configuration for smooth execution. You can also use a lower configuration for this tool, but performance will be slower.”
Step-by-Step process to Install Automated-AI-Web-Researcher-Ollama Locally
For the purpose of this tutorial, we will use a CPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
However, if you prefer a GPU-powered Virtual Machine, you can still follow this guide. Automated-AI-Web-Researcher-Ollama works on GPU-based VMs as well, and inference is noticeably faster there than on a CPU-only VM. The installation process remains largely the same, so you can achieve the same functionality on a GPU-powered machine. NodeShift’s infrastructure is versatile, enabling you to choose between GPU or CPU configurations based on your specific needs and budget.
Let’s dive into the setup and installation steps to get Automated-AI-Web-Researcher-Ollama running efficiently on your chosen virtual machine.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a Compute Node (CPU Virtual Machine)
NodeShift Compute Nodes offer flexible, scalable, on-demand resources; NodeShift Virtual Machines are easily deployed and come in general-purpose, CPU-powered, or storage-optimized variants.
- Navigate to the menu on the left side.
- Select the Compute Nodes option.
- Click the Create Compute Nodes button in the Dashboard to create your first deployment.
Step 3: Select Virtual Machine Uptime Guarantee
- Choose the Virtual Machine Uptime Guarantee option based on your needs. NodeShift offers an uptime SLA of 99.99% for high reliability.
- Click “Show reliability info” to review the detailed SLA and reliability options.
Step 4: Select a Region
In the “Compute Nodes” tab, select a geographical region where you want to launch the Virtual Machine (e.g., the United States).
Step 5: Choose VM Configuration
- NodeShift provides two options for VM configuration:
- Manual Configuration: Adjust the CPU, RAM, and Storage to your specific requirements.
- Select the number of CPUs (1–96).
- Choose the amount of RAM (1 GB–768 GB).
- Specify the storage size (20 GB–4 TB).
- Predefined Configuration: Choose from predefined configurations optimized for General Purpose, CPU-Powered, or Storage-Optimized nodes.
- If you prefer custom specifications, manually configure the CPU, RAM, and Storage. Otherwise, select a predefined VM configuration suitable for your workload.
Step 6: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy the VM on Ubuntu, but you can choose according to your preference; other options such as CentOS and Debian are also available for installing Automated-AI-Web-Researcher-Ollama.
Step 7: Choose the Billing Cycle & Authentication Method
- Select the billing cycle that best suits your needs. Two options are available: Hourly, ideal for short-term usage and pay-as-you-go flexibility, or Monthly, perfect for long-term projects with a consistent usage rate and potentially lower overall cost.
- Select the authentication method. There are two options: Password and SSH Key. SSH keys are a more secure option. To create them, refer to our official documentation.
Step 8: Additional Details & Complete Deployment
- The ‘Finalize Details’ section allows users to configure the final aspects of the Virtual Machine.
- After finalizing the details, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 9: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 10: Connect via SSH
- Open your terminal
- Run the SSH command. For example, if your username is root, the command would be:
ssh root@<your-vm-ip>
- If SSH keys are set up, the terminal will authenticate using them automatically (see the example below for pointing ssh at a specific key file).
- If prompted for a password, enter the password associated with the username on the VM.
- You should now be connected to your VM!
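If your SSH key is not in the default location, you can point ssh at it explicitly. The key path below is just an example; use the path where you actually saved your key:
ssh -i ~/.ssh/nodeshift_key root@<your-vm-ip>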
Step 11: Clone the Repository
Run the following command in the terminal to clone the repository:
git clone https://github.com/TheBlewish/Automated-AI-Web-Researcher-Ollama
Then, run the following command in the terminal to navigate to the main project directory:
cd Automated-AI-Web-Researcher-Ollama
Step 12: Install Python
Run the following commands to install Python:
sudo apt update
sudo apt install python3 python3-venv python3-pip
This installs Python 3, the venv module, and pip.
Step 13: Create a Virtual Environment
First, run the following command to check the installed Python version:
python3 --version
Ensure that Python 3.8 or higher is installed; it is required to run Automated-AI-Web-Researcher-Ollama.
Once Python is installed, run the following command to create a virtual environment using Python 3.
python3 -m venv venv
Then, run the following command to activate the virtual environment:
source venv/bin/activate
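To confirm the virtual environment is active, check which interpreter is in use; the path should end in venv/bin/python3:
which python3
python3 --version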
Step 14: Install Requirements
Run the following command to install the dependencies from the requirements.txt file:
pip install -r requirements.txt
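Optionally, verify that the installed packages have no conflicting dependency requirements:
pip check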
Step 15: Install Ollama
After completing the steps above, it’s time to install Ollama.
Note that the following command installs only the Ollama Python client library, not the Ollama server itself:
pip install ollama
To install the Ollama server, you can use Snap:
snap install ollama
Then, run the following command to check the Ollama version:
ollama --version
If the Snap package does not work, install Ollama directly from the official Ollama website.
Website Link: https://ollama.com/download/linux
Run the following command to install Ollama using the official install script:
curl -fsSL https://ollama.com/install.sh | sh
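The install script normally sets Ollama up as a background service. If the server is not running, start it in a separate terminal with ollama serve, then confirm it responds on its default port (11434):
curl http://localhost:11434/api/tags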
Then, run the following command to check the available models in Ollama:
ollama list
Step 16: Pull Some Models from Ollama
We will pull three models from Ollama. Note that ollama run downloads a model if it is not already present and then drops you into an interactive session; type /bye to exit and return to the shell. (To download a model without starting a session, use ollama pull instead.)
First, run the following command to pull the phi3:14b model:
ollama run phi3:14b
Then, run the following command to pull the llama3.2:3b model:
ollama run llama3.2:3b
Next, run the following command to pull the qwen2.5 model:
ollama run qwen2.5
Then, run the following command to check if all the models are available:
ollama list
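As an optional sanity check, you can also list the pulled models from Python over Ollama’s REST API. This is a minimal sketch, assuming the requests package is available in your virtual environment (pip install requests if it is not):

import requests

# Ask the local Ollama server which models it has available.
resp = requests.get("http://localhost:11434/api/tags")
resp.raise_for_status()
for model in resp.json().get("models", []):
    print(model["name"])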
Step 17: Create a Model File
Run the following command to create a model file:
touch modelfile
The touch modelfile command creates an empty file (or updates the timestamp of an existing file).
Then, run the following command to open the modelfile in Vim:
vi modelfile
Vim starts in command mode, where keystrokes are interpreted as commands rather than text. Press i to enter insert mode so you can type, and press Esc to return to command mode when you are done; to save and quit, type :wq in command mode and press Enter.
Once the file is open, add the following content to the modelfile:
FROM your-model-name
PARAMETER num_ctx 38000
Replace “your-model-name” with your chosen model (e.g., phi3:3.8b-mini-128k-instruct, llama3.2, or qwen2.5).
We will try this with two models: llama3.2 and phi3:14b.
Then, run the following commands to create the models (adjust the FROM line in the modelfile for each base model before running the corresponding command):
ollama create llama3.2 -f modelfile
ollama create research-phi3 -f modelfile
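To verify that a created model picked up the larger context window, you can print its modelfile back, shown here for research-phi3:
ollama show research-phi3 --modelfile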
Again run the following command to check if all the models are available:
ollama list
Step 18: Edit LLM Config File
Go to the llm_config.py file, which should have an Ollama section that looks like this:
LLM_CONFIG_OLLAMA = {
    "llm_type": "ollama",
    "base_url": "http://localhost:11434",  # default Ollama server URL
    "model_name": "your_model_name",  # Replace with your Ollama model name
    "temperature": 0.7,
    "top_p": 0.9,
    "n_ctx": 55000,
    "context_length": 55000,
    "stop": ["User:", "\n\n"]
}
Run the following command to open the llm_config.py file:
vim llm_config.py
As before, press i to enter insert mode, make your edits, press Esc, then type :wq and press Enter to save and quit.
In the LLM settings for the Ollama option, set model_name to the model you want to use (e.g., llama3.2 or research-phi3).
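For example, with the research-phi3 model created in Step 17, the edited section might look like this (the model name is our choice; the other values follow the repository defaults shown above):

LLM_CONFIG_OLLAMA = {
    "llm_type": "ollama",
    "base_url": "http://localhost:11434",  # default Ollama server URL
    "model_name": "research-phi3",  # the model created in Step 17
    "temperature": 0.7,
    "top_p": 0.9,
    "n_ctx": 55000,
    "context_length": 55000,
    "stop": ["User:", "\n\n"]
}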
GitHub Link: https://github.com/TheBlewish/Automated-AI-Web-Researcher-Ollama/blob/main/llm_config.py
Step 19: Edit LLM Wrapper File
Run the following command to open the llm_wrapper.py file:
vim llm_wrapper.py
Press i to enter insert mode, check the model settings, and set the model name you want to use (e.g., llama3.2 or research-phi3).
GitHub Link: https://github.com/TheBlewish/Automated-AI-Web-Researcher-Ollama/blob/main/llm_wrapper.py
Save and close the file (press Esc, type :wq, then press Enter).
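Before launching the tool, you can quickly confirm that the configured model responds through Ollama’s generate endpoint. This is a minimal sketch, assuming the research-phi3 model from Step 17 and the requests package:

import requests

# Send a one-off, non-streaming prompt to the local Ollama server.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "research-phi3",  # the model created in Step 17
        "prompt": "Reply with the single word: ready",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])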
Step 20: Run the Automated-AI-Web-Researcher-Ollama Tool
Now, execute the following command to run the Automated-AI-Web-Researcher-Ollama tool:
python3 Web-LLM.py
Now enter your message or question for the assistant, and press CTRL+D to submit it. That’s it! Your Advanced Research Assistant is now set up. Ask your questions and enjoy!
Conclusion
In this guide, we covered Automated-AI-Web-Researcher-Ollama, an open-source Python tool that turns an LLM running on Ollama into an automated researcher: given a single query, it determines focus areas to investigate, performs web searches, scrapes content from relevant websites, and carries out the research on its own. We then walked through a step-by-step tutorial for installing the tool locally on a NodeShift virtual machine, including how to install the required software and set up essential tools such as Python, Ollama, and Vim.