Cache-Augmented Generation (CAG) introduces a novel approach to leveraging large context windows in modern language models by preloading all necessary information into the model’s memory, eliminating the need for real-time retrieval. This method significantly reduces latency, ensures more reliable responses by avoiding retrieval errors, and simplifies system design compared to retrieval-based methods. By utilizing a preloaded cache during inference, CAG enables streamlined and efficient generation while maintaining relevance to the preloaded context. However, it is best suited for tasks with manageable knowledge sizes, as it relies on fitting the required information within the model’s context window.
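Conceptually, CAG trades per-query retrieval for a one-time preloading step: the knowledge is encoded once and every question is answered against that same cached state. The toy sketch below illustrates the contrast only; the repository's actual implementation precomputes a transformer KV cache rather than concatenating strings.

```python
# Toy illustration of the CAG idea: preload all knowledge once and reuse it,
# instead of searching a store on every query. This is NOT the repository's
# implementation -- the real kvcache.py precomputes a transformer KV cache.

KNOWLEDGE = {
    "capital": "Paris is the capital of France.",
    "river": "The Seine flows through Paris.",
}

def retrieval_style(question: str) -> str:
    # RAG pattern: search the knowledge store on every call (adds latency,
    # and a wrong retrieval produces a wrong context).
    words = question.lower().split()
    hits = [doc for doc in KNOWLEDGE.values()
            if any(w in doc.lower() for w in words)]
    return " ".join(hits)

# CAG pattern: pay the encoding cost once, up front...
PRELOADED_CONTEXT = " ".join(KNOWLEDGE.values())

def cache_style(question: str) -> str:
    # ...then answer every question against the same preloaded context.
    return PRELOADED_CONTEXT

print(cache_style("What is the capital of France?"))
```

The trade-off stated above follows directly: `cache_style` does no per-query work, but only scales as far as the preloaded context fits in the model's window.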
Resource
GitHub
Link: https://github.com/hhhuang/CAG
Prerequisites for Setting Up CAG (Cache-Augmented Generation)
Make sure you have the following:
- GPUs: 1x RTX A6000 (for smooth execution).
- Disk Space: 50 GB free.
- RAM: 64 GB (24 GB also works, but we use 64 GB for smooth execution).
- CPU: 64 cores (24 cores also work, but we use 64 for smooth execution).
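You can sanity-check a machine against these numbers with a short standard-library script (the thresholds below simply mirror the list above; checking RAM would need a third-party package such as psutil, so it is omitted here):

```python
import os
import shutil

# Minimum suggested resources from the prerequisites list above.
MIN_CPU_CORES = 24      # 64 recommended for smooth execution
MIN_FREE_DISK_GB = 50

def check_prerequisites(path: str = "/") -> dict:
    """Return the machine's core count and free disk space (GB), with pass/fail flags."""
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    cores = os.cpu_count() or 0
    return {
        "cpu_cores": cores,
        "free_disk_gb": round(free_gb, 1),
        "cpu_ok": cores >= MIN_CPU_CORES,
        "disk_ok": free_gb >= MIN_FREE_DISK_GB,
    }

print(check_prerequisites())
```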
Step-by-Step Process to Set Up CAG (Cache-Augmented Generation)
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side. Select the GPU Nodes option, create a GPU Node in the Dashboard, click the Create GPU Node button, and create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy CAG on an NVIDIA CUDA Virtual Machine. This proprietary parallel computing platform will allow you to install CAG on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now, open your terminal and connect using the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
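If you would rather read the GPU name and memory programmatically (for example, before kicking off a long run), nvidia-smi's `--query-gpu` flags can be parsed from Python. The helper below is a small convenience wrapper, not part of the CAG repository; it returns None on machines without an NVIDIA driver:

```python
import shutil
import subprocess

def query_gpus():
    """Return a list of 'name, total memory' strings, or None if nvidia-smi is absent."""
    if shutil.which("nvidia-smi") is None:
        return None  # no NVIDIA driver/tooling on this machine
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

print(query_gpus())
```

On the 1x RTX A6000 node used in this tutorial, you would expect a single entry listing the A6000 and its 48 GB of VRAM.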
Step 8: Clone the Repository
Run the following command to clone the CAG (Cache-Augmented Generation) repository:
git clone https://github.com/hhhuang/CAG.git
cd CAG
Step 9: Install the Project Dependencies
Run the following command to install the project dependencies:
pip install -r ./requirements.txt
Step 10: Run the Download Script
Run the following command to execute the repository's download script:
sh ./downloads.sh
Step 11: Install Vim
So, what is Vim?
Vim is a text editor. Its last line is used to give commands to Vim and display status information.
Note: If an error occurs stating that vim is not a recognized internal or external command, install Vim using the steps below.
Step 1: Update the package list
Before installing any software, we will update the package list using the following command in your terminal:
sudo apt update
Step 2: Install Vim
To install Vim, enter the following command:
sudo apt install -y vim
This command will retrieve and install Vim and its necessary components.
Step 12: Add Hugging Face Token
Now, run the following command to open the .env file:
vim .env
After opening the .env file:
- Press i to enter Insert Mode. You'll see -- INSERT -- at the bottom of the screen, indicating that you can now type.
- Add the Hugging Face token.
- After adding the token, press Esc to leave Insert Mode (you'll return to Normal Mode).
- Type :wq and press Enter to save the file and exit Vim.
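If you prefer not to edit interactively, the same result can be scripted. The sketch below writes and reads back a simple .env file; the variable name HF_TOKEN is an assumption for illustration, so check the repository's README for the exact key it expects:

```python
# Non-interactive alternative to editing .env in vim. The key name HF_TOKEN is
# an assumption -- verify the exact key name against the repository's README.
from pathlib import Path

def write_env(token: str, path: str = ".env") -> None:
    """Write the token as a KEY=value line."""
    Path(path).write_text(f'HF_TOKEN="{token}"\n', encoding="utf-8")

def read_env(path: str = ".env") -> dict:
    """Parse simple KEY=value lines, ignoring comments and blank lines."""
    pairs = {}
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            pairs[key.strip()] = value.strip().strip('"')
    return pairs

write_env("hf_xxx_your_token_here")
print(read_env())
```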
How to Generate a Hugging Face Token
- Create an Account: Go to the Hugging Face website and sign up for an account if you don’t already have one.
- Access Settings: After logging in, click on your profile photo in the top right corner and select “Settings.”
- Navigate to Access Tokens: In the settings menu, find and click on the “Access Tokens” tab.
- Generate a New Token: Click the “New token” button, provide a name for your token, and choose a role (either read or write).
- Generate and Copy Token: Click the “Generate a token” button. Your new token will appear; click “Show” to view it and copy it for use in your applications.
- Secure Your Token: Ensure you keep your token secure and do not expose it in public code repositories.
Step 13: Login Using Your Hugging Face API Token
Use the huggingface_hub CLI to log in directly from the terminal.
Run the following command to log in with huggingface-cli:
huggingface-cli login
Then paste your token and press Enter. The token will not be visible as you type, so make sure to press Enter once you have pasted it.
After entering the token, you will see the following output:
Login Successful.
The current active token is (your_token_name).
Check the screenshot below for reference.
Step 14: Run CAG (Cache-Augmented Generation)
Run the following command to start CAG:
python3 ./kvcache.py --kvcache file --dataset "squad-train" --similarity bertscore \
--maxKnowledge 5 --maxParagraph 100 --maxQuestion 1000 \
--modelname "meta-llama/Llama-3.1-8B-Instruct" --randomSeed 0 \
--output "./result_kvcache.txt"
Usage
rag.py is for the RAG experiment.
kvcache.py is for the CAG experiment.
Parameter Usage — kvcache.py
- --kvcache: “file”
- --dataset: “hotpotqa-train” or “squad-train”
- --similarity: “bertscore”
- --modelname: “meta-llama/Llama-3.1-8B-Instruct”
- --maxKnowledge: int, how many documents from the dataset to use (explanation in Note)
- --maxParagraph: 100
- --maxQuestion: int, maximum number of questions (explanation in Note)
- --randomSeed: int, a random seed number
- --output: str, output file path
- --usePrompt: add this flag if not using the CAG knowledge-cache acceleration
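For reference, the documented flags map onto a command line roughly as in the argparse sketch below. This is an illustrative re-creation of the interface described above, not the repository's actual argument parser, whose defaults and types may differ:

```python
import argparse

# Hypothetical re-creation of the documented kvcache.py flags, for illustration
# only; the repository's own parser may define them differently.
parser = argparse.ArgumentParser(description="CAG experiment flags (sketch)")
parser.add_argument("--kvcache", choices=["file"], required=True)
parser.add_argument("--dataset", choices=["hotpotqa-train", "squad-train"], required=True)
parser.add_argument("--similarity", choices=["bertscore"], required=True)
parser.add_argument("--modelname", default="meta-llama/Llama-3.1-8B-Instruct")
parser.add_argument("--maxKnowledge", type=int)
parser.add_argument("--maxParagraph", type=int, default=100)
parser.add_argument("--maxQuestion", type=int)
parser.add_argument("--randomSeed", type=int, default=0)
parser.add_argument("--output", required=True)
parser.add_argument("--usePrompt", action="store_true",
                    help="disable the knowledge-cache acceleration")

# Parse the same arguments used in the example run above.
args = parser.parse_args([
    "--kvcache", "file", "--dataset", "squad-train", "--similarity", "bertscore",
    "--maxKnowledge", "5", "--maxQuestion", "1000",
    "--output", "./result_kvcache.txt",
])
print(args.dataset, args.maxKnowledge, args.usePrompt)
```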
Example — kvcache.py
python3 ./kvcache.py --kvcache file --dataset "squad-train" --similarity bertscore \
--maxKnowledge 5 --maxParagraph 100 --maxQuestion 1000 \
--modelname "meta-llama/Llama-3.1-8B-Instruct" --randomSeed 0 \
--output "./result_kvcache.txt"
Parameter Usage — rag.py
- --index: “openai” or “bm25”
- --dataset: “hotpotqa-train” or “squad-train”
- --similarity: “bertscore”
- --maxKnowledge: int, how many documents from the dataset to use (explanation in Note)
- --maxParagraph: 100
- --maxQuestion: int, maximum number of questions (explanation in Note)
- --topk: int, the top-k for similarity retrieval
- --modelname: “meta-llama/Llama-3.1-8B-Instruct”
- --randomSeed: int, a random seed number
- --output: str, output file path
Example — rag.py
python3 ./rag.py --index "bm25" --dataset "hotpotqa-train" --similarity bertscore \
--maxKnowledge 80 --maxParagraph 100 --maxQuestion 80 --topk 3 \
--modelname "meta-llama/Llama-3.1-8B-Instruct" --randomSeed 0 \
--output "./rag_results.txt"
Conclusion
Cache-Augmented Generation (CAG) offers a practical and efficient alternative to traditional retrieval-based approaches by eliminating real-time retrieval and leveraging preloaded knowledge within the model’s context. By following the steps outlined in this guide, you can set up and implement CAG seamlessly on a GPU-powered Virtual Machine. This approach not only simplifies system design but also ensures faster and more reliable performance, making it an excellent choice for applications where efficiency and reliability are paramount. With its streamlined workflow and reduced latency, CAG is a forward-thinking solution for maximizing the potential of modern large language models.