Snowflake’s Arctic-Embed-L-v2.0 sets a new standard for multilingual text retrieval, delivering exceptional performance in both English and non-English languages while maintaining high efficiency. With 303 million non-embedding parameters, it ensures rapid inference, scales effortlessly, and supports long contexts of up to 8192 tokens.
Utilizing advanced techniques like Matryoshka Representation Learning and quantization-aware embedding training, it achieves top-tier retrieval quality with compact embeddings as small as 128 bytes per vector. Released under the Apache 2.0 license, this model is designed for seamless integration and enterprise-grade search applications, excelling across benchmarks like MTEB Retrieval, CLEF, and MIRACL.
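The 128-bytes-per-vector figure comes from combining Matryoshka truncation with scalar int8 quantization: keeping only the first 128 dimensions of an embedding and storing each as a single signed byte yields exactly 128 bytes. A minimal sketch of that idea, using a random NumPy vector in place of a real Arctic embedding:

```python
import numpy as np

# A random unit vector stands in for a real 1024-dim Arctic embedding.
rng = np.random.default_rng(0)
embedding = rng.normal(size=1024).astype(np.float32)
embedding /= np.linalg.norm(embedding)

# Matryoshka: keep the first 128 dimensions and re-normalize.
truncated = embedding[:128]
truncated = truncated / np.linalg.norm(truncated)

# Scalar int8 quantization: map values in [-1, 1] to integers in [-127, 127].
quantized = np.clip(np.round(truncated * 127), -127, 127).astype(np.int8)

print(quantized.nbytes)  # → 128
```

Real deployments would calibrate the quantization range on actual embeddings, but the storage math is the same: 128 dims × 1 byte = 128 bytes per vector.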
Quality Benchmarks
| Model Name | # params | # non-emb params | # dimensions | BEIR (15) | MIRACL (4) | CLEF (Focused) | CLEF (Full) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| snowflake-arctic-l-v2.0 | 568M | 303M | 1024 | 55.6 | 55.8 | 52.9 | 54.3 |
| snowflake-arctic-m | 109M | 86M | 768 | 54.9 | 24.9 | 34.4 | 29.1 |
| snowflake-arctic-l | 335M | 303M | 1024 | 56.0 | 34.8 | 38.2 | 33.7 |
| me5 base | 560M | 303M | 1024 | 51.4 | 54.0 | 43.0 | 34.6 |
| bge-m3 (BAAI) | 568M | 303M | 1024 | 48.8 | 56.8 | 40.8 | 41.3 |
| gte (Alibaba) | 305M | 113M | 768 | 51.1 | 52.3 | 47.7 | 53.1 |
Model Resource
Hugging Face
Link: https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0
Prerequisites for Installing Snowflake Arctic Embed v2 Locally
Make sure you have the following:
- GPU: 1x RTX A6000 (for smooth execution).
- Disk Space: 40 GB free.
- RAM: 48 GB (24 GB also works, but we use 48 GB for smooth execution).
- CPU: 48 cores (24 cores also work, but we use 48 for smooth execution).
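You can verify these prerequisites from a terminal (or from Jupyter cells prefixed with `!`) using standard Linux utilities:

```shell
# Check free disk space, available RAM, and CPU core count
# against the prerequisites listed above.
df -h /    # disk: want ~40 GB free
free -g    # RAM: want 24-48 GB
nproc      # CPU cores: want 24-48
```

Once the NVIDIA driver is installed, `nvidia-smi` will additionally show the GPU model and VRAM.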
Step-by-Step Process to Install Snowflake Arctic Embed v2 Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option in the Dashboard, and click the Create GPU Node button to create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Snowflake Arctic Embed v2 on a Jupyter Virtual Machine. This open-source platform will allow you to install and run the Snowflake Arctic Embed v2 Model on your GPU node. By running this model on a Jupyter Notebook, we avoid using the terminal, simplifying the process and reducing the setup time. This allows you to configure the model in just a few steps and minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to Jupyter Notebook
Once your GPU VM deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ Button in the top right corner.
After clicking the ‘Connect’ button, you can view the Jupyter Notebook.
Now open a Python 3 (ipykernel) Notebook.
Next, if you want to check the GPU details, run the following command in a Jupyter Notebook cell:
!nvidia-smi
Step 8: Install Required Libraries
Run the following command to install the required libraries:
!pip install torch transformers sentence-transformers
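To confirm the installation succeeded, you can print the installed versions in the next cell (this uses the standard-library `importlib.metadata`, so it works regardless of which versions pip resolved):

```python
from importlib.metadata import version, PackageNotFoundError

# Print the installed version of each required package.
for pkg in ("torch", "transformers", "sentence-transformers"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```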
Step 9: Check CUDA Availability
Run the following command to check the CUDA availability:
import torch
print(torch.cuda.is_available())
If it prints True, your setup is ready for GPU acceleration.
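If it prints False, you can still run the model on the CPU. A common pattern is to select the device dynamically so the same notebook works either way; the resulting `device` string can be passed to `model.encode(..., device=device)` in the later steps:

```python
import torch

# Fall back to CPU when no GPU is visible to PyTorch.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using device:", device)
```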
Step 10: Import Required Modules and Load the Model
Run the following command to import required modules and load the Snowflake Arctic model directly from Hugging Face:
from sentence_transformers import SentenceTransformer
model_name = 'Snowflake/snowflake-arctic-embed-l-v2.0'
model = SentenceTransformer(model_name)
Step 11: Prepare Your Data
Run the following command to define the queries and documents for embedding:
queries = ['What is Snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']
Step 12: Generate Embeddings
Run the following command to use the model to generate embeddings for the queries and documents:
query_embeddings = model.encode(queries, prompt_name='query', device='cuda')  # queries use the model's query prompt
document_embeddings = model.encode(documents, device='cuda')  # leverage the GPU for computation
Step 13: Compute Similarity Scores
Run the following code to compare embeddings to calculate similarity scores:
from sklearn.metrics.pairwise import cosine_similarity
scores = cosine_similarity(query_embeddings, document_embeddings)
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    print("Query:", query)
    for document, score in doc_score_pairs:
        print(score, document)
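The ranking logic above generalizes to larger corpora. As a self-contained sketch with toy vectors standing in for real embeddings: once vectors are L2-normalized, cosine similarity reduces to a dot product, and top-k retrieval is a matrix-vector product followed by an argsort:

```python
import numpy as np

# Toy normalized vectors stand in for real document embeddings.
rng = np.random.default_rng(42)
doc_vecs = rng.normal(size=(5, 8)).astype(np.float32)
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)

# A query very close to document 2, then normalized.
query_vec = doc_vecs[2] + 0.01 * rng.normal(size=8)
query_vec /= np.linalg.norm(query_vec)

# Cosine similarity of normalized vectors is just a dot product.
scores = doc_vecs @ query_vec
top_k = np.argsort(-scores)[:3]  # indices of the 3 most similar documents
print(top_k, scores[top_k])
```

This is the same computation `cosine_similarity` performs in the step above, made explicit so it is easy to swap in an approximate-nearest-neighbor index for larger collections.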
Step 14: Check Multilinguality
To test multilinguality, write your queries and documents in multiple languages and repeat Steps 11 through 13.
Conclusion
Arctic Embed v2 is a groundbreaking open-source model from Snowflake that brings state-of-the-art AI capabilities to developers and researchers. Following this guide, you can quickly deploy Arctic Embed v2 on a GPU-powered Virtual Machine with NodeShift, harnessing its full potential. NodeShift provides an accessible, secure, affordable platform to run your AI models efficiently. It is an excellent choice for those experimenting with Arctic Embed v2 and other cutting-edge AI tools.