Nemotron-Mini-4B-Instruct is a compact model designed for tasks like roleplaying, retrieval-augmented question answering, and function calling. Built for speed and on-device use, it was fine-tuned from a compressed version of Nemotron-4 15B using pruning, distillation, and quantization techniques. Supporting up to 4,096 tokens of context, it is optimized for English and ready for commercial applications.
Nemotron-Mini-4B-Instruct is built on a transformer decoder architecture with an embedding size of 3072, 32 attention heads, and an intermediate dimension of 9216. It features Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE) for enhanced performance and is based on the Nemotron-4 network design.
Nemotron-Mini-4B-Instruct may produce biased or toxic responses due to training on internet-sourced data and can generate inaccurate or irrelevant answers, especially without using the recommended prompt template. Developers are encouraged to ensure responsible use, address potential misuse, and follow ethical guidelines, as outlined in NVIDIA’s policies.
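Since output quality depends on using the recommended prompt template, it helps to see what that template looks like. The sketch below follows the single-turn format described on the model card (the `<extra_id_*>` strings are the chat delimiter tokens used during fine-tuning; check the official model card for the exact, current format):

```
<extra_id_0>System
{system prompt}

<extra_id_1>User
{user prompt}
<extra_id_1>Assistant
```

Tools like Ollama apply this template for you automatically, so you only need it when calling the model through a raw completion interface.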
Prerequisites for Deploying the Nemotron-Mini Model
Minimum requirements:
- GPUs: 1x RTX A6000 (for smooth execution).
- Disk Space: 40 GB free.
- RAM: 48 GB (24 GB also works, but we use 48 GB for smooth execution).
- CPU: 48 cores (24 cores also work, but we use 48 for smooth execution).
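Once you have a machine, you can sanity-check it against these requirements with a few standard Linux commands (a quick sketch; `nvidia-smi` is only available after the NVIDIA driver is installed):

```shell
# Check CPU core count, RAM, and free disk space against the prerequisites
nproc        # CPU cores (target: 24-48)
free -h      # total RAM (target: 24-48 GB)
df -h /      # free space on the root volume (target: 40 GB+)
# GPU check; requires the NVIDIA driver (present on NodeShift CUDA images)
command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi || echo "nvidia-smi not found"
```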
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift's GPU Virtual Machines: on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs give you fine-grained control over your environment, letting you adjust the GPU, CPU, RAM, and storage configuration to match your requirements.
Navigate to the menu on the left side, select the GPU Nodes option, and click the Create GPU Node button on the Dashboard to create your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Nemotron-Mini on an NVIDIA CUDA Virtual Machine. This proprietary, closed-source parallel computing platform will allow you to install the Nemotron-Mini model on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH or direct SSH command shown in the Connect dialog.
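The Connect dialog gives you a ready-made command. It generally has the shape below; the key path, port, and IP here are placeholders, so substitute the values from your own deployment:

```
ssh -i ~/.ssh/<your-key> -p <port> root@<node-ip>
```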
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Ollama
After completing the steps above, it’s time to download Ollama from the Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
After the installation process is complete, run the following command to see a list of available commands:
ollama
Step 9: Select nemotron-mini Model
Select the nemotron-mini model from the Ollama website:
Link: https://ollama.com/library/nemotron-mini:4b
Step 10: Serve Ollama
Run the following command to start the Ollama server so that the model can be accessed and used locally:
ollama serve
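`ollama serve` runs in the foreground and occupies the terminal. If you'd rather keep it running while you continue working in the same session, one approach (a sketch among several; `tmux` or a systemd service also works, and on most Linux systems the install script already registers Ollama as a systemd service, in which case the server may already be running) is:

```shell
# Start the Ollama server in the background; skip gracefully if it is not installed
if command -v ollama >/dev/null 2>&1; then
  nohup ollama serve > ollama.log 2>&1 &   # server logs go to ollama.log
  MSG="ollama serve started (PID $!)"
else
  MSG="ollama not found on PATH; complete Step 8 first"
fi
echo "$MSG"
```

If `ollama serve` reports that the address is already in use, the service is already up and you can move straight to the next step.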
Step 11: Pull nemotron-mini Model
To pull the nemotron-mini model, run the following command:
ollama pull nemotron-mini:4b
Step 12: Run nemotron-mini Model
Now, you can run the model in the terminal using the following command and interact with your model:
ollama run nemotron-mini:4b
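Besides the interactive prompt, the running server also exposes an HTTP API on port 11434 (Ollama's default), which is useful for scripting. A minimal sketch of a non-streaming generation request; the prompt text is just an example:

```shell
# A non-streaming request to Ollama's /api/generate endpoint (default port 11434)
BODY='{"model": "nemotron-mini:4b", "prompt": "Write one sentence about GPUs.", "stream": false}'
# --max-time keeps the command from hanging if the server is not running
curl -s --max-time 5 http://localhost:11434/api/generate -d "$BODY" \
  || echo "Ollama server not reachable; make sure 'ollama serve' is running"
```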
This is a step-by-step guide to set up the model and run it directly in your terminal. If you prefer to run this model with Open WebUI for a more graphical interface, we have a separate guide titled “Running AI Models with Open WebUI.” You can check out the guide via the link below:
Link: https://nodeshift.com/blog/running-ai-models-with-open-webui
What is Open WebUI?
Open WebUI is a versatile web-based platform designed to integrate smoothly with a range of language processing interfaces, like Ollama and other tools compatible with OpenAI-style APIs. It offers a suite of features that streamline managing and interacting with language models, adaptable for both server and personal use, transforming your setup into an advanced workstation for language tasks.
This platform lets you manage and communicate with language models through an easy-to-use graphical interface, accessible on both desktops and mobile devices. It even incorporates a voice interaction feature, making it as natural as having a conversation.
Conclusion
Nemotron-Mini is a compact yet capable model from NVIDIA that brings advanced capabilities to developers and researchers. By following this step-by-step guide, you can easily deploy it on a cloud-based virtual machine using a GPU-powered setup from NodeShift and get the most out of it. NodeShift provides a user-friendly, secure, and cost-effective platform to run your models efficiently, making it an ideal choice for those exploring Nemotron-Mini and other cutting-edge models.