Microsoft has recently introduced the Phi-4 model, a significant advancement in generative AI. The newest member of Microsoft’s Phi family, it is designed to excel at complex reasoning tasks, particularly mathematical problem-solving. With a streamlined architecture of 14 billion parameters, Phi-4 demonstrates that smaller models can achieve remarkable performance without the extensive computational resources typically associated with larger models. This approach not only improves efficiency but also challenges the prevailing trend in AI development toward ever-larger models with hundreds of billions of parameters.
Benchmark Table for Microsoft Phi-4 Model
| Category | Benchmark | phi-4 (14B) | phi-3 (14B) | Qwen 2.5 (14B instruct) | GPT-4o-mini | Llama-3.3 (70B instruct) | Qwen 2.5 (72B instruct) | GPT-4o |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Popular Aggregated Benchmark | MMLU | 84.8 | 77.9 | 79.9 | 81.8 | 86.3 | 85.3 | 88.1 |
| Science | GPQA | 56.1 | 31.2 | 42.9 | 40.9 | 49.1 | 49.0 | 50.6 |
| Math | MGSM | 80.6 | 53.5 | 79.6 | 86.5 | 89.1 | 87.3 | 90.4 |
| Math | MATH | 80.4 | 44.6 | 75.6 | 73.0 | 66.3* | 80.0 | 74.6 |
| Code Generation | HumanEval | 82.6 | 67.8 | 72.1 | 86.2 | 78.9* | 80.4 | 90.6 |
| Factual Knowledge | SimpleQA | 3.0 | 7.6 | 5.4 | 9.9 | 20.9 | 10.2 | 39.4 |
| Reasoning | DROP | 75.5 | 68.3 | 85.5 | 79.3 | 90.2 | 76.7 | 80.9 |
Phi-4’s capabilities stem from its unique training methodology, which integrates high-quality synthetic datasets with curated organic data. This combination allows the model to perform exceptionally well in STEM-related question-answering and advanced problem-solving scenarios. According to Microsoft, Phi-4 outperforms its larger counterparts, such as Google’s Gemini Pro 1.5, particularly in mathematical reasoning tasks.
The model has achieved impressive benchmarks, scoring 80.4 on the MATH benchmark and excelling in various problem-solving evaluations. Furthermore, Microsoft emphasizes its commitment to responsible AI development by incorporating advanced safety measures within Phi-4, ensuring that it meets ethical standards while delivering superior performance. As organizations increasingly seek efficient AI solutions, Phi-4 represents a pivotal shift towards optimizing model size and capability, potentially redefining industry standards for future AI applications.
Model Release Date:
Model Resource
Ollama
Link: https://ollama.com/vanilj/Phi-4
Prerequisites for Installing Microsoft Phi-4 Locally
Minimum requirements:
- GPU: 1x RTX A6000 (for smooth execution).
- Disk Space: 40 GB free.
- RAM: 48 GB (24 GB also works, but we use 48 GB for smooth execution).
- CPU: 48 cores (24 cores also work, but we use 48 for smooth execution).
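Before proceeding, you can quickly verify that a machine meets these suggestions. A minimal sketch using standard Linux tools (the thresholds above are this guide's recommendations, not hard limits imposed by the model):

```shell
# Print the resources this guide recommends checking before installation.
check_resources() {
  echo "CPU cores: $(nproc)"
  echo "RAM (GB): $(free -g | awk '/^Mem:/{print $2}')"
  echo "Free disk on /: $(df -h --output=avail / | tail -n 1 | tr -d ' ')"
  if command -v nvidia-smi >/dev/null 2>&1; then
    # Show GPU name and total VRAM if the NVIDIA driver is present.
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
  else
    echo "nvidia-smi not found (NVIDIA driver not installed?)"
  fi
}
check_resources
```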
Step-by-Step Process to Install Microsoft Phi-4 Model Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, and click the Create GPU Node button in the Dashboard to start your first Virtual Machine deployment.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
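If you opt for SSH keys, a minimal sketch of generating a key pair locally (the file path and comment here are illustrative; see the official documentation for uploading the public key to your account):

```shell
# Generate an Ed25519 key pair for connecting to the GPU Node.
# The filename "nodeshift_ed25519" is just an example.
keyfile="$HOME/.ssh/nodeshift_ed25519"
mkdir -p "$HOME/.ssh"
# Only generate if the key does not already exist.
[ -f "$keyfile" ] || ssh-keygen -t ed25519 -N "" -f "$keyfile" -C "nodeshift-gpu-node"
# Paste the public key into the provider's dashboard:
cat "${keyfile}.pub"
```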
Step 5: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Microsoft Phi-4 on an NVIDIA CUDA Virtual Machine. CUDA, NVIDIA’s proprietary parallel computing platform, provides the GPU support needed to run Microsoft Phi-4 on your GPU Node.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH or direct SSH connection command.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Install Ollama
After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.
Website Link: https://ollama.com/
Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Step 9: Serve Ollama
Run the following command to start the Ollama server so that models can be accessed and utilized:
ollama serve
The terminal will confirm: “Ollama is running.”
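Once the server is up, you can also talk to it programmatically. A minimal sketch using only the Python standard library, assuming Ollama's default listen address of `localhost:11434` and the `vanilj/Phi-4` tag pulled later in this guide:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the running server and the pulled model):
# print(generate("vanilj/Phi-4", "What is 17 * 23?"))
```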
Step 10: Select Phi-4 Model
Link: https://ollama.com/vanilj/Phi-4
Step 11: Connect with SSH
Now, open a new tab in the terminal and reconnect using SSH.
Step 12: Check Commands
Run the following command to see a list of available commands:
ollama
Step 13: Pull Phi-4 Model
Run the following command to pull the Phi-4 model:
ollama pull vanilj/Phi-4
Step 14: Check Available Model
Run the following command to check that the downloaded model is available:
ollama list
Step 15: Run Phi-4 Model
Now, you can run the model in the terminal using the following command and interact with your model:
ollama run vanilj/Phi-4
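If you later call the server with `"stream": true`, Ollama returns the reply as newline-delimited JSON, one partial chunk per line until `"done"` is true. A small sketch of reassembling such a stream (the sample lines below are simulated, shaped like Ollama's streaming format):

```python
import json

def join_stream(ndjson_lines):
    """Reassemble the full reply from Ollama's streaming NDJSON chunks."""
    parts = []
    for line in ndjson_lines:
        if not line.strip():
            continue  # skip blank lines defensively
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break  # final chunk reached
    return "".join(parts)

# Simulated stream, shaped like Ollama's streaming output:
sample = [
    '{"response": "Phi-4 is ", "done": false}',
    '{"response": "a 14B model.", "done": true}',
]
print(join_stream(sample))  # Phi-4 is a 14B model.
```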
Note: This is a step-by-step guide for interacting with your model. It covers the first method for installing Microsoft Phi-4 locally using Ollama and running it in the terminal.
Option 1: Using Ollama (Terminal)
- Install Ollama: Download and install the Ollama tool from the official site.
- Pull the Model: Run the following command to download the desired model:
ollama pull vanilj/Phi-4
- Run the Model: Start the model in the terminal:
ollama run vanilj/Phi-4
Option 2: Using Open WebUI
- Set Up Open WebUI:
Follow our Open WebUI Setup Guide to configure the interface. Ensure all dependencies are installed and the environment is correctly set up.
- Refresh the Interface:
Confirm that the Microsoft Phi-4 model has been downloaded and is visible in the list of available models on the Open WebUI.
- Select Your Model:
Choose the Microsoft Phi-4 model from the list. This model is available in a single size.
- Start Interaction:
Begin using the model by entering your queries in the interface.
Conclusion
The Phi-4 model is a groundbreaking model from Microsoft that offers advanced capabilities to developers and researchers. By following this step-by-step guide, you can easily install Phi-4 on a cloud-based virtual machine using a GPU-powered setup from NodeShift to maximize its potential. NodeShift provides a user-friendly, secure, and cost-effective platform to run your models efficiently. It’s an ideal choice for those exploring Phi-4 and other cutting-edge models.