How to Install Microsoft AutoGen 0.4 with AutoGen Studio Locally?

by Ayush Kumar | January 24, 2025


AutoGen is a versatile framework for creating multi-agent applications that can work autonomously or collaborate with humans. It provides a complete ecosystem for building multi-agent workflows, including a robust framework, developer tools, and ready-to-use applications.

With its layered and extensible design, AutoGen offers flexibility for developers, from high-level APIs to low-level components. The Core API supports advanced features like message passing, event-driven agents, and cross-language compatibility for .NET and Python. The AgentChat API simplifies rapid prototyping of multi-agent interactions, while the Extensions API allows for continuous enhancement of capabilities, including LLM client support and code execution. Additionally, AutoGen Studio offers a no-code GUI for building applications, and AutoGen Bench includes tools to benchmark agent performance.
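To give a concrete feel for the AgentChat layer, here is a minimal sketch of a two-agent team written against the 0.4 AgentChat API. The model name, agent names, and task are placeholders, and the snippet assumes an OpenAI API key is available in the environment:

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    # Placeholder model; any model supported by the OpenAI client works here.
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    writer = AssistantAgent(
        "writer",
        model_client=model_client,
        system_message="Draft a short answer to the task.",
    )
    reviewer = AssistantAgent(
        "reviewer",
        model_client=model_client,
        system_message="Review the draft and reply APPROVE when it looks good.",
    )

    # The two agents take turns until the reviewer says APPROVE.
    team = RoundRobinGroupChat(
        [writer, reviewer],
        termination_condition=TextMentionTermination("APPROVE"),
    )
    result = await team.run(task="Explain AutoGen in two sentences.")
    for message in result.messages:
        print(f"{message.source}: {message.content}")


asyncio.run(main())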

Fully open source under the MIT License and developed by Microsoft Corporation, AutoGen is an excellent opportunity for open-source contributors to shape the future of multi-agent workflows. Explore, contribute, and build innovative solutions with AutoGen!

Resource

GitHub: https://github.com/microsoft/autogen

Prerequisites for Installing AutoGen

Make sure you have the following:

  • GPUs: 1x RTX A6000 (for smooth execution).
  • Disk Space: 40 GB free.
  • RAM: 48 GB (24 GB also works, but we use 48 GB for smooth execution).
  • CPU: 48 cores (24 cores also work, but we use 48 for smooth execution).

Step-by-Step Process to Install AutoGen

For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.

Step 1: Sign Up and Set Up a NodeShift Cloud Account

Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.

Follow the account setup process and provide the necessary details and information.

Step 2: Create a GPU Node (Virtual Machine)

GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.

Navigate to the menu on the left side, select the GPU Nodes option in the dashboard, and click the Create GPU Node button to create your first Virtual Machine deployment.

Step 3: Select a Model, Region, and Storage

In the “GPU Nodes” tab, select a GPU model and storage size according to your needs, along with the geographical region where you want to launch your machine.

We will use 1x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.

Step 4: Select Authentication Method

There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.

Step 5: Choose an Image

Next, you will need to choose an image for your Virtual Machine. We will deploy AutoGen on an NVIDIA CUDA Virtual Machine. This image ships with NVIDIA’s proprietary parallel computing platform, which allows you to install and run AutoGen on your GPU Node.

After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.

Step 6: Virtual Machine Successfully Deployed

You will get visual confirmation that your node is up and running.

Step 7: Connect to GPUs using SSH

NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.

Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.

Now open your terminal and paste the proxy SSH command or the direct SSH command shown on the Connect page.

Next, if you want to check the GPU details, run the command below:

nvidia-smi

Step 8: Check the Available Python Version and Install a New Version

Run the following command to check the Python version currently available:

python3 --version

The system ships with Python 3.8.1 by default. To install a higher version of Python, you’ll need to use the deadsnakes PPA.

Run the following commands to add the deadsnakes PPA:

sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update

Step 9: Install Python 3.11

Now, run the following command to install Python 3.11 or another desired version:

sudo apt install -y python3.11 python3.11-distutils python3.11-venv

Step 10: Update the Default Python3 Version

Now, run the following command to link the new Python version as the default python3:

sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo update-alternatives --config python3

Then, run the following command to verify that the new Python version is active:

python3 --version

Step 11: Install and Update Pip

Run the following commands to install and update pip:

python3 -m ensurepip --upgrade
python3 -m pip install --upgrade pip

Then, run the following command to check the version of pip:

pip --version

Step 12: Install AutoGen

Run the following command to install the AutoGen AgentChat package and the OpenAI extension:

pip install -U "autogen-agentchat" "autogen-ext[openai]"

Step 13: Create OpenAI API Key

To use the OpenAI API, you need to create an API key. This key will allow you to securely access OpenAI’s services. Follow these steps to generate your API key:

  • Log In to OpenAI:

Visit the OpenAI platform and log in to your account. If you do not have an account, you will need to sign up.

  • Access the API Section:

Once logged in, navigate to the top right corner of the page where your profile icon is located. Click on it and select API from the dropdown menu. Alternatively, you can directly access the API section by clicking on API in the main dashboard.

  • Create a New Secret Key:

In the API section, look for an option that says Create new secret key or View API Key. Click on this option.

  • Generate the Key:

After clicking on create, a new API key will be generated for you. Make sure to copy this key immediately as it will only be shown once.

Step 14: Export OpenAI API Key

Run the following command to export the OpenAI API Key:

export OPENAI_API_KEY="your api key"
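With the key exported, you can optionally run a quick smoke test before moving on to AutoGen Studio. This is only a minimal sketch based on the 0.4 AgentChat API: the file name and the gpt-4o-mini model are illustrative, and it assumes the OpenAI client picks up OPENAI_API_KEY from the environment:

# smoke_test.py — minimal single-agent round trip (file name and model are examples)
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    # No api_key argument is passed, so the client reads OPENAI_API_KEY from the environment.
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    agent = AssistantAgent("assistant", model_client=model_client)
    result = await agent.run(task="Reply with the single word: ready")
    print(result.messages[-1].content)


asyncio.run(main())

If the script prints a response from the model, your API key and installation are working.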

Step 15: Install AutoGen Studio

Run the following command to install AutoGen Studio:

pip install -U "autogenstudio"

Step 16: Run AutoGen Studio

Finally, execute the following command to launch AutoGen Studio; the --appdir flag sets the directory where Studio stores its files:

autogenstudio ui --port 8080 --appdir ./my-app

Step 17: SSH Port Forwarding

To forward local port 8080 on your Windows machine to port 8080 on the VM, use the following command in Command Prompt or PowerShell:

ssh -i "C:\Users\Acer\.ssh\id_rsa" -L 8080:127.0.0.1:8080 root@85.236.58.220 -p 11206

Explanation of the Command:

  • ssh: Initiates the SSH connection.
  • -i "C:\Users\Acer\.ssh\id_rsa": Specifies the path to your private SSH key.
  • -L 8080:127.0.0.1:8080: Sets up local port forwarding. This forwards traffic from your local port 8080 to port 8080 on the remote machine.
    • 8080: Local port on your machine.
    • 127.0.0.1: Refers to the localhost of the remote machine.
    • 8080: Remote port on the server where the application is running.
  • root@85.236.58.220: Logs into the server as the root user at the given IP.
  • -p 11206: Specifies the custom SSH port (11206) of the server.

Step 18: Access the Application

After running the command, you can access the application running on port 8080 of the remote server by visiting http://127.0.0.1:8080 in your local web browser.

Step 19: Create a Session

Create a new session in AutoGen Studio.

Step 20: Play with AutoGen Studio

Once your session is created, you can start experimenting with multi-agent workflows directly from the AutoGen Studio interface.

Conclusion

In this guide, we introduced AutoGen, an open-source framework under the MIT License, designed for building versatile multi-agent applications that can work autonomously or in collaboration with humans. We walked through a detailed step-by-step tutorial on setting up AutoGen on a GPU-powered virtual machine using NodeShift, covering everything from configuring your environment and installing dependencies to running AutoGenStudio for application development. By following this guide, you’ve learned how to install the required software, set up tools like Python and pip, and efficiently deploy and access AutoGen applications for your specific use cases. Whether you’re a developer or an open-source enthusiast, AutoGen opens the door to creating and optimizing multi-agent workflows with ease.

