Docker has revolutionized how we build, ship, and run applications, making it a must-have tool for developers and organizations. By leveraging containerization, Docker ensures that your apps run consistently across different environments, eliminating the “it works on my machine” problem. As an open-source tool, it has become the industry standard for containerization, with ever-increasing adoption in cloud-native, DevOps, and microservices workflows.
With that in mind, we have created this comprehensive guide to walk you through installing Docker on Ubuntu 22.04. Whether you’re already experienced or just getting started with containerized environments, this article will help you set up Docker so that you can unleash its full potential. By the end, you’ll be ready to start effortlessly creating, managing, and deploying containers.
Prerequisites
- A Virtual Machine (such as the ones provided by NodeShift) with:
  - 2 vCPUs
  - 2 GB RAM
  - 10 GB SSD
- Ubuntu 22.04 as the operating system
Note: The prerequisites for this are highly variable across use cases. One could use a high-end configuration for a large-scale deployment.
Step-by-step process to install Docker on Ubuntu 22.04
For this tutorial, we’ll use a CPU-powered Virtual Machine by NodeShift, which provides high-compute Virtual Machines at a very affordable cost on a scale that meets GDPR, SOC2, and ISO27001 requirements. Also, it offers an intuitive and user-friendly interface, making it easier for beginners to get started with Cloud deployments. However, feel free to use any cloud provider you choose and follow the same steps for the rest of the tutorial.
Step 1: Setting up a NodeShift Account
Visit app.nodeshift.com and create an account by filling in basic details, or continue signing up with your Google/GitHub account.
If you already have an account, log in to go straight to your dashboard.
Step 2: Create a Compute Node (CPU Virtual Machine)
After accessing your account, you should see a dashboard (see image). From there:
- Navigate to the menu on the left side.
- Click on the Compute Nodes option.
- Click Start to begin creating your very first compute node.
These Compute Nodes are CPU-powered virtual machines by NodeShift. They are highly customizable and let you control different environment configurations, such as vCPUs, RAM, and storage, according to your needs.
Step 3: Select configuration for VM
- The first option you see is the Reliability dropdown. This option lets you choose the uptime guarantee level you seek for your VM (e.g., 99.9%).
- Next, select a geographical region from the Region dropdown where you want to launch your VM (e.g., United States).
- Most importantly, select the correct specifications for your VM according to your workload requirements by sliding the bars for each option.
Step 4: Choose VM Configuration and Image
- After selecting your required configuration options, you’ll see the available VMs in your region and as per (or very close to) your configuration. In our case, we’ll choose a ‘4 vCPUs/4GB/80GB SSD’ Compute node.
- Next, you’ll need to choose an image for your Virtual Machine. For the scope of this tutorial, we’ll select Ubuntu, as we will deploy Docker on Ubuntu 22.04.
Step 5: Choose the Billing cycle and Authentication Method
- Two billing cycle options are available: Hourly, ideal for short-term usage, offering pay-as-you-go flexibility, and Monthly for long-term projects with a consistent usage rate and potentially lower cost.
- Next, you’ll need to select an authentication method. Two methods are available: Password and SSH Key. We recommend using SSH keys, as they are a more secure option. To create one, head over to our official documentation.
Step 6: Finalize Details and Create Deployment
Finally, if you want, you can also add a VPC (Virtual Private Cloud), which provides an isolated section for you to launch your cloud resources (Virtual machine, storage, etc.) in a secure, private environment. We’re keeping this option as the default for now, but feel free to create a VPC according to your needs.
Also, you can deploy multiple nodes at once by clicking `+` in the Quantity option.
That’s it! You are now ready to deploy the node. Review the configuration summary; if it looks good, go ahead and click Create to deploy the node.
Step 7: Connect to active Compute Node using SSH
As soon as you create the node, it will be deployed within a few seconds to a minute. Once deployed, you will see a green Running status, meaning your Compute Node is ready to use!
Once your node shows this status, follow the below steps to connect to the running VM via SSH:
- Open your terminal and run the SSH command below (replace `root` with your username, and paste the IP of your VM in place of `ip` after copying it from the dashboard):
ssh root@ip
- If SSH keys are set up, the terminal will authenticate automatically.
- In some cases, your terminal may ask for your consent before connecting. Enter ‘yes’, and you should be connected.
Output:
Step 8: Install Dependencies
Before we install Docker, we need to install some required dependencies.
- Let’s start by updating the Ubuntu package source list to fetch the latest package versions and security updates:
sudo apt update
Output:
- Install the dependency packages
sudo apt install apt-transport-https ca-certificates curl software-properties-common
Output:
- `apt-transport-https`: Allows `apt` to download packages over HTTPS, ensuring secure communication when fetching Docker packages.
- `ca-certificates`: Provides trusted CA certificates to validate the authenticity of Docker’s HTTPS endpoints.
- `curl`: A command-line tool used to fetch Docker’s GPG key and verify repository integrity during the installation process.
- `software-properties-common`: Provides tools to manage `apt` repositories, enabling you to add Docker’s official repository to the package manager.
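As a quick sanity check, you can confirm that these helper packages are present before proceeding. This is a minimal sketch, assuming a Debian/Ubuntu host where `dpkg` is available:

```shell
# Sketch: confirm the helper packages installed above are present
# (assumes a Debian/Ubuntu system where dpkg is available).
for pkg in apt-transport-https ca-certificates curl software-properties-common; do
  if dpkg -s "$pkg" >/dev/null 2>&1; then
    echo "$pkg: installed"
  else
    echo "$pkg: missing"
  fi
done
```

Each line of output reports one package, so you can immediately see which, if any, still needs to be installed.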
Step 9: Install Docker
- Use `curl` to download Docker’s GPG key and add it to the system keyring:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
Output:
- Add Docker’s APT repository to the system’s source list
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
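The two shell substitutions in that command are worth unpacking: `dpkg --print-architecture` resolves to your CPU architecture (e.g. `amd64`), and `lsb_release -cs` resolves to the Ubuntu release codename (`jammy` for 22.04). If `lsb_release` is unavailable, the codename can also be read from `/etc/os-release`. A minimal sketch:

```shell
# Sketch: resolve the values the Docker repository line is built from.
# On Ubuntu 22.04, VERSION_CODENAME is "jammy"; the architecture is
# typically "amd64" on x86_64 VMs.
. /etc/os-release
echo "Codename: ${VERSION_CODENAME}"
echo "Arch: $(dpkg --print-architecture 2>/dev/null || uname -m)"
```

Checking these values first helps catch a mismatched repository line before `apt update` reports an error.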
- Update the package source list again
sudo apt update
Output:
- Install Docker
sudo apt install docker-ce -y
Output:
- Verify that Docker is correctly installed and the service is running:
sudo systemctl status docker
Output:
Step 10: Working with Docker Images
Docker images are like templates for creating containers. They contain everything needed to run an application, including the code, runtime, libraries, and dependencies. You can pull images from Docker Hub or create your own for specific applications.
Docker Hub is an online library of pre-built Docker images that you can use; you can also upload your own for others to use.
- Let’s run a simple test container named `hello-world` as a warm-up to check that Docker is running correctly:
docker run hello-world
Output:
As you can see above, once we run the command, Docker looks for the `hello-world` image locally; if it can’t find it, it pulls the image from Docker Hub and creates a container from it. After printing its output to the terminal, the container exits and stops running.
- To see the images currently downloaded on the system:
docker images
Output:
- Let’s see how to download a new image to the system:
a) Search the image on Docker Hub
(replace `kubernetes` with the image you want to search for)
docker search kubernetes
Output:
b) Download an image
Now, when we search for `kubernetes`, the hub shows all the available Docker images related to Kubernetes. Let’s go ahead and download one of them, e.g. `gofunky/kubernetes`:
docker pull gofunky/kubernetes
Output:
c) Verify the downloaded image
Log the image list again to check that it now includes our newly downloaded image.
docker images
Our Docker image has been successfully downloaded to the system! Feel free to try pulling new images by following the same steps.
Step 11: Working with Docker Containers
Now, let’s try running some basic commands that can get you started working with Docker containers.
- Create a container
By now, you may have already understood that Docker creates containers from locally downloaded Docker images. So, let’s try creating a container using our newly downloaded `gofunky/kubernetes` image.
docker run -d -p 8080:80 gofunky/kubernetes
Output:
The output is the container ID in alphanumeric form, confirming that our container has been successfully created and assigned an ID. (The `-d` flag runs the container in detached mode, i.e. in the background, while `-p 8080:80` maps port 8080 on the host to port 80 inside the container.)
- To log the details of the most recently created container:
docker ps -l
Output:
- To view the list of all containers, both active and inactive:
docker ps -a
Output:
- In case you want to remove a container
(replace `<container_id>` with the ID of the container you want to remove, as shown in the list from the previous step; if the container is still running, stop it first with `docker stop <container_id>`, or force-remove it with `docker rm -f`)
docker rm <container_id>
Output:
- Verify if the container is deleted
Log the container list again to check if the container has been removed.
docker ps -a
Output:
Note: If you want to remove a Docker image, you must first remove any containers built from that image. Since we have already removed the `gofunky/kubernetes` container, let’s try removing the image as well.
To remove an image:
(replace `<image_id>` with the ID of the respective image from the image list)
docker rmi <image_id>
Output:
Verify if the image has been deleted by logging the image list:
docker images
Output:
As you can see, the image has been successfully removed from the system.
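Tying the last two steps together, here is a hedged sketch for cleaning up every container built from a given image before removing the image itself. The `ancestor` filter and `rm -f` (which stops running containers before removing them) are standard Docker CLI features; the image name is just the example from this tutorial:

```shell
# Sketch: remove all containers created from an image, then the image.
IMAGE="gofunky/kubernetes"

# Force-remove (-f stops running containers first) every container
# whose ancestor is $IMAGE; xargs -r skips the call if the list is empty
docker ps -aq --filter "ancestor=$IMAGE" | xargs -r docker rm -f

# Then remove the image itself (ignore the error if it is already gone)
docker rmi "$IMAGE" || true
```

This pattern is handy when a single image has spawned several containers and removing them one by one would be tedious.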
Conclusion
This guide covered the step-by-step approach for setting up Docker on Ubuntu 22.04, along with basic instructions for working with Docker images and containers. By following the steps outlined, you now have a robust environment ready for your operations. Furthermore, we deployed our VM on NodeShift, which helped streamline the deployment process and made the setup easier with its developer-friendly interface. Whether you’re building applications for your own projects or deploying at scale in the cloud, NodeShift offers customized and affordable solutions to optimize your workflows and enhance resource efficiency for your development pipelines.