20 Nov 2025
Containers changed everything. If you're learning DevOps, understanding Docker isn't optional—it's fundamental. Every modern platform, from AWS to Google Cloud, from CI/CD pipelines to Kubernetes, relies on containerization.
In this post, I'll walk you through what Docker is, why it exists, and how to use it—from running your first containers to building your own images. This is hands-on learning: we'll run commands, build applications, and understand the core concepts that power modern infrastructure.
Software behaves differently across environments. An application that works perfectly on your laptop might crash in staging or production due to:
- Different operating system or library versions
- Missing or mismatched dependencies
- Different environment variables and configuration
Docker's solution: Each application runs in an isolated container that behaves exactly the same everywhere—on your laptop, in staging, in production, or in the cloud.
Containers are:
- Lightweight: they share the host kernel instead of shipping a full OS
- Portable: the same image runs anywhere Docker runs
- Fast: they start in seconds rather than minutes
- Consistent: the environment inside the image never drifts

This is why containers power:
- Microservices architectures
- CI/CD pipelines
- Kubernetes and the cloud-native ecosystem
Before Docker, the standard for application isolation was virtual machines. Understanding the difference is crucial.
Virtual machines virtualize the hardware using a hypervisor (like VMware or VirtualBox).
Each VM contains:
- A full guest operating system with its own kernel
- Virtualized hardware (CPU, memory, disk, network)
- The application and its dependencies
Advantages:
- Strong isolation between workloads
- Each VM can run a completely different operating system

Disadvantages:
- Heavy: every VM carries a full OS, often gigabytes in size
- Slow to start: booting takes minutes
- Significant CPU and memory overhead per VM
Containers share the host machine's Linux kernel and only package what's unique.
Each container contains:
- The application code
- Its dependencies and runtime
- Just enough userland (libraries and binaries) to run
The kernel itself comes from the host.
Key technologies enabling containers:
- Namespaces: isolate what a process can see (process tree, network, filesystem, users)
- Control groups (cgroups): limit what a process can use (CPU, memory, I/O)
- Union filesystems: stack image layers efficiently
Result: You can run hundreds of containers on the same hardware that might only support 20 VMs.
Starting a VM: the hypervisor boots a full operating system, which typically takes minutes.
Starting a container: Docker starts a single isolated process, which typically takes under a second.
This speed difference changes everything about how we develop and deploy software.
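You can measure the gap on your own machine. A quick sketch, assuming Docker is installed and can pull the tiny alpine image:

```shell
# Time a full container lifecycle: create, run one command, remove.
# --rm deletes the container as soon as the command exits.
time docker run --rm alpine echo "container started"
```

After the first image pull, the whole lifecycle (create, run, destroy) usually finishes in well under a second; a VM boot is nowhere near that.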
Let's get hands-on. These examples demonstrate Docker's core capabilities.
Run Ubuntu in the background:
docker run -d ubuntu sleep infinity
What just happened:
- Docker pulled the ubuntu image (if it wasn't already local)
- It started a container in the background (-d)
- The container runs sleep infinity to keep it alive

Check that it's running:
docker ps
You'll see your container with a unique ID, the image name, and the command it's running. To get a shell inside it, use docker exec:
docker exec -it <container_id> bash
Flags explained:
- -i: Interactive mode (keep STDIN open)
- -t: Allocate a terminal (TTY)

You're now inside the container with a full Linux environment—completely isolated from your host machine.
Inside the container, check the OS version:
cat /etc/os-release
Open another terminal on your host and run the same command:
cat /etc/os-release
Different OS versions running on the same machine—instantly, with minimal overhead.
Exit the container and check resources:
docker stats
Even multiple containers consume surprisingly little RAM—this is the power of shared kernel architecture.
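The shared kernel also makes resource limits cheap to enforce. A sketch using standard docker run flags, assuming Docker is installed:

```shell
# Start a container capped at 256 MB of RAM and half a CPU core
docker run -d --name limited --memory=256m --cpus=0.5 ubuntu sleep infinity

# One-shot stats snapshot: the MEM USAGE / LIMIT column reflects the cap
docker stats --no-stream limited

# Clean up
docker rm -f limited
```

Under the hood these flags translate directly into cgroup limits, which is exactly the kernel feature described above.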
Docker Hub is a registry of pre-built images—think of it as GitHub for containers.
Finding images:
docker search debian
Running different versions instantly:
docker run -d debian:bullseye
docker run -d debian:buster
Each runs a different Debian version, isolated, on the same host. No dual-boot, no VMs, no complexity.
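If you only want to inspect an image rather than keep a container around, the --rm flag deletes the container as soon as its command exits. A small sketch using the same Debian images:

```shell
# Run a one-off command in each image; the container is removed on exit
docker run --rm debian:bullseye cat /etc/os-release
docker run --rm debian:buster cat /etc/os-release
```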
Popular images you'll use:
- nginx: web server and reverse proxy
- postgres / mysql: relational databases
- redis: in-memory data store
- node / python: language runtimes
- alpine: minimal Linux base image
Now for the real power—containerizing your own applications.
Let's containerize a simple Node.js web server.
Project structure:
myapp/
├── server.js
├── package.json
└── Dockerfile
server.js:
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('Hello from Docker!');
});
app.listen(3000, () => {
console.log('Server running on port 3000');
});
package.json:
{
"name": "myapp",
"version": "1.0.0",
"dependencies": {
"express": "^4.18.0"
}
}
A Dockerfile is a recipe for building container images.
Dockerfile:
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Understanding each instruction:
- FROM node:16-alpine: start from a minimal Node.js 16 base image
- WORKDIR /app: set the working directory inside the image
- COPY package*.json ./: copy the dependency manifests first (this matters for caching, as we'll see below)
- RUN npm install: install dependencies
- COPY . .: copy the rest of the application code
- EXPOSE 3000: document that the app listens on port 3000
- CMD ["node", "server.js"]: the command the container runs at startup
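One caveat with COPY . .: it copies everything in the build context, including a locally installed node_modules folder if you've run npm install on your machine. A .dockerignore file (same idea as .gitignore) keeps those out; a minimal sketch for this project:

```
node_modules
npm-debug.log
.git
```

With node_modules excluded, the npm install inside the image is the only source of dependencies, which keeps builds reproducible.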
From your project directory:
docker build -t myapp .
What happens:
- Docker reads the Dockerfile in the current directory (.)
- It executes each instruction in order, creating a layer for each
- The finished image is tagged (-t) as myapp

Watch the build process—you'll see each step executed.
Start the container and map ports:
docker run -p 3001:3000 myapp
Port mapping explained:
- -p 3001:3000: Maps host port 3001 to container port 3000
- Requests to localhost:3001 are forwarded to the container

Visit in your browser:
http://localhost:3001
You should see "Hello from Docker!"
Your application is running inside an isolated container—same environment everywhere.
For production-like deployment:
docker run -d -p 3001:3000 --name myapp-container myapp
Flags:
- -d: Detached mode (runs in background)
- --name: Assign a memorable name

Managing the container:
# View logs
docker logs myapp-container
# Follow logs in real-time
docker logs -f myapp-container
# Stop container
docker stop myapp-container
# Start stopped container
docker start myapp-container
# Remove container
docker rm myapp-container
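Stopped containers and leftover images accumulate over time. Docker ships prune commands for bulk cleanup (standard CLI subcommands; -f skips the confirmation prompt):

```shell
# Remove all stopped containers
docker container prune -f

# Remove dangling (untagged) images left over from builds
docker image prune -f

# See how much disk space Docker is using overall
docker system df
```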
Once you've built an image, you can share it with the world (or your team).
Visit hub.docker.com and create a free account.
docker login
Enter your credentials.
Images need proper naming for pushing:
docker tag myapp YOUR-USERNAME/myapp:v1.0
Tag format: username/repository:tag
docker push YOUR-USERNAME/myapp:v1.0
Now anyone (or you on another machine) can run:
docker pull YOUR-USERNAME/myapp:v1.0
docker run -p 3001:3000 YOUR-USERNAME/myapp:v1.0
This is how teams share images and how CI/CD pipelines deploy applications.
Docker images are built in layers—understanding this is crucial for optimization.
View image layers:
docker history myapp
Each Dockerfile instruction creates a layer. Layers are:
- Cached: unchanged layers are reused on the next build
- Shared: images built from the same base reuse its layers
- Stacked: each layer records only the changes on top of the one below

Optimization strategy: order instructions from least to most frequently changed, so the build cache stays valid as long as possible.

This is why we COPY package*.json before COPY . . in our Dockerfile: dependencies change rarely, so the npm install layer is rebuilt only when package.json changes, not on every code edit.
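For contrast, here is the same Dockerfile with the copy steps collapsed into one. This is the anti-pattern the caching strategy avoids, shown for comparison only:

```dockerfile
FROM node:16-alpine
WORKDIR /app
# Anti-pattern: copying all source before installing means any
# code change invalidates this layer...
COPY . .
# ...so this npm install re-runs on every build, even when
# package.json hasn't changed
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
```

Both versions produce a working image; the difference only shows up in rebuild time.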
Image: Blueprint for containers
Container: Running instance of an image
Analogy: Image is a class, container is an instance.
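The analogy is easy to see in practice: one image, many containers. A sketch assuming the myapp image built earlier:

```shell
# Two independent containers ("instances") from one image ("class"),
# each published on its own host port
docker run -d --name web1 -p 3001:3000 myapp
docker run -d --name web2 -p 3002:3000 myapp

# Both show up as separate running containers
docker ps

# Stopping one instance leaves the other untouched
docker stop web1
```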
# Create and start
docker run image-name
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop container
docker stop container-id
# Remove container
docker rm container-id
# Remove image
docker rmi image-name
# Images
docker images # List images
docker pull image:tag # Download image
docker rmi image:tag # Remove image
docker build -t name . # Build image
# Containers
docker ps # List running containers
docker ps -a # List all containers
docker run image # Create and start container
docker start container-id # Start stopped container
docker stop container-id # Stop running container
docker rm container-id # Remove container
docker exec -it container bash # Enter container
# Logs and debugging
docker logs container-id # View logs
docker logs -f container-id # Follow logs
docker inspect container-id # Detailed info
docker stats # Resource usage
Building and running containers taught me:
- The difference between an image and a container, and why it matters
- How containers achieve isolation without the overhead of VMs
- How layers and build caching shape the way you write a Dockerfile
- Why identical environments everywhere end the "works on my machine" problem
These concepts form the foundation for everything else in modern DevOps—from CI/CD to Kubernetes.
Future topics I'm exploring:
- Docker Compose for multi-container applications
- Volumes and container networking
- Container orchestration with Kubernetes
- Building and pushing images from CI/CD pipelines
Each of these builds on the Docker fundamentals covered here.
Docker revolutionized how we build, ship, and run applications. By packaging software with its dependencies into lightweight, portable containers, Docker solved the "works on my machine" problem and enabled the cloud-native revolution.
Understanding containers isn't just about learning a tool—it's about understanding the foundation of modern infrastructure. Whether you're deploying microservices, building CI/CD pipelines, or learning Kubernetes, Docker is where it all begins.
Every project I build now starts with a Dockerfile. You can see my work on my GitHub, where I document my DevOps learning journey.