Docker: A Beginner-Friendly Introduction (and Why It Matters in Real DevOps Work)

20 Nov 2025

Introduction

Containers changed everything. If you're learning DevOps, understanding Docker isn't optional—it's fundamental. Every modern platform, from AWS to Google Cloud, from CI/CD pipelines to Kubernetes, relies on containerization.

In this post, I'll walk you through what Docker is, why it exists, and how to use it—from running your first containers to building your own images. This is hands-on learning: we'll run commands, build applications, and understand the core concepts that power modern infrastructure.

What Problem Does Docker Solve?

Software behaves differently across environments. An application that works perfectly on your laptop might crash in staging or production due to:

  • Different operating system and library versions
  • Missing or mismatched dependencies
  • Configuration and environment-variable differences
  • Other software already installed on the target machine

Docker's solution: Each application runs in an isolated container that behaves exactly the same everywhere—on your laptop, in staging, in production, or in the cloud.

Why Containers Matter

Containers are:

  • Lightweight: they share the host kernel instead of booting a full OS
  • Fast: they start in milliseconds rather than minutes
  • Portable: the same image runs on a laptop, a server, or in the cloud
  • Consistent: the environment inside the container is identical everywhere it runs

This is why containers power:

  • Microservice architectures
  • CI/CD pipelines
  • Kubernetes and other orchestrators
  • Managed cloud platforms such as AWS and Google Cloud

Understanding the Technology: VMs vs Containers

Before Docker, the standard for application isolation was virtual machines. Understanding the difference is crucial.

Virtual Machines

Virtual machines virtualize the hardware using a hypervisor (like VMware or VirtualBox).

Each VM contains:

  • A full guest operating system
  • Its own kernel
  • System libraries and services
  • The application and its dependencies

Advantages:

  • Strong isolation, since each VM runs its own kernel
  • Freedom to run a completely different OS than the host

Disadvantages:

  • Heavy: each VM carries gigabytes of OS on top of the application
  • Slow to start: booting a full OS takes minutes
  • Wasteful: every guest OS consumes CPU and RAM, limiting how many VMs fit on one host

Containers

Containers share the host machine's Linux kernel and only package what's unique.

Each container contains:

  • The application itself
  • Its libraries and dependencies
  • A minimal filesystem, but no guest OS and no separate kernel

Key technologies enabling containers:

  • Namespaces: isolate what each container can see (processes, network, filesystem)
  • Control groups (cgroups): limit how much CPU and memory each container can use
  • Layered (union) filesystems: let containers share common image layers instead of duplicating them

Result: You can run hundreds of containers on the same hardware that might only support 20 VMs.
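
You can see cgroups in action straight from the Docker CLI. Here's a small sketch (the limits, image, and container name are just examples): resource caps are applied per container when it starts.

# Cap the container at 256 MB of RAM and half a CPU core (enforced via cgroups)
docker run -d --memory=256m --cpus=0.5 --name limited ubuntu sleep infinity

# Ask Docker what memory limit it recorded for the container (in bytes)
docker inspect --format '{{.HostConfig.Memory}}' limited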

The Practical Difference

Starting a VM:

  1. Boot entire OS
  2. Initialize kernel
  3. Start services
  4. Load application

Time: Minutes

Starting a Container:

  1. Start isolated process
  2. Load application

Time: Milliseconds

This speed difference changes everything about how we develop and deploy software.
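
You can measure this yourself. A quick check (assuming Docker is installed and the tiny alpine image can be pulled): once the image is cached locally, the whole run typically finishes in a fraction of a second.

time docker run --rm alpine echo "hello from a container"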

Running Your First Container

Let's get hands-on. These examples demonstrate Docker's core capabilities.

Step 1: Start an Ubuntu Container

Run Ubuntu in the background:

docker run -d ubuntu sleep infinity

What just happened:

  • Docker pulled the ubuntu image from Docker Hub (if it wasn't already cached locally)
  • It created a new container from that image
  • -d started the container in detached mode, in the background
  • sleep infinity keeps the main process alive so the container doesn't exit immediately

Step 2: List Running Containers

docker ps

You'll see your container with a unique ID, the image name, and the command it's running.
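
The output looks roughly like this (the ID, timing, and auto-generated name will differ on your machine):

CONTAINER ID   IMAGE     COMMAND            CREATED          STATUS          PORTS     NAMES
3f2a1b4c5d6e   ubuntu    "sleep infinity"   10 seconds ago   Up 9 seconds              eager_morse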

Step 3: Enter the Container

docker exec -it <container_id> bash

Flags explained:

  • -i keeps STDIN open so you can type into the session (interactive)
  • -t allocates a pseudo-terminal so you get a proper shell prompt
  • bash is the command to run inside the container

You're now inside the container with a full Linux environment—completely isolated from your host machine.

Step 4: Explore the Isolation

Inside the container, check the OS version:

cat /etc/os-release

Open another terminal on your host and run the same command:

cat /etc/os-release

Different OS versions running on the same machine—instantly, with minimal overhead.
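
If you'd rather not juggle two terminals, you can run the comparison from the host in one step (this assumes a Linux host with /etc/os-release; on macOS or Windows the host side will look different):

docker exec <container_id> cat /etc/os-release   # the container's OS (Ubuntu)
cat /etc/os-release                               # your host's OS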

Step 5: Monitor Resource Usage

Exit the container and check resources:

docker stats

Even multiple containers consume surprisingly little RAM—this is the power of shared kernel architecture.

Docker Hub: The Container Registry

Docker Hub is a registry of pre-built images—think of it as GitHub for containers.

Finding images:

docker search debian

Running different versions instantly:

docker run -d debian:bullseye sleep infinity
docker run -d debian:buster sleep infinity

Each runs a different Debian version, isolated, on the same host. No dual-boot, no VMs, no complexity.
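
To confirm they really are different releases, print each image's version file; the exact point versions depend on when the images were last rebuilt:

docker run --rm debian:bullseye cat /etc/debian_version   # prints an 11.x version
docker run --rm debian:buster cat /etc/debian_version     # prints a 10.x version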

Popular images you'll use:

  • nginx: web server and reverse proxy
  • postgres and mysql: databases
  • redis: in-memory cache
  • node and python: language runtimes
  • alpine: a tiny base image for building your own

Building Your Own Images: The Dockerfile

Now for the real power—containerizing your own applications.

The Application

Let's containerize a simple Node.js web server.

Project structure:

myapp/
├── server.js
├── package.json
└── Dockerfile

server.js:

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello from Docker!');
});

app.listen(3000, () => {
  console.log('Server running on port 3000');
});

package.json:

{
  "name": "myapp",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.0"
  }
}

Creating the Dockerfile

A Dockerfile is a recipe for building container images.

Dockerfile:

FROM node:16-alpine

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD ["node", "server.js"]

Understanding each instruction:

  1. FROM node:16-alpine

    • Start from Node.js 16 base image
    • Alpine variant is minimal (roughly 110 MB, versus roughly 900 MB for the Debian-based node:16)
  2. WORKDIR /app

    • Set working directory inside container
    • All subsequent commands run from here
  3. COPY package*.json ./

    • Copy package files first (before source code)
    • Leverages Docker layer caching for faster builds
  4. RUN npm install

    • Install dependencies inside the container
    • This layer is cached until package.json changes
  5. COPY . .

    • Copy application source code
    • Done after dependencies for better caching
  6. EXPOSE 3000

    • Documents which port the application uses
    • Doesn't actually publish the port
  7. CMD ["node", "server.js"]

    • Default command when container starts
    • Can be overridden at runtime

Building the Image

From your project directory:

docker build -t myapp .

What happens:

  • Docker sends the build context (the current directory, .) to the Docker daemon
  • Each Dockerfile instruction runs in order, and each produces a new image layer
  • The finished image is tagged myapp because of the -t flag

Watch the build process—you'll see each step executed.
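
Two details from the Dockerfile are easy to verify now that the image exists: CMD is only a default that you can override at run time, and EXPOSE by itself doesn't publish anything (the container name below is just an example):

# Override CMD: run a one-off command in the image instead of starting the server
docker run --rm myapp node --version

# EXPOSE alone doesn't publish the port; without -p, nothing is reachable from the host
docker run -d --name no-ports myapp
docker port no-ports    # prints nothing, because no ports were published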

Running Your Container

Start the container and map ports:

docker run -p 3001:3000 myapp

Port mapping explained:

  • 3001 is the port on your host machine
  • 3000 is the port the app listens on inside the container
  • Requests to localhost:3001 are forwarded to port 3000 inside the container

Visit in your browser:

http://localhost:3001

You should see "Hello from Docker!"
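
You can also check from the command line:

curl http://localhost:3001
# Hello from Docker!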

Your application is running inside an isolated container—same environment everywhere.

Running in Background

For production-like deployment:

docker run -d -p 3001:3000 --name myapp-container myapp

Flags:

  • -d: run detached, in the background
  • -p 3001:3000: map host port 3001 to container port 3000
  • --name myapp-container: give the container a readable name instead of a random one

Managing the container:

# View logs
docker logs myapp-container

# Follow logs in real-time
docker logs -f myapp-container

# Stop container
docker stop myapp-container

# Start stopped container
docker start myapp-container

# Remove container
docker rm myapp-container

Sharing Your Images: Docker Hub

Once you've built an image, you can share it with the world (or your team).

Step 1: Create Docker Hub Account

Visit hub.docker.com and create a free account.

Step 2: Login from Terminal

docker login

Enter your credentials.

Step 3: Tag Your Image

Images need proper naming for pushing:

docker tag myapp YOUR-USERNAME/myapp:v1.0

Tag format: username/repository:tag

Step 4: Push to Docker Hub

docker push YOUR-USERNAME/myapp:v1.0

Step 5: Pull From Anywhere

Now anyone (or you on another machine) can run:

docker pull YOUR-USERNAME/myapp:v1.0
docker run -p 3001:3000 YOUR-USERNAME/myapp:v1.0

This is how teams share images and how CI/CD pipelines deploy applications.
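
As a rough sketch of what a CI pipeline's deploy step often boils down to (the tag here is a placeholder, commonly the commit SHA, not anything prescribed by Docker):

# Build, tag with something traceable, and push to the registry
docker build -t myapp .
docker tag myapp YOUR-USERNAME/myapp:abc1234
docker push YOUR-USERNAME/myapp:abc1234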

Understanding Docker Layers

Docker images are built in layers—understanding this is crucial for optimization.

View image layers:

docker history myapp

Each Dockerfile instruction creates a layer. Layers are:

  • Cached: unchanged layers are reused on the next build
  • Stacked: each layer builds on top of the ones before it
  • Invalidated in order: once one layer changes, every layer after it has to be rebuilt

Optimization strategy:

  1. Put rarely-changing instructions first (FROM, WORKDIR)
  2. Copy dependency files before source code
  3. Install dependencies before copying code
  4. Copy source code last (changes most frequently)

This is why we COPY package*.json before COPY . . in our Dockerfile.
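
You can watch the cache do its job with a small experiment (assuming you've already built myapp once): change only the application code and rebuild. The dependency layers are reused, and only COPY . . and the steps after it run again.

# Touch only application code, not package.json
echo "// trivial change" >> server.js

# Rebuild: the FROM, WORKDIR, COPY package*.json and RUN npm install steps come from cache
docker build -t myapp .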

Key Concepts Learned

Images vs Containers

Image: Blueprint for containers

Container: Running instance of an image

Analogy: Image is a class, container is an instance.
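
The analogy is easy to demonstrate: one image, many independent containers (the names and host ports below are arbitrary):

# Three containers ("instances") from the same image ("class"), each on its own host port
docker run -d --name web1 -p 3001:3000 myapp
docker run -d --name web2 -p 3002:3000 myapp
docker run -d --name web3 -p 3003:3000 myapp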

Container Lifecycle

# Create and start
docker run image-name

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop container
docker stop container-id

# Remove container
docker rm container-id

# Remove image
docker rmi image-name

Common Docker Commands Reference

# Images
docker images                    # List images
docker pull image:tag           # Download image
docker rmi image:tag            # Remove image
docker build -t name .          # Build image

# Containers
docker ps                        # List running containers
docker ps -a                     # List all containers
docker run image                 # Create and start container
docker start container-id        # Start stopped container
docker stop container-id         # Stop running container
docker rm container-id           # Remove container
docker exec -it container bash   # Enter container

# Logs and debugging
docker logs container-id         # View logs
docker logs -f container-id      # Follow logs
docker inspect container-id      # Detailed info
docker stats                     # Resource usage

What I Learned

Building and running containers taught me:

  1. Isolation: How namespaces and cgroups provide security and resource control
  2. Portability: Why "it works on my machine" is no longer an excuse
  3. Efficiency: How containers achieve VM-like isolation with minimal overhead
  4. Layer caching: Why Dockerfile instruction order matters
  5. Declarative infrastructure: Defining environments as code

These concepts form the foundation for everything else in modern DevOps—from CI/CD to Kubernetes.

What's Next?

Future topics I'm exploring:

  • Using containers in CI/CD pipelines
  • Kubernetes and container orchestration

Each of these builds on the Docker fundamentals covered here.

Conclusion

Docker revolutionized how we build, ship, and run applications. By packaging software with its dependencies into lightweight, portable containers, Docker solved the "works on my machine" problem and enabled the cloud-native revolution.

Understanding containers isn't just about learning a tool—it's about understanding the foundation of modern infrastructure. Whether you're deploying microservices, building CI/CD pipelines, or learning Kubernetes, Docker is where it all begins.

Every project I build now starts with a Dockerfile. You can see my work on my GitHub, where I document my DevOps learning journey.

Resources