Step 1 — Docker and Container Concept
Docker containers bundle all the dependencies and configurations required to run an application into a single package. This allows you to:
- Eliminate the “It works on my machine” problem.
- Run the same image consistently across different environments.
- Quickly start and stop applications.
Containers are built upon Linux namespaces and cgroups. Namespaces provide isolation for network, process, and user environments. Cgroups allow for limiting resources such as CPU, RAM, and disk.
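A quick way to see this isolation in practice is to run a few throwaway containers (a small sketch; exact output varies by host, and the last command assumes a cgroup v2 system):
docker run --rm alpine hostname                                  # UTS namespace: the container has its own hostname
docker run --rm alpine ps                                        # PID namespace: only the container's own processes are visible
docker run --rm -m 128m alpine cat /sys/fs/cgroup/memory.max    # cgroup: the 128 MB memory limit shows up inside the container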
Step 2 — Installing Docker
On Ubuntu/Debian:
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Verify installation:
docker --version
docker compose version
Ensure Docker service is running:
sudo systemctl status docker
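Optionally, to run docker commands without prefixing them with sudo, add your user to the docker group (this is the standard post-install step; you must log out and back in for it to take effect):
sudo usermod -aG docker $USER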
Step 3 — Managing Docker Images
Docker images contain the filesystem and dependencies needed to run containers.
docker pull nginx:latest
docker images
docker rmi nginx:latest
docker tag nginx:latest myrepo/nginx:v1
docker save -o nginx.tar nginx:latest
docker load -i nginx.tar
Images are built in layers, and Docker caches unchanged layers, which saves disk space and speeds up rebuilds. As a best practice, group Dockerfile instructions logically, put rarely changing steps before frequently changing ones, and remove unnecessary packages.
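For example, copying the dependency manifest before the rest of the source lets Docker reuse the cached dependency layer when only application code changes (a sketch, assuming a Python project with a requirements.txt):
FROM python:3.11-slim
WORKDIR /app
# The manifest changes rarely, so this layer and the install layer stay cached
COPY requirements.txt .
RUN pip install -r requirements.txt
# Application code changes often; only the layers from here onward are rebuilt
COPY . .
CMD ["python", "main.py"]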
Step 4 — Managing Docker Containers
Once you have images, you can run containers from them. Some essential container commands include:
docker run -d --name web nginx
docker ps -a
docker stop web
docker start web
docker restart web
docker rm web
docker exec -it web bash
docker logs web
Explanation:
- run: Creates and starts a new container from an image.
- -d: Runs the container in the background (detached mode).
- --name: Assigns a custom name to the container.
- exec: Runs a command inside a running container; with -it bash you get an interactive shell.
- logs: Shows the container's output (stdout/stderr).
You can also limit resources when running containers:
docker run -d --name limited --memory=256m --cpus=0.5 nginx
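To check that the limits were actually applied, you can look at live usage or the container's configuration (a quick sketch; Memory is reported in bytes and CPUs as NanoCpus):
docker stats --no-stream limited
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited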
Step 5 — Volumes and Data Management
Volumes allow you to persist data beyond the lifecycle of a container.
docker volume create myvol
docker run -d -v myvol:/data nginx
docker volume ls
docker volume inspect myvol
docker volume rm myvol
Volumes are the recommended method for data persistence. They are stored under /var/lib/docker/volumes/. You can also bind-mount a host directory:
docker run -d -v /home/user/data:/data nginx
To back up a volume:
docker run --rm -v myvol:/data -v $(pwd):/backup busybox tar czf /backup/vol_backup.tar.gz /data
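Restoring works the same way in reverse (a sketch that assumes the archive was created with the command above, so the paths inside it start with data/):
docker run --rm -v myvol:/data -v $(pwd):/backup busybox tar xzf /backup/vol_backup.tar.gz -C /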
Step 6 — Network Management
Docker networks enable communication between containers.
docker network ls
docker network create mynet
docker run -d --name app1 --network=mynet nginx
docker run -d --name app2 --network=mynet nginx
docker exec -it app1 ping app2
The default network driver is bridge; you can also use the host and none drivers. Custom user-defined networks provide automatic DNS resolution between containers, which is why app1 can reach app2 by name in the example above.
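You can also attach an already running container to a network, and inspect which containers are connected (app3 here is a hypothetical existing container):
docker network connect mynet app3
docker network inspect mynet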
Step 7 — Docker Compose
Docker Compose allows you to define and run multi-container applications with a YAML file.
docker-compose.yml example:
version: '3.9'
services:
  web:
    image: nginx
    ports:
      - "80:80"
Start the app:
docker compose up -d
docker compose ps
docker compose logs
docker compose down
You can also use build: instead of image: to build the service from a local Dockerfile, as in the sketch below.
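A minimal sketch of such a service (the context and Dockerfile paths are illustrative):
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"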
Step 8 — Using Docker Registry
Registries allow storing and sharing Docker images. Docker Hub is the default registry, but you can run your own.
Login to Docker Hub:
docker login
Tag and push image:
docker tag nginx:latest myuser/nginx:v1
docker push myuser/nginx:v1
Run your own registry (localhost):
docker run -d -p 5000:5000 --name registry registry:2
Tag and push to your local registry:
docker tag nginx:latest localhost:5000/nginx
docker push localhost:5000/nginx
To pull from it:
docker pull localhost:5000/nginx
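The registry exposes an HTTP API, so you can list the repositories it currently holds:
curl http://localhost:5000/v2/_catalog
This returns a JSON object such as {"repositories":["nginx"]}.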
Step 9 — Dockerfile & Multi-stage Build
A Dockerfile defines how a Docker image is built.
Basic Dockerfile:
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
Build image:
docker build -t myapp:1.0 .
Multi-stage build example:
# Builder stage
FROM node:18 AS builder
WORKDIR /build
COPY . .
RUN npm install && npm run build
# Runtime stage
FROM nginx:alpine
COPY --from=builder /build/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
This keeps build-time tooling (here, the Node toolchain) out of the final image, resulting in a much smaller image and a reduced attack surface.
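To see the difference yourself, you can build both the final image and just the builder stage and compare their sizes (the tags here are illustrative):
docker build -t myapp:prod .
docker build --target builder -t myapp:build .
docker images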
Step 10 — Orchestration (Swarm vs K8s)
Orchestration tools manage multiple containers and hosts. The most popular options are Docker Swarm and Kubernetes.
Docker Swarm:
docker swarm init
docker service create --replicas 3 --name web nginx
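A few follow-up commands for inspecting and rescaling the service:
docker service ls
docker service ps web
docker service scale web=5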
Kubernetes:
kubectl apply -f deployment.yaml
kubectl get pods
kubectl scale deployment web --replicas=5
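The deployment.yaml referenced above could look like this minimal sketch (the name, image, and replica count are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80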
Swarm is simpler but Kubernetes is more powerful and flexible.
Step 11 — Security Best Practices
- Avoid running as root in containers
- Use minimal base images (e.g., alpine)
- Set resource limits (memory, CPU)
- Regularly scan images (e.g., Trivy)
- Keep Docker engine up-to-date
- Avoid hardcoding secrets in Dockerfiles
Image scanning example with Trivy:
trivy image myapp:1.0
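As an example of the non-root recommendation above, a Dockerfile can create and switch to an unprivileged user (a sketch; the user name is illustrative):
FROM python:3.11-slim
RUN useradd --create-home appuser
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
# Drop root privileges for the running process
USER appuser
CMD ["python", "main.py"]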
Step 12 — Monitoring & Logging
Use built-in Docker commands or integrate with tools:
Container logs:
docker logs myapp
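Other built-in inspection commands worth knowing:
docker stats            # live CPU, memory, and network usage per container
docker events           # stream of engine-level events (start, stop, die, ...)
docker inspect myapp    # full container configuration and state as JSON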
Monitoring tools:
- Prometheus
- Grafana
- cAdvisor
- ELK / Loki
Example Prometheus + Grafana:
- Use node-exporter and cadvisor as exporters
- Visualize metrics in Grafana dashboards
Step 13 — Real World Example: Flask App
Directory structure:
app/
├── main.py
├── requirements.txt
└── Dockerfile
main.py:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Docker!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the app is reachable through the published container port
    app.run(host="0.0.0.0", port=5000)
requirements.txt:
flask
Dockerfile:
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
Build & Run:
docker build -t flask-app .
docker run -d -p 5000:5000 flask-app
Visit http://localhost:5000 to test it.
Conclusion: In this guide, you learned how to work with Docker and container technologies step by step. From image and container management to networking, volumes, multi-container orchestration, and security — you are now equipped with the skills to dockerize your own applications and deploy them in real-world environments.
Next Steps:
- Learn Kubernetes basics
- Explore Helm charts
- Implement CI/CD pipelines using Jenkins or GitHub Actions
- Monitor your infrastructure with Prometheus and Grafana