Introduction to Docker
Docker is a containerization platform that packages applications and their dependencies into lightweight, portable containers that run consistently across different environments. By creating isolated environments, Docker eliminates “it works on my machine” issues and provides consistency from development to production.
Why Use Docker
Docker provides several key advantages for modern application development and deployment:
- Consistency: Eliminates “it works on my machine” issues by packaging dependencies and environment with the app
- Isolation: Each container runs independently, avoiding conflicts between applications
- Portability: Containers can run on any system with Docker installed, including cloud, local, and CI/CD environments
- Efficiency: Containers are lightweight compared to virtual machines, sharing the host OS kernel
Docker Architecture
The Docker ecosystem consists of several core components that work together to provide containerization capabilities:
- Docker Engine: Core component that creates and manages containers
- Images: Read-only templates with instructions for creating containers
- Containers: Running instances of images
- Docker Hub: Public registry for sharing and downloading images
- Dockerfile: Script with instructions to build a Docker image
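As a rough sketch of how these components interact (the image name flymate-demo and the Docker Hub account username are placeholders):
# Docker Engine builds an image from the Dockerfile in the current directory
docker build -t flymate-demo .
# A container is a running instance of that image
docker run --rm -p 3000:3000 flymate-demo
# Docker Hub distributes the image to other machines
docker tag flymate-demo username/flymate-demo:latest
docker push username/flymate-demo:latest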
Creating and Optimizing Dockerfiles
Basic Dockerfile Structure
A well-structured Dockerfile follows best practices for layer optimization and caching efficiency:
FROM node:18-alpine
WORKDIR /app
# Copy package files first to leverage caching
COPY package*.json ./
RUN npm ci --only=production
# Copy remaining source code separately
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Understanding Docker Layers
Docker images are layer-based: instructions in a Dockerfile build up a stack of layers, and this structure allows Docker to cache unchanged layers efficiently. Strictly speaking, only the RUN, COPY, and ADD instructions create layers that add to the image size; other instructions produce temporary intermediate images that do not.
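To see these layers for yourself, you can inspect an image's history (a quick check, assuming the node:18-alpine base image used above has already been pulled):
# List an image's layers with the instruction that created each one and its size
docker history node:18-alpine
# Show full image metadata, including the layer digests
docker image inspect node:18-alpine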
Layer Optimization Strategies
Minimize the Number of Layers
Consolidate multiple RUN commands into a single instruction where possible:
# Instead of multiple RUN commands
RUN apt-get update -y
RUN apt-get upgrade -y
RUN apt-get install vim -y
RUN apt-get install net-tools -y
# Use a single RUN command
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get install -y vim net-tools dnsutils && \
rm -rf /var/lib/apt/lists/*
Optimize Build Context and Cache Efficiency
Structure your Dockerfile to separate dependencies from source code: dependency installation happens before the source is copied, so the dependency layers stay cached unless the package files themselves change.
Multi-Stage Builds
Multi-stage builds reduce the size of your final image by creating a cleaner separation between building and runtime environments. Use multiple FROM statements within a single Dockerfile to construct separate build stages:
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
# Production stage
FROM node:18-alpine AS production
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Multi-stage builds provide reduced image size, enhanced security by minimizing the attack surface, and better caching.
Building and Running Docker Images
Basic Commands
Build and run Docker images using these essential commands:
# Build the image
docker build -t flymate-flights-service .
# Run the built image
docker run -it --init -p 3000:3000 -v "$(pwd)":/app flymate-flights-service:latest
Docker Volumes
Purpose and Implementation
Docker volumes persist data beyond the container lifecycle, which is essential for databases and directories such as node_modules. Volumes are managed by Docker and survive container restarts and removal:
# Create a volume
docker volume create flymate-flights-service-node-modules
# Run with volume
docker run -it --init -p 3000:3000 \
-v "$(pwd)":/app \
-v flymate-flights-service-node-modules:/app/node_modules \
flymate-flights-service:latest
Bind Mounts vs Volumes
Bind mounts map host directories into containers, while volumes are created and managed by Docker and are the preferred choice for data persistence. Volumes generally perform better (particularly on Docker Desktop), are easier to back up and migrate, and do not expose arbitrary host paths to the container.
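A short sketch contrasting the two (the paths and names are illustrative; the bind-mounted host directory must already exist):
# Bind mount: the host directory is mapped into the container as-is
docker run --rm --mount type=bind,source="$(pwd)"/data,target=/app/data node:18-alpine ls /app/data
# Named volume: Docker creates and manages the storage itself
docker volume create app-data
docker run --rm --mount type=volume,source=app-data,target=/app/data node:18-alpine ls /app/data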
Docker Networks
Network Types
Docker provides several network drivers for different use cases.
Bridge Network
Bridge networking is the default and most common driver. It creates an isolated network for containers on the same Docker host:
# Create a custom bridge network
docker network create flymate
# Run container with custom network
docker run --network flymate --name flymate-flights-service -p 3000:3000 flymate-flights-service:latest
Host Network
The host network lets containers use the host machine's network stack directly; the container shares the host's IP address and networking resources:
docker run --network host flymate-flights-service
None Network
The none network completely isolates containers from network access. Containers have no network interfaces and cannot communicate with other containers or external systems:
docker run --network none flymate-flights-service
Overlay Network
Overlay networks are used in multi-host setups such as Docker Swarm, allowing containers on different hosts to communicate securely:
docker network create --driver overlay flymate-overlay
Inter-Container Communication
Containers cannot reach each other by name unless they share a user-defined Docker network; on such a network, container names resolve as hostnames. Update environment variables to use container names instead of localhost:
# Instead of localhost
FLIGHT_SERVICE=http://localhost:3000
# Use container name
FLIGHT_SERVICE=http://flymate-flights-service:3000
Network Management Commands
# List networks
docker network ls
# Inspect network
docker network inspect flymate
# Create network
docker network create flymate
# Connect container to network
docker network connect flymate container_name
# Remove unused networks
docker network prune
Environment Variables and Build Arguments
ARG vs ENV
ARG (Build-time Variables)
ARG variables are available only during the docker build process and are not present in the final running container:
ARG NODE_ENV=production
ARG APP_VERSION=1.0.0
RUN echo "Building for environment: $NODE_ENV"
ENV (Runtime Variables)
ENV variables are available both during the build and in running containers:
ENV NODE_ENV=production
ENV PORT=3000
Combining ARG and ENV
Use ARG to make ENV variables configurable at build time:
ARG NODE_ENV=production
ENV NODE_ENV=$NODE_ENV
ARG PORT=3000
ENV PORT=$PORT
Passing Variables via Terminal
Build Arguments
# Pass build arguments
docker build --build-arg NODE_ENV=development --build-arg PORT=4000 -t flymate-api .
# With docker-compose
version: '3'
services:
  app:
    build:
      context: .
      args:
        NODE_ENV: development
        PORT: 4000
Environment Variables at Runtime
# Pass environment variables at runtime
docker run -e NODE_ENV=production -e PORT=3000 flymate-api
# From environment file
docker run --env-file .env flymate-api
# With docker-compose
version: '3'
services:
  app:
    environment:
      - NODE_ENV=production
      - PORT=3000
    # or
    env_file:
      - .env
Practical Example
FROM node:18-alpine
WORKDIR /app
# Build arguments
ARG NODE_ENV=production
ARG PORT=3000
# Environment variables
ENV NODE_ENV=$NODE_ENV
ENV PORT=$PORT
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE $PORT
CMD ["npm", "start"]
Docker Compose
Purpose and Benefits
Docker Compose orchestrates multi-container applications with a single file, supporting networking, volumes, environment variables, and build contexts. It simplifies the management of complex applications with multiple interconnected services.
Complete Configuration Example
version: "3"
networks:
  flymate:
    driver: bridge
volumes:
  flymate-api-gateway:
  flymate-flights-service:
  flymate-booking-service:
services:
  api-gateway:
    build:
      context: ../FlyMate-API-Gateway
      args:
        NODE_ENV: ${NODE_ENV:-development}
        PORT: 5000
    networks:
      - flymate
    ports:
      - "5000:5000"
    volumes:
      - ../FlyMate-API-Gateway:/app
      - flymate-api-gateway:/app/node_modules
    environment:
      - NODE_ENV=${NODE_ENV:-development}
      - PORT=5000
      - FLIGHT_SERVICE=http://flights-service:3000
      - BOOKING_SERVICE=http://booking-service:4000
  booking-service:
    build:
      context: ../FlyMate-Booking-Service
      args:
        NODE_ENV: ${NODE_ENV:-development}
    networks:
      - flymate
    ports:
      - "4000:4000"
    volumes:
      - ../FlyMate-Booking-Service:/app
      - flymate-booking-service:/app/node_modules
    environment:
      - NODE_ENV=${NODE_ENV:-development}
      - PORT=4000
      - FLIGHT_SERVICE=http://flights-service:3000
  flights-service:
    build:
      context: ../FlyMate-Flights-Service
      args:
        NODE_ENV: ${NODE_ENV:-development}
    networks:
      - flymate
    ports:
      - "3000:3000"
    volumes:
      - ../FlyMate-Flights-Service:/app
      - flymate-flights-service:/app/node_modules
    environment:
      - NODE_ENV=${NODE_ENV:-development}
      - PORT=3000
      - BOOKING_SERVICE=http://booking-service:4000
Docker Compose Commands
# Build and run
docker compose build
docker compose build --no-cache # (optional, for clean build)
docker compose up -d
# Stop and remove
docker compose down
# View logs
docker compose logs -f
Image Management and Distribution
Docker Hub Integration
Push and pull images from Docker Hub for distribution and deployment:
# Tag your image
docker tag flymate-flights-service username/flymate-flights-service:latest
# Push to Docker Hub
docker push username/flymate-flights-service:latest
# Pull from Docker Hub
docker pull username/flymate-flights-service:latest
Image Layers and Caching
Each Dockerfile instruction creates a new image layer. Layers are cached, so unchanged layers speed up rebuilds. Changing an early layer (e.g., COPY package.json) invalidates all subsequent layers, so order your Dockerfile for maximum cache efficiency.
Kubernetes Integration
Prerequisites and Setup
Install prerequisites: minikube, kubectl, and kompose to convert Docker Compose to Kubernetes YAML:
# Start Minikube
minikube start
# Check Minikube status in Docker Desktop
Kubernetes-Ready Docker Compose
Update Docker Compose for Kubernetes deployment:
version: "3"
networks:
  flymate:
    driver: bridge
services:
  api-gateway:
    image: username/flymate-api-gateway
    networks:
      - flymate
    ports:
      - "5000:5000"
    labels:
      kompose.service.type: loadBalancer
    environment:
      - NODE_ENV=development
      - PORT=5000
      - FLIGHT_SERVICE=http://flights-service:3000
      - BOOKING_SERVICE=http://booking-service:4000
  booking-service:
    image: username/flymate-booking-service
    networks:
      - flymate
    ports:
      - "4000:4000"
    labels:
      kompose.service.type: loadBalancer
    environment:
      - NODE_ENV=development
      - PORT=4000
      - FLIGHT_SERVICE=http://flights-service:3000
  flights-service:
    image: username/flymate-flights-service
    networks:
      - flymate
    ports:
      - "3000:3000"
    labels:
      kompose.service.type: loadBalancer
    environment:
      - NODE_ENV=development
      - PORT=3000
      - BOOKING_SERVICE=http://booking-service:4000
Key changes for Kubernetes:
- Remove volumes (Kubernetes is for deployment, not local dev)
- Replace local build with images from Docker Hub
- Add labels for load balancers
- Use hyphens (-) in service names, not underscores
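With those changes in place, a typical conversion and deployment flow looks roughly like this (the compose file name and the api-gateway service name are assumptions; adjust them to your project):
# Convert the Compose file into Kubernetes manifests (Deployments and Services)
kompose convert -f docker-compose.yml
# Apply the generated manifests to the Minikube cluster
kubectl apply -f .
# Confirm the pods are running
kubectl get pods
# Open the load-balanced gateway via Minikube
minikube service api-gateway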
Advanced Docker Concepts
Container Isolation and Namespaces
Docker achieves container isolation through Linux kernel features such as namespaces, cgroups, and seccomp. Namespaces give each container an isolated workspace, including separate process ID, network, user, and IPC namespaces.
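Two quick demonstrations of this isolation, assuming the alpine image is available locally or can be pulled:
# The container's own PID namespace: its first process is PID 1
docker run --rm alpine ps
# The container's own UTS namespace: its hostname differs from the host's
docker run --rm alpine hostname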
Docker Storage Drivers
Configure Docker to use a specific storage driver by editing the daemon configuration (typically /etc/docker/daemon.json) and restarting the Docker daemon:
{
"storage-driver": "overlay2"
}
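After editing the configuration and restarting the daemon, you can confirm which driver is actually in use:
# Show the storage driver reported by the Docker daemon
docker info | grep "Storage Driver"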
Docker Content Trust
Docker Content Trust ensures that only signed images are pulled or run. It uses digital signatures to verify publisher identity and data integrity.
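A minimal sketch of enabling it for a single shell session (the image name is a placeholder; unsigned images will be rejected):
# Require signed images for pull, push, and run in this shell session
export DOCKER_CONTENT_TRUST=1
# This pull succeeds only if the image has a valid signature
docker pull username/flymate-flights-service:latest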
Security Best Practices
Container Security
Implement comprehensive security measures for production deployments:
- Use minimal base images like Alpine
- Avoid running containers as root
- Regularly scan images for vulnerabilities
- Implement network policies and isolate containers
- Use Docker Content Trust for image verification
- Store secrets securely using Docker Secrets or external tools like HashiCorp Vault
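A few of these measures sketched as runtime flags, using the image built earlier; the vulnerability scan uses Trivy purely as an example and assumes it is installed separately:
# Run as a non-root user, with a read-only filesystem and all Linux capabilities dropped
# (add --tmpfs /tmp if the application needs a writable temp directory)
docker run --rm --user 1000:1000 --read-only --cap-drop ALL -p 3000:3000 flymate-flights-service:latest
# Scan the image for known vulnerabilities
trivy image flymate-flights-service:latest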
Secret Management
Use Docker Secrets for secure secret management in Swarm mode; for Kubernetes, use Kubernetes Secrets. Avoid storing sensitive information in environment variables or image layers. Integrate with external secret management tools such as HashiCorp Vault or AWS Secrets Manager.
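A minimal sketch of Docker Secrets in Swarm mode (the secret value, secret name, and service name are illustrative):
# Secrets require Swarm mode
docker swarm init
# Create a secret from stdin
printf "s3cr3t-password" | docker secret create db_password -
# The secret is mounted read-only into the service at /run/secrets/db_password
docker service create --name flights --secret db_password username/flymate-flights-service:latest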
Debugging and Troubleshooting
Container Debugging Process
Follow a systematic approach when debugging failed containers:
- Check container logs using docker logs <container_name>
- Inspect container state with docker inspect <container_name>
- Monitor resource usage with docker stats
- Verify resource limits and dependencies
- Check network connectivity and port mappings
- Examine the Dockerfile for misconfigurations
Common Debugging Commands
# View container logs
docker logs flymate-flights-service
# Inspect container
docker inspect flymate-flights-service
# Monitor resource usage
docker stats
# Execute commands in running container
docker exec -it flymate-flights-service /bin/sh
# Check network connectivity
docker network ls
docker network inspect flymate
Performance Optimization
Container Performance
Optimize container performance through several strategies:
- Set appropriate memory and CPU limits
- Use efficient base images
- Implement proper logging strategies
- Use volume mounts for persistent data
- Optimize network configuration
- Implement container resource constraints
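The first item above, memory and CPU limits, is applied when the container is started rather than in the Dockerfile; a rough example using the image built earlier:
# Cap the container at 512 MB of memory and one CPU core
docker run --rm --memory=512m --cpus=1.0 -p 3000:3000 flymate-flights-service:latest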
Resource Management
# Dockerfile with a health check (memory and CPU limits are set at run time, not in the Dockerfile)
FROM node:18-alpine
WORKDIR /app
# Health check using BusyBox wget, since curl is not included in Alpine-based images
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -q --spider http://localhost:3000/health || exit 1
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Dockerizing Next.js Applications
Key Differences from Regular Node.js Apps
Next.js requires specific Docker configuration due to its unique build process and server requirements. The main differences include:
- Standalone Output: Next.js can generate a standalone folder containing only the necessary files, eliminating the need to ship all of node_modules in production
- Static File Handling: Next.js creates .next/static and public directories that need special handling in Docker containers
- Server File: when using standalone output, Next.js generates its own server.js, replacing the need for next start
Essential Configuration
First, update your next.config.js to enable standalone output:
/** @type {import('next').NextConfig} */
const nextConfig = {
output: 'standalone'
}
module.exports = nextConfig
Docker Setup
Create .dockerignore
node_modules
.next
.git
npm-debug.log*
README.md
Multi-stage Dockerfile
# Dependencies
FROM node:18-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# Production
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]
Build and Run Commands
# Build the image
docker build -t flymate-app .
# Run the container
docker run -p 3000:3000 flymate-app
Health Checks
Implement health checks in your applications and containers. Note that the image must contain the tool used by the check; Alpine-based images do not ship curl, so either install it (apk add --no-cache curl) or use the built-in BusyBox wget:
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1
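Health checks can also be attached and inspected at run time without changing the Dockerfile; a sketch using the Next.js image built above (the container name and the /health endpoint are assumptions):
# Attach a health check when starting the container (flags mirror the HEALTHCHECK options)
docker run -d --name flymate-app \
  --health-cmd="wget -q --spider http://localhost:3000/health || exit 1" \
  --health-interval=30s --health-timeout=3s --health-retries=3 \
  -p 3000:3000 flymate-app
# Report the current health status: starting, healthy, or unhealthy
docker inspect --format='{{.State.Health.Status}}' flymate-app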