Docker Course #10: Final Project — Full-Stack App with Docker

Welcome to the Docker Course - Part 10 of 10. This is the final article of the course! In this project, you will build a complete full-stack application with Docker: a React frontend served by Nginx, a Node.js/Express API, a PostgreSQL database, and Redis for caching.

This project brings together everything you have learned throughout the course: Dockerfiles, multi-stage builds, Docker Compose, volumes, networks, health checks, environment variables, and security best practices. By the end, you will have a production-ready Docker setup that you can adapt for your own projects.
Project Architecture
Our full-stack application consists of four services:
| Service | Technology | Port | Purpose |
|---|---|---|---|
| Frontend | React + Nginx | 80 | User interface (static files served by Nginx) |
| API | Node.js + Express | 3000 | REST API backend |
| Database | PostgreSQL 16 | 5432 | Persistent data storage |
| Cache | Redis 7 | 6379 | Response caching and session storage |
The project directory structure looks like this:
```text
fullstack-docker-app/
├── frontend/
│   ├── public/
│   ├── src/
│   │   ├── App.jsx
│   │   ├── index.js
│   │   └── api.js
│   ├── nginx.conf
│   ├── Dockerfile
│   └── package.json
├── api/
│   ├── src/
│   │   ├── index.js
│   │   ├── routes/
│   │   │   └── tasks.js
│   │   ├── middleware/
│   │   │   └── cache.js
│   │   └── db.js
│   ├── Dockerfile
│   └── package.json
├── database/
│   └── init.sql
├── docker-compose.yml
├── docker-compose.dev.yml
├── docker-compose.prod.yml
├── .env
├── .env.example
├── .dockerignore
└── Makefile
```
The API Service: Node.js + Express
Let us start with the API. This is a simple task management REST API that uses PostgreSQL for storage and Redis for caching.
API package.json
```json
{
  "name": "fullstack-api",
  "version": "1.0.0",
  "main": "src/index.js",
  "scripts": {
    "start": "node src/index.js",
    "dev": "nodemon src/index.js",
    "test": "jest --coverage"
  },
  "dependencies": {
    "express": "^4.18.2",
    "pg": "^8.12.0",
    "redis": "^4.6.13",
    "cors": "^2.8.5",
    "helmet": "^7.1.0",
    "morgan": "^1.10.0"
  },
  "devDependencies": {
    "nodemon": "^3.1.0",
    "jest": "^29.7.0"
  }
}
```
Database connection (api/src/db.js)
```javascript
const { Pool } = require('pg');

const pool = new Pool({
  host: process.env.DB_HOST || 'database',
  port: parseInt(process.env.DB_PORT || '5432'),
  database: process.env.DB_NAME || 'fullstack_app',
  user: process.env.DB_USER || 'appuser',
  password: process.env.DB_PASSWORD || 'apppass',
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
});

pool.on('error', (err) => {
  console.error('Unexpected database error:', err);
});

module.exports = pool;
```
Cache middleware (api/src/middleware/cache.js)
```javascript
const { createClient } = require('redis');

const redisClient = createClient({
  url: process.env.REDIS_URL || 'redis://cache:6379',
});

redisClient.on('error', (err) => console.error('Redis error:', err));
redisClient.on('connect', () => console.log('Connected to Redis'));

// Connect to Redis (log instead of crashing on an unhandled rejection)
(async () => {
  await redisClient.connect();
})().catch((err) => console.error('Redis connection failed:', err));

// Cache middleware factory
function cacheMiddleware(keyPrefix, ttlSeconds = 60) {
  return async (req, res, next) => {
    const cacheKey = keyPrefix + ':' + req.originalUrl;

    try {
      const cached = await redisClient.get(cacheKey);
      if (cached) {
        console.log('Cache HIT:', cacheKey);
        return res.json(JSON.parse(cached));
      }
      console.log('Cache MISS:', cacheKey);

      // Override res.json to cache the response
      const originalJson = res.json.bind(res);
      res.json = (data) => {
        redisClient
          .setEx(cacheKey, ttlSeconds, JSON.stringify(data))
          .catch((err) => console.error('Cache write error:', err));
        return originalJson(data);
      };

      next();
    } catch (err) {
      console.error('Cache error:', err);
      next(); // Continue without cache on error
    }
  };
}

// Invalidate cache by pattern
async function invalidateCache(pattern) {
  try {
    const keys = await redisClient.keys(pattern);
    if (keys.length > 0) {
      await redisClient.del(keys);
      console.log('Cache invalidated:', keys.length, 'keys');
    }
  } catch (err) {
    console.error('Cache invalidation error:', err);
  }
}

module.exports = { redisClient, cacheMiddleware, invalidateCache };
```
Task routes (api/src/routes/tasks.js)
```javascript
const express = require('express');
const pool = require('../db');
const { cacheMiddleware, invalidateCache } = require('../middleware/cache');

const router = express.Router();

// GET /api/tasks - List all tasks (cached for 30 seconds)
router.get('/', cacheMiddleware('tasks', 30), async (req, res) => {
  try {
    const result = await pool.query(
      'SELECT * FROM tasks ORDER BY created_at DESC'
    );
    res.json({ data: result.rows, total: result.rowCount });
  } catch (err) {
    console.error('Error fetching tasks:', err);
    res.status(500).json({ error: 'Failed to fetch tasks' });
  }
});

// POST /api/tasks - Create a new task
router.post('/', async (req, res) => {
  const { title, description } = req.body;

  if (!title || !title.trim()) {
    return res.status(400).json({ error: 'Title is required' });
  }

  try {
    const result = await pool.query(
      'INSERT INTO tasks (title, description) VALUES ($1, $2) RETURNING *',
      [title.trim(), (description || '').trim()]
    );
    await invalidateCache('tasks:*');
    res.status(201).json({ data: result.rows[0] });
  } catch (err) {
    console.error('Error creating task:', err);
    res.status(500).json({ error: 'Failed to create task' });
  }
});

// PATCH /api/tasks/:id/complete - Mark a task as completed
router.patch('/:id/complete', async (req, res) => {
  try {
    const result = await pool.query(
      'UPDATE tasks SET completed = true, updated_at = NOW() WHERE id = $1 RETURNING *',
      [req.params.id]
    );
    if (result.rowCount === 0) {
      return res.status(404).json({ error: 'Task not found' });
    }
    await invalidateCache('tasks:*');
    res.json({ data: result.rows[0] });
  } catch (err) {
    console.error('Error completing task:', err);
    res.status(500).json({ error: 'Failed to complete task' });
  }
});

// DELETE /api/tasks/:id - Delete a task
router.delete('/:id', async (req, res) => {
  try {
    const result = await pool.query(
      'DELETE FROM tasks WHERE id = $1 RETURNING *',
      [req.params.id]
    );
    if (result.rowCount === 0) {
      return res.status(404).json({ error: 'Task not found' });
    }
    await invalidateCache('tasks:*');
    res.json({ message: 'Task deleted', data: result.rows[0] });
  } catch (err) {
    console.error('Error deleting task:', err);
    res.status(500).json({ error: 'Failed to delete task' });
  }
});

module.exports = router;
```
Main server (api/src/index.js)
```javascript
const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
const morgan = require('morgan');
const taskRoutes = require('./routes/tasks');
const pool = require('./db');

const app = express();
const PORT = process.env.PORT || 3000;

// Middleware
app.use(helmet());
app.use(cors({ origin: process.env.CORS_ORIGIN || '*' }));
app.use(morgan('combined'));
app.use(express.json());

// Routes
app.use('/api/tasks', taskRoutes);

// Health check endpoint
app.get('/health', async (req, res) => {
  try {
    await pool.query('SELECT 1');
    res.json({
      status: 'healthy',
      timestamp: new Date().toISOString(),
      uptime: process.uptime(),
      service: 'api',
    });
  } catch (err) {
    res.status(503).json({ status: 'unhealthy', error: err.message });
  }
});

// Start server
app.listen(PORT, '0.0.0.0', () => {
  console.log('API server running on port ' + PORT);
  console.log('Environment: ' + (process.env.NODE_ENV || 'development'));
});
```
API Dockerfile (multi-stage)
```dockerfile
# api/Dockerfile

# Stage 1: Install production dependencies only
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Stage 2: Development
FROM node:20-alpine AS development
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]

# Stage 3: Production
FROM node:20-alpine AS production
WORKDIR /app

RUN addgroup -S appgroup && adduser -S appuser -G appgroup

COPY --from=deps --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --chown=appuser:appgroup src ./src
COPY --chown=appuser:appgroup package.json ./

USER appuser
EXPOSE 3000
CMD ["node", "src/index.js"]
```
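With a multi-stage Dockerfile like this you can build either stage directly with `--target`. A quick sketch (the image tags are illustrative, not part of the project):

```shell
# Build the hardened production image from the "production" stage
docker build --target production -t fullstack-api:latest ./api

# Build the development image (includes nodemon and devDependencies)
docker build --target development -t fullstack-api:dev ./api
```

Docker Compose does the same thing for you via the `build.target` key, as shown in the compose files later in this article.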
The Frontend Service: React + Nginx
The frontend is a React application that gets built into static files and served by Nginx. Nginx also acts as a reverse proxy, forwarding API requests to the backend.
Nginx configuration (frontend/nginx.conf)
```nginx
# frontend/nginx.conf
server {
    listen 80;
    server_name localhost;

    # Serve static files
    root /usr/share/nginx/html;
    index index.html;

    # SPA routing: send all non-file requests to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Reverse proxy API requests to the backend
    location /api/ {
        proxy_pass http://api:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }

    # Health check endpoint
    location /nginx-health {
        access_log off;
        add_header Content-Type text/plain;
        return 200 "healthy";
    }

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Gzip compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml;
    gzip_min_length 256;
}
```
Frontend Dockerfile (multi-stage)
```dockerfile
# frontend/Dockerfile

# Stage 1: Build React app
FROM node:20-alpine AS builder
WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build

# Stage 2: Serve with Nginx
FROM nginx:alpine AS production

# Remove default nginx config
RUN rm /etc/nginx/conf.d/default.conf

# Copy custom nginx config
COPY nginx.conf /etc/nginx/conf.d/default.conf

# Copy built React app
COPY --from=builder /app/build /usr/share/nginx/html

# Give the nginx user ownership of the paths it needs at runtime
RUN chown -R nginx:nginx /usr/share/nginx/html \
    && chown -R nginx:nginx /var/cache/nginx \
    && chown -R nginx:nginx /var/log/nginx \
    && touch /var/run/nginx.pid \
    && chown -R nginx:nginx /var/run/nginx.pid

USER nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
The Database: PostgreSQL with Init Script
PostgreSQL uses an init script that runs automatically when the container starts for the first time:
```sql
-- database/init.sql

-- Create the tasks table
CREATE TABLE IF NOT EXISTS tasks (
    id SERIAL PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    description TEXT DEFAULT '',
    completed BOOLEAN DEFAULT FALSE,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Create indexes for faster queries
CREATE INDEX IF NOT EXISTS idx_tasks_completed ON tasks(completed);
CREATE INDEX IF NOT EXISTS idx_tasks_created_at ON tasks(created_at DESC);

-- Insert sample data
INSERT INTO tasks (title, description) VALUES
    ('Set up Docker environment', 'Install Docker Desktop and verify it works'),
    ('Build the API', 'Create the Express.js REST API with PostgreSQL'),
    ('Build the frontend', 'Create the React app with task management UI'),
    ('Configure Docker Compose', 'Set up all services with health checks'),
    ('Deploy to production', 'Push images and deploy the full stack');

-- Display confirmation
SELECT 'Database initialized with ' || COUNT(*) || ' sample tasks' AS status FROM tasks;
```
The official PostgreSQL image automatically executes any `.sql` or `.sh` files placed in `/docker-entrypoint-initdb.d/` when the container starts for the first time (that is, when the data volume is empty). This makes it perfect for schema setup and seed data.
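Because the init script only runs against an empty data volume, simply restarting the container will not re-run it. A sketch of how to force a fresh initialization (note: `down -v` deletes all persisted data):

```shell
# Remove containers AND the named volumes, wiping the database
docker compose down -v

# Recreate the stack; the empty volume triggers the init scripts again
docker compose up -d

# Watch the database logs for the confirmation SELECT from init.sql
docker compose logs database
```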
Docker Compose: Bringing It All Together
Now let us create the Docker Compose file that ties all four services together. We will create three files: a base configuration, a development override, and a production override.
Environment variables (.env)
```bash
# .env
DB_HOST=database
DB_PORT=5432
DB_NAME=fullstack_app
DB_USER=appuser
DB_PASSWORD=secure_password_123
REDIS_URL=redis://cache:6379
NODE_ENV=production
CORS_ORIGIN=http://localhost
```
Base configuration (docker-compose.yml)
```yaml
# docker-compose.yml - Base configuration

services:
  # ─── Frontend (React + Nginx) ───────────────────
  frontend:
    build:
      context: ./frontend
      target: production
    restart: unless-stopped
    ports:
      - "80:80"
    depends_on:
      api:
        condition: service_healthy
    networks:
      - frontend-net
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost/nginx-health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

  # ─── API (Node.js + Express) ────────────────────
  api:
    build:
      context: ./api
      target: production
    restart: unless-stopped
    environment:
      NODE_ENV: ${NODE_ENV:-production}
      DB_HOST: ${DB_HOST}
      DB_PORT: ${DB_PORT}
      DB_NAME: ${DB_NAME}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
      REDIS_URL: ${REDIS_URL}
      CORS_ORIGIN: ${CORS_ORIGIN:-http://localhost}
      PORT: 3000
    depends_on:
      database:
        condition: service_healthy
      cache:
        condition: service_healthy
    networks:
      - frontend-net
      - backend-net
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
      interval: 15s
      timeout: 5s
      retries: 5
      start_period: 20s
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true

  # ─── Database (PostgreSQL) ──────────────────────
  database:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./database/init.sql:/docker-entrypoint-initdb.d/01-init.sql:ro
    networks:
      - backend-net
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETUID
      - SETGID
      - FOWNER
      - DAC_READ_SEARCH

  # ─── Cache (Redis) ──────────────────────────────
  cache:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru --appendonly yes
    volumes:
      - redis-data:/data
    networks:
      - backend-net
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 10s
    cap_drop:
      - ALL

volumes:
  postgres-data:
    name: fullstack-postgres-data
  redis-data:
    name: fullstack-redis-data

networks:
  frontend-net:
    name: fullstack-frontend
    driver: bridge
  backend-net:
    name: fullstack-backend
    driver: bridge
    internal: true
```
Note that the `backend-net` network has `internal: true`. This means the database and Redis are not reachable from outside Docker: only the API, which is attached to both networks, can talk to them, and the frontend can only reach the API through `frontend-net`.
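You can verify this isolation from the running stack. A quick sketch (the `ping` and DNS behavior assume the busybox tooling present in the Alpine-based images):

```shell
# The API sits on backend-net, so it can resolve and reach the database
docker compose exec api ping -c 1 database

# The frontend is only on frontend-net: the "database" hostname does not
# even resolve from there, so this is expected to fail with "bad address"
docker compose exec frontend ping -c 1 database

# And because backend-net is internal, no host port mapping could expose
# PostgreSQL or Redis even if you added one in this base file
```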
Development vs Production Configurations
In development, you want hot reloading, debug ports, and exposed database ports. In production, you want optimized builds, security hardening, and resource limits.
Development override (docker-compose.dev.yml)
```yaml
# docker-compose.dev.yml - Development overrides

services:
  frontend:
    build:
      context: ./frontend
      target: builder        # Use the build stage with source code
    ports:
      - "3001:3000"          # React dev server port
    volumes:
      - ./frontend/src:/app/src   # Hot reload
    command: npm start
    environment:
      - REACT_APP_API_URL=http://localhost:3000

  api:
    build:
      context: ./api
      target: development    # Use the development stage with nodemon
    ports:
      - "3000:3000"
      - "9229:9229"          # Node.js debugger port
    volumes:
      - ./api/src:/app/src   # Hot reload
    environment:
      NODE_ENV: development

  database:
    ports:
      - "5432:5432"          # Expose for local database tools

  cache:
    ports:
      - "6379:6379"          # Expose for local Redis tools

# The base file marks backend-net as internal, which would block the
# database and cache port mappings above. Disable that for development.
networks:
  backend-net:
    internal: false
```
Production override (docker-compose.prod.yml)
```yaml
# docker-compose.prod.yml - Production overrides

services:
  frontend:
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid,size=16m
      - /var/cache/nginx:rw,noexec,nosuid,size=64m
      - /var/run:rw,noexec,nosuid,size=1m
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 128M
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

  api:
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid,size=64m
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"

  database:
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid,size=64m
      - /run/postgresql:rw,noexec,nosuid,size=16m
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 1G
        reservations:
          cpus: "0.5"
          memory: 256M
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"

  cache:
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid,size=16m
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 256M
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "3"
```
Running in different environments
```shell
# Development (with hot reload, debug ports, exposed DB)
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d

# Production (with resource limits, read-only, security hardening)
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

# View all running services
docker compose ps

# View logs for a specific service
docker compose logs -f api

# Run database migrations or seed data
docker compose exec api node src/seed.js

# Access the database directly (development only)
docker compose exec database psql -U appuser -d fullstack_app

# Access the Redis CLI
docker compose exec cache redis-cli
```
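Once the stack is up, you can smoke-test the whole request path, including the Redis cache, from the host. A sketch, assuming the production setup with Nginx published on port 80:

```shell
# First request hits PostgreSQL; the API logs "Cache MISS: tasks:/api/tasks"
curl -s http://localhost/api/tasks

# A second request within the 30-second TTL is served from Redis
# (the API logs "Cache HIT" and never touches the database)
curl -s http://localhost/api/tasks

# Creating a task calls invalidateCache('tasks:*'), so the next GET
# is a MISS again and returns the new task
curl -s -X POST http://localhost/api/tasks \
  -H 'Content-Type: application/json' \
  -d '{"title": "Try the cache", "description": "Watch docker compose logs -f api"}'
```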
Health Checks and Service Dependencies
Our Docker Compose configuration includes a carefully designed startup order with health checks:
- PostgreSQL starts first and becomes healthy when it can accept connections
- Redis starts in parallel with PostgreSQL and becomes healthy when it responds to `PING`
- API waits for both PostgreSQL and Redis to be healthy, then starts and becomes healthy when its `/health` endpoint responds
- Frontend waits for the API to be healthy, then starts serving
```shell
# Watch the startup sequence
docker compose up -d && docker compose logs -f

# Expected startup order:
# [database] PostgreSQL init process complete; ready for start up.
# [cache]    Ready to accept connections
# [api]      Connected to Redis
# [api]      API server running on port 3000
# [frontend] nginx: the configuration file syntax is ok

# Check health status of all services
docker compose ps

# Expected output:
# NAME       SERVICE    STATUS              PORTS
# frontend   frontend   running (healthy)   0.0.0.0:80->80/tcp
# api        api        running (healthy)
# database   database   running (healthy)
# cache      cache      running (healthy)
```
If a service fails to become healthy, check its logs with `docker compose logs <service-name>`. Common issues include wrong environment variables, port conflicts, and health check failures during the start period.
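When the logs alone do not explain a failing health check, `docker inspect` shows the recorded probe results. A sketch of how to drill in:

```shell
# Resolve the api service's container ID, then dump its health state,
# including the output of the most recent health check probes
docker inspect --format '{{json .State.Health}}' "$(docker compose ps -q api)"

# Or just the current status: starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' "$(docker compose ps -q api)"
```

Piping the first command through `jq` (if installed) makes the JSON much easier to read.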
The .dockerignore and Makefile
Two supporting files that make your life easier:
.dockerignore
```text
# .dockerignore
node_modules
npm-debug.log*
.git
.gitignore
.env
.env.*
!.env.example
docker-compose*.yml
Dockerfile*
README.md
LICENSE
.vscode
.idea
coverage
.nyc_output
*.md
```
Makefile (convenience commands)
```makefile
# Makefile
.PHONY: dev prod down logs clean test db-shell redis-cli

# Start in development mode
dev:
	docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
	@echo "Development environment started!"
	@echo "Frontend: http://localhost:3001"
	@echo "API:      http://localhost:3000"
	@echo "Database: localhost:5432"

# Start in production mode
prod:
	docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build
	@echo "Production environment started!"
	@echo "App: http://localhost"

# Stop all services
down:
	docker compose down

# View logs
logs:
	docker compose logs -f

# Clean everything (including volumes!)
clean:
	docker compose down -v --rmi local
	@echo "All containers, volumes, and local images removed."

# Run tests in containers
test:
	docker compose -f docker-compose.yml -f docker-compose.dev.yml run --rm api npm test

# Database shell
db-shell:
	docker compose exec database psql -U appuser -d fullstack_app

# Redis CLI
redis-cli:
	docker compose exec cache redis-cli
```
```shell
# Usage:
make dev       # Start development environment
make prod      # Start production environment
make down      # Stop everything
make logs      # View logs
make clean     # Remove everything including data
make test      # Run tests
make db-shell  # Open PostgreSQL shell
```
Deploy Checklist
Before deploying your Dockerized application to production, go through this checklist:
Security
- All containers run as non-root users
- Unnecessary capabilities are dropped (`cap_drop: ALL`)
- `no-new-privileges` is enabled
- Read-only filesystem where possible
- No secrets in image layers (use environment variables or secrets)
- Images scanned with Trivy (no CRITICAL vulnerabilities)
- Database is not exposed to the internet (internal network)
- CORS is configured for the correct origin
Reliability
- Health checks configured for all services
- Restart policy set (`unless-stopped` or `always`)
- Resource limits set (CPU and memory)
- Logging configured with size limits
- Database data persisted with named volumes
- Backup strategy for database volumes
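For the backup item, a minimal sketch using `pg_dump` through the running container (the `-T` flag disables the TTY so the dump can be redirected to a file on the host):

```shell
# Logical backup of the application database, streamed to the host
docker compose exec -T database pg_dump -U appuser -d fullstack_app > backup.sql

# Restore into a (fresh) database the same way, in reverse
docker compose exec -T database psql -U appuser -d fullstack_app < backup.sql
```

For production you would schedule this (cron, CI job) and ship the dumps off-host; a plain file next to the server is not a real backup strategy.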
Performance
- Multi-stage builds used (minimal final images)
- `.dockerignore` excludes unnecessary files
- Nginx gzip compression enabled
- Static assets have cache headers
- Redis configured with appropriate memory limits
Operations
- CI/CD pipeline builds and pushes images automatically
- Rollback strategy defined (previous image tag)
- Monitoring and alerting configured
- `.env.example` committed to version control
- `.env` file is in `.gitignore`
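For the Trivy item on the checklist, a sketch of a scan that fails (non-zero exit code) when serious vulnerabilities are found. The image names are assumptions; by default Compose names built images `<project>-<service>`, so adjust them to whatever `docker images` shows for your build:

```shell
# Scan the API image; exit 1 if HIGH or CRITICAL vulnerabilities exist,
# which makes this command usable as a CI gate
trivy image --severity HIGH,CRITICAL --exit-code 1 fullstack-docker-app-api

# Scan the frontend image the same way
trivy image --severity HIGH,CRITICAL --exit-code 1 fullstack-docker-app-frontend
```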
Course Summary and Next Steps
Congratulations! You have completed the Docker from Scratch Course. Throughout 10 articles, you have gone from the basics of containerization to building a complete, production-ready full-stack application.
Here is a recap of everything you learned:
| Part | Topic |
|---|---|
| Part 1 | What is Docker? Installation and first container |
| Part 2 | Docker images: pulling, building, and managing |
| Part 3 | Dockerfile: building custom images |
| Part 4 | Docker volumes: persistent data storage |
| Part 5 | Docker networking: container communication |
| Part 6 | Docker Compose: multi-container applications |
| Part 7 | Multi-stage builds: image optimization |
| Part 8 | Docker security: best practices and hardening |
| Part 9 | Docker in CI/CD: GitHub Actions automation |
| Part 10 | Final project: full-stack app with Docker |
What comes next?
Now that you have mastered Docker, here are the natural next steps in your DevOps journey:
- Kubernetes: Learn container orchestration at scale. Kubernetes automates deployment, scaling, and management of containerized applications across clusters of machines. It is the industry standard for running containers in production.
- Terraform: Manage your cloud infrastructure as code. Provision servers, networks, and services on AWS, Azure, or GCP with declarative configuration files.
- Monitoring: Set up observability with Prometheus and Grafana for metrics, or the ELK stack (Elasticsearch, Logstash, Kibana) for centralized logging.
- Service Mesh: Explore Istio or Linkerd for advanced networking between microservices: traffic management, security, and observability.
- Cloud-native: Dive deeper into cloud-native patterns like microservices, event-driven architecture, and serverless containers (AWS Fargate, Azure Container Instances, Google Cloud Run).
For more details, check the official Docker documentation: https://docs.docker.com/get-started/docker-overview/