Docker Course #8: Docker Security — Best Practices
Welcome to the Docker Course - Part 8 of 10. In this article, you will learn Docker security best practices to harden your containers, reduce your attack surface, and protect your applications in production.

Containers provide process isolation, but they are not a security boundary by default. A misconfigured container can expose your host system, leak secrets, or provide attackers with a foothold into your infrastructure. Security must be intentional — it does not happen automatically.
In this article, we will cover the most important security practices for Docker containers, from Dockerfile hardening to runtime protections.
Running Containers as Non-Root
By default, containers run as root. This is the single biggest security risk in Docker: unless user namespace remapping is enabled, root inside a container maps to root on the host, so an attacker who escapes the container gains root access to the host system.
Always create and use a non-root user in your Dockerfiles:
```dockerfile
# BAD: Running as root (default)
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci --only=production
CMD ["node", "server.js"]

# GOOD: Running as non-root user
FROM node:20-alpine
WORKDIR /app

# Create a dedicated user and group
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

COPY --chown=appuser:appgroup package*.json ./
RUN npm ci --only=production
COPY --chown=appuser:appgroup . .

# Switch to non-root user BEFORE CMD
USER appuser
EXPOSE 3000
CMD ["node", "server.js"]
```
For different base images, the syntax varies:
```dockerfile
# Alpine-based images
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Debian/Ubuntu-based images
RUN groupadd -r appgroup && useradd -r -g appgroup -s /bin/false appuser

# Some images include a non-root user already
# node images have: USER node
# python images: create your own
```
Use `COPY --chown=user:group` or `RUN chown` to set proper ownership. Otherwise the application may not be able to read its own files.
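As a quick sanity check, a small script can flag Dockerfiles that never switch away from root. This is a hypothetical helper (the function and file names are made up for illustration), not an official Docker tool, and a lint rule in hadolint would do the same job:

```shell
#!/bin/sh
# Hypothetical pre-commit check: warn when a Dockerfile never switches
# to a non-root user with a USER instruction.
check_user() {
    if grep -qE '^[[:space:]]*USER[[:space:]]+' "$1"; then
        echo "OK: USER instruction found in $1"
    else
        echo "WARN: no USER instruction in $1 (container will run as root)"
    fi
}

# Demo on two sample Dockerfiles
printf 'FROM alpine\nUSER appuser\nCMD ["sh"]\n' > Dockerfile.good
printf 'FROM alpine\nCMD ["sh"]\n' > Dockerfile.bad
check_user Dockerfile.good   # OK
check_user Dockerfile.bad    # WARN
```

A check like this is cheap enough to run on every commit, long before an image reaches a registry.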
Choosing Minimal Base Images
Every package in your base image is a potential vulnerability. The fewer packages you have, the smaller your attack surface.
| Image | Size | Packages | Security level |
|---|---|---|---|
| `ubuntu:22.04` | ~77 MB | ~100+ | Low — many unnecessary packages |
| `debian:bookworm-slim` | ~75 MB | ~80+ | Medium — reduced package set |
| `alpine:3.19` | ~7 MB | ~15 | High — minimal package set |
| `gcr.io/distroless/static` | ~2 MB | ~0 | Very high — no shell, no package manager |
| `scratch` | 0 MB | 0 | Maximum — empty image |
```dockerfile
# Best practice: Use the smallest image that works for your application

# For Go (static binary)
FROM scratch
COPY --from=builder /app/server /server
CMD ["/server"]

# For Java
FROM eclipse-temurin:21-jre-alpine
# Not eclipse-temurin:21-jdk (JDK includes compiler, not needed at runtime)

# For Node.js
FROM node:20-alpine
# Not node:20 (full image with build tools)

# For Python
FROM python:3.12-slim
# Not python:3.12 (full image with gcc, make, etc.)
```
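For interpreted runtimes, distroless images push this further than Alpine. As a sketch for Node.js (the image name comes from Google's distroless project, and the `:nonroot` tag is assumed; adjust the tag to your Node version):

```dockerfile
# Build stage: full tooling is fine here, it never ships
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .

# Runtime stage: no shell, no package manager; the :nonroot tag
# also runs as a non-root user by default
FROM gcr.io/distroless/nodejs20-debian12:nonroot
WORKDIR /app
COPY --from=builder /app /app
# The distroless Node image already invokes node as its entrypoint
CMD ["server.js"]
```

The trade-off: with no shell in the image, `docker exec` debugging does not work, so keep a separate debug variant if you need one.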
Scanning Images with Trivy
Trivy is a comprehensive vulnerability scanner for container images. It checks for known CVEs (Common Vulnerabilities and Exposures) in your base image and application dependencies.
```bash
# Install Trivy (macOS)
brew install trivy

# Install Trivy (Linux)
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin

# Scan a Docker image
trivy image myapp:latest

# Scan with severity filter (only HIGH and CRITICAL)
trivy image --severity HIGH,CRITICAL myapp:latest

# Scan and fail if critical vulnerabilities are found (useful in CI)
trivy image --exit-code 1 --severity CRITICAL myapp:latest

# Scan a Dockerfile for misconfigurations
trivy config Dockerfile

# Scan your project dependencies
trivy fs --scanners vuln .
```
Example Trivy output:
```text
$ trivy image --severity HIGH,CRITICAL myapp:latest

myapp:latest (alpine 3.19.1)
=============================
Total: 2 (HIGH: 1, CRITICAL: 1)

+-----------+------------------+----------+-------------------+---------------+
|  LIBRARY  | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION |
+-----------+------------------+----------+-------------------+---------------+
| libcrypto | CVE-2024-XXXX    | CRITICAL | 3.1.4-r1          | 3.1.4-r3      |
| libssl    | CVE-2024-YYYY    | HIGH     | 3.1.4-r1          | 3.1.4-r3      |
+-----------+------------------+----------+-------------------+---------------+
```
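In CI, the same scan can gate the pipeline before anything is pushed. A sketch using the `aquasecurity/trivy-action` GitHub Action (inputs follow that action's documentation; the workflow layout and image name are placeholders for your own setup):

```yaml
# .github/workflows/scan.yml (sketch)
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: HIGH,CRITICAL
          exit-code: "1"        # fail the job on findings
          ignore-unfixed: true  # skip CVEs with no available fix yet
```

`ignore-unfixed` keeps the pipeline from blocking on vulnerabilities you cannot patch yet; drop it if you want the strictest possible gate.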
Never Store Secrets in Image Layers
This is one of the most common Docker security mistakes. Anything you put in a Dockerfile layer is permanently stored in the image, even if you delete it in a later layer.
```dockerfile
# BAD: Secret is permanently baked into the image layers
FROM node:20-alpine
WORKDIR /app
COPY . .
# This .env file with secrets is now in the image forever!
COPY .env .
# Even if you delete it, it is still in the previous layer
RUN rm .env
CMD ["node", "server.js"]

# BAD: Secret visible in build args (shown in docker history)
FROM node:20-alpine
ARG DATABASE_PASSWORD
ENV DB_PASS=${DATABASE_PASSWORD}

# GOOD: Use BuildKit secret mounts (never stored in layers)
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .
CMD ["node", "server.js"]

# GOOD: Pass secrets at runtime via environment variables
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci --only=production
# Secret is provided when running the container, not baked in
# docker run -e DB_PASS=secret123 myapp
CMD ["node", "server.js"]
```
```bash
# Check if secrets are leaked in image layers
docker history myapp:latest

# Inspect image for environment variables
docker inspect myapp:latest | grep -A 10 "Env"
```
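A lightweight pre-build guard can catch the most common mistake before the image is ever built. This is a hypothetical script (the file patterns and directory are illustrative), not a substitute for a real secret scanner such as gitleaks or trufflehog:

```shell
#!/bin/sh
# Hypothetical pre-build guard: flag files in the build context that
# commonly contain secrets, before they can end up in an image layer.
mkdir -p demo-context && cd demo-context
printf 'DB_PASS=secret123\n' > .env   # simulated leaked secret file

found=0
for pattern in '.env' '*.pem' 'id_rsa*'; do
    matches=$(find . -name "$pattern" -not -path './.git/*')
    if [ -n "$matches" ]; then
        echo "WARN: potential secret file(s) matching $pattern: $matches"
        found=1
    fi
done
if [ "$found" -eq 0 ]; then
    echo "OK: no obvious secret files in build context"
fi
```

Pair this with a `.dockerignore` entry for the same patterns so the files never reach the Docker daemon in the first place.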
Read-Only Filesystem and Security Options
Running containers with a read-only filesystem prevents attackers from writing malicious files or modifying your application:
```bash
# Run with read-only root filesystem
docker run --read-only myapp:latest

# Allow writes only to specific directories
docker run --read-only \
  --tmpfs /tmp:rw,noexec,nosuid \
  --tmpfs /var/run:rw,noexec,nosuid \
  -v app-logs:/app/logs \
  myapp:latest
```
In Docker Compose:
```yaml
services:
  api:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid
      - /var/run:rw,noexec,nosuid
    volumes:
      - app-logs:/app/logs
    security_opt:
      - no-new-privileges:true
```
no-new-privileges
The no-new-privileges security option prevents processes inside the container from gaining additional privileges through setuid/setgid binaries:
```bash
# Run with no-new-privileges
docker run --security-opt no-new-privileges myapp:latest
```
This is a simple but powerful protection. It prevents privilege escalation attacks where a process exploits a setuid binary to gain root access.
Linux Capabilities: cap_drop and cap_add
Docker containers run with a default set of Linux capabilities. You should drop all capabilities and add back only the ones your application needs:
```yaml
# docker-compose.yml
services:
  api:
    image: myapp:latest
    cap_drop:
      - ALL              # Drop ALL capabilities
    cap_add:
      - NET_BIND_SERVICE # Only if you need to bind to ports below 1024
    security_opt:
      - no-new-privileges:true
```
```bash
# Command line equivalent
docker run \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  myapp:latest
```
Common capabilities and when you need them:
| Capability | What it allows | When needed |
|---|---|---|
| `NET_BIND_SERVICE` | Bind to ports below 1024 | Web servers on port 80/443 |
| `CHOWN` | Change file ownership | Entrypoint scripts that fix permissions |
| `SETUID`/`SETGID` | Change process UID/GID | Processes that switch users |
| `SYS_PTRACE` | Debug processes | Only for debugging (never in production) |
Start with `cap_drop: ALL` and add capabilities back only when your application fails without them. This is the principle of least privilege.
Network Isolation
Properly isolating container networks prevents unauthorized communication between services:
```yaml
# docker-compose.yml with network isolation
services:
  # Public-facing reverse proxy
  nginx:
    image: nginx:alpine
    ports:
      - "443:443"
    networks:
      - frontend
    read_only: true

  # Application server (not directly exposed)
  api:
    image: myapp:latest
    networks:
      - frontend # Can talk to nginx
      - backend  # Can talk to database
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL

  # Database (only accessible from backend network)
  database:
    image: postgres:16-alpine
    # NO ports exposed to the host!
    networks:
      - backend
    volumes:
      - db-data:/var/lib/postgresql/data
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid
      - /run/postgresql:rw,noexec,nosuid

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true # No external access!

volumes:
  db-data:
```
Key network security principles:
- Do not expose database ports to the host unless absolutely necessary for development
- Use `internal: true` for networks that should not have external access
- Place services in the minimum number of networks they need
- Use a reverse proxy (nginx, Traefik) as the only public-facing service
Docker Content Trust
Docker Content Trust (DCT) ensures that the images you pull are signed and have not been tampered with:
```bash
# Enable Docker Content Trust
export DOCKER_CONTENT_TRUST=1

# Now all docker pull and docker push commands require signed images
docker pull nginx:alpine # Will fail if the image is not signed

# Sign and push your own images
docker trust sign myrepo/myapp:latest

# Inspect trust data
docker trust inspect myrepo/myapp:latest

# View signers
docker trust inspect --pretty myrepo/myapp:latest
```
```dockerfile
# Pin images by digest instead of tag (immutable reference)
FROM node:20-alpine@sha256:abc123def456...
# This guarantees you always get the exact same image
```
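Whether your `FROM` lines are actually pinned can be checked mechanically. A hypothetical helper, demonstrated on a generated sample file (the digest below is a placeholder, not a real one):

```shell
#!/bin/sh
# Hypothetical helper: report which FROM lines in a Dockerfile are
# pinned by digest and which still float on a mutable tag.
printf 'FROM node:20-alpine\nFROM alpine@sha256:0123abc\n' > Dockerfile.demo

grep -E '^[[:space:]]*FROM[[:space:]]+' Dockerfile.demo | while read -r line; do
    case "$line" in
        *@sha256:*) echo "pinned:   $line" ;;
        *)          echo "unpinned: $line" ;;
    esac
done
```

To resolve a tag into a digest for pinning, `docker images --digests` or the registry API will show the current `sha256` value for an image you have pulled.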
Practical Hardened Dockerfile Example
Let us put all the security best practices together into a single, production-ready Dockerfile:
```dockerfile
# syntax=docker/dockerfile:1

# ========================================
# Stage 1: Build
# ========================================
FROM node:20-alpine AS builder
WORKDIR /app

# Install dependencies first (better layer caching)
COPY package*.json ./
RUN npm ci

# Copy source and build
COPY . .
RUN npm run build

# Remove dev dependencies
RUN npm prune --production

# ========================================
# Stage 2: Security scan (optional CI step)
# Note: BuildKit skips stages the final image does not depend on,
# so run this one explicitly with `docker build --target scanner .`
# ========================================
FROM aquasec/trivy:latest AS scanner
COPY --from=builder /app /scan-target
RUN trivy fs --exit-code 1 --severity CRITICAL /scan-target

# ========================================
# Stage 3: Production runtime
# ========================================
FROM node:20-alpine AS runtime

# Security: Install only necessary runtime packages, then clean up
RUN apk add --no-cache dumb-init \
    && rm -rf /var/cache/apk/*

# Security: Create non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

WORKDIR /app

# Security: Copy files with proper ownership
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appgroup /app/package.json ./

# Security: Switch to non-root user
USER appuser

# Security: Use dumb-init to handle signals properly
ENTRYPOINT ["dumb-init", "--"]

EXPOSE 3000
CMD ["node", "dist/index.js"]

# Security metadata labels
LABEL maintainer="[email protected]"
LABEL org.opencontainers.image.source="https://github.com/org/repo"
```
And the corresponding Docker Compose configuration with all security options:
```yaml
# docker-compose.yml - Production hardened
services:
  api:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid,size=64m
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    networks:
      - app-network
    restart: unless-stopped

networks:
  app-network:
    driver: bridge
```
Security Checklist
Use this checklist for every Docker image you deploy to production:
- Run as non-root user (`USER` instruction)
- Use minimal base image (Alpine, slim, or distroless)
- Scan for vulnerabilities (Trivy or equivalent)
- No secrets in image layers
- Read-only root filesystem where possible
- Drop all capabilities (`cap_drop: ALL`)
- Enable no-new-privileges
- Set resource limits (CPU, memory)
- Use multi-stage builds
- Pin base image versions (or use digests)
- Network isolation (internal networks for databases)
- Health checks configured
- Logging configured with size limits
- `.dockerignore` file to exclude sensitive files
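The last item on the checklist deserves an example. A typical `.dockerignore` for a Node.js project might look like this (the entries are a suggested starting point; adjust them to your stack):

```
# .dockerignore: keep secrets and clutter out of the build context
.env
*.pem
id_rsa*
.git
node_modules
npm-debug.log
docker-compose*.yml
```

Excluding `node_modules` and `.git` also makes builds faster, since the daemon no longer has to copy them into the build context.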
For more details, check the official Docker security documentation: https://docs.docker.com/engine/security/
Summary and Next Steps
In this article, you learned the essential Docker security best practices. Here is what we covered:
- Non-root users: Always run containers as non-root
- Minimal base images: Alpine, slim, distroless, or scratch
- Vulnerability scanning: Using Trivy to find and fix CVEs
- Secrets management: Never bake secrets into image layers
- Read-only filesystem: Prevent unauthorized file modifications
- Linux capabilities: Drop all, add only what is needed
- Network isolation: Separate public and private networks
- Docker Content Trust: Image signing and verification
- Hardened Dockerfile: A complete production-ready example