Docker Images & Containers: Dockerfile Explained, docker build, run, exec, and Image Layers
📅 Published: Feb 2026
⏱️ Estimated Reading Time: 18 minutes
🏷️ Tags: Docker, Docker Images, Dockerfile, Containers, Docker Commands, Image Layers
Introduction: From Code to Running Application
When you build a Docker image, you are creating a template for running your application. When you run a container, you are creating a live instance of that template. Understanding how images are built and how containers run is essential for effective Docker usage.
This guide covers the complete lifecycle: writing a Dockerfile, building images, running containers, and understanding the layered architecture that makes Docker efficient.
Part 1: The Dockerfile
What is a Dockerfile?
A Dockerfile is a text file containing instructions for building a Docker image. It is like a recipe. Each instruction adds a layer to the image. Docker reads the Dockerfile and executes the instructions in order.
Dockerfile Instructions
FROM
The FROM instruction sets the base image. Every Dockerfile must start with FROM (only ARG may precede it). You can use official images like ubuntu, node, python, or alpine.
FROM ubuntu:22.04
FROM node:18-alpine
FROM python:3.11-slim
Choosing a base image affects image size and available tools. Alpine images are very small but use musl libc instead of glibc. Slim images are Debian-based with minimal packages.
WORKDIR
The WORKDIR instruction sets the working directory for subsequent instructions. If the directory does not exist, Docker creates it.
WORKDIR /app
WORKDIR /usr/src/app
Always set a WORKDIR. This keeps your Dockerfile clean and avoids confusion about where files are located.
COPY and ADD
COPY copies files from the build context to the image. ADD does the same but can also extract tar files and download from URLs.
COPY package.json /app/
COPY . /app
COPY --from=builder /app/dist ./dist
Use COPY for most cases. Use ADD only when you need the extra features.
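As a sketch of when ADD earns its keep (the archive name here is illustrative; note that ADD auto-extracts local tar archives, but files fetched from URLs are not extracted):

```dockerfile
# COPY for ordinary files and directories
COPY config.yml /app/config.yml
# ADD auto-extracts a local tar archive into the target directory
ADD vendor-libs.tar.gz /opt/vendor/
```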
RUN
RUN executes commands in a new layer on top of the current image. It is used to install packages, create directories, and set up the environment.
RUN apt-get update && apt-get install -y nginx
RUN npm install
RUN pip install -r requirements.txt
Combine related commands into a single RUN to reduce layers. Use && to chain commands.
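Chaining with && matters because each command runs only if the previous one succeeded, so a failed step aborts the whole RUN instead of being silently skipped. A quick shell illustration of the difference:

```shell
# '&&' short-circuits: the second command never runs after a failure
if false && echo "second step"; then
  echo "both steps ran"
else
  echo "chain stopped at the failing step"
fi
```

With `;` instead of `&&`, the second command would run regardless of the failure, and the broken layer could end up cached.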
ENV
ENV sets environment variables that persist in the container.
ENV NODE_ENV=production
ENV PORT=3000
ENV DATABASE_URL=postgres://localhost/mydb
EXPOSE
EXPOSE documents which port the container listens on. It does not actually publish the port. Use the -p flag with docker run to publish ports.
EXPOSE 80
EXPOSE 3000
EXPOSE 8080/tcp
CMD
CMD provides the default command (or default arguments) for a container. If a Dockerfile contains multiple CMD instructions, only the last one takes effect. Arguments passed to docker run override CMD.
CMD ["node", "app.js"]
CMD ["nginx", "-g", "daemon off;"]
CMD python app.py
CMD has three forms:
Exec form (preferred):
CMD ["executable", "param1", "param2"]
Shell form:
CMD command param1 param2
Entrypoint parameters:
CMD ["param1", "param2"]
ENTRYPOINT
ENTRYPOINT configures the container to run as an executable. Unlike CMD, ENTRYPOINT is not replaced by docker run arguments; any arguments you pass are appended to it. To replace ENTRYPOINT itself, use the --entrypoint flag.
ENTRYPOINT ["python", "app.py"]
ENTRYPOINT ["nginx"]
When both ENTRYPOINT and CMD are used, CMD provides default arguments to ENTRYPOINT.
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
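The entrypoint script itself is usually a thin wrapper that performs one-time setup and then hands control to the CMD. A minimal sketch (the setup step is illustrative):

```shell
#!/bin/sh
# docker-entrypoint.sh (sketch): run one-time setup, then hand off to CMD
set -e

echo "running pre-start setup..."   # e.g. render config templates, wait for a DB

# Replace this shell with the command passed in (the CMD, or docker run args),
# so the application becomes PID 1 and receives signals like SIGTERM directly
exec "$@"
```

Because exec replaces the shell, docker stop signals the application itself rather than the wrapper script.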
USER
USER sets the user (and optionally the group) that subsequent instructions and the running container use. Never run containers as root in production.
USER node
USER 1000:1000
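Official images such as node ship with a ready-made non-root user; when a base image does not, you can create one yourself. A minimal sketch, assuming a Debian-based image (the user and group names are illustrative):

```dockerfile
# Create an unprivileged system user and group (names are illustrative)
RUN addgroup --system appgroup && adduser --system --ingroup appgroup appuser
# Hand the application directory over to that user
RUN chown -R appuser:appgroup /app
# Everything from here on, including the running container, uses appuser
USER appuser
```

On Alpine-based images, the equivalent BusyBox flags are `addgroup -S` and `adduser -S -G`.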
ARG
ARG defines build-time variables. They are available only during the build, not in running containers. An ARG declared before the first FROM can be used only in FROM instructions.
ARG NODE_VERSION=18
FROM node:${NODE_VERSION}-alpine
VOLUME
VOLUME creates a mount point for external storage.
VOLUME /data
VOLUME /var/lib/mysql
HEALTHCHECK
HEALTHCHECK tells Docker how to test if the container is working.
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
Complete Dockerfile Examples
Node.js Application
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
EXPOSE 3000
USER node
CMD ["node", "dist/server.js"]
Python Application
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY . .
EXPOSE 8000
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app:app"]
Nginx Static Site
FROM nginx:alpine
# Copy custom configuration
COPY nginx.conf /etc/nginx/nginx.conf
# Copy static files
COPY ./public /usr/share/nginx/html
EXPOSE 80
Part 2: Docker Build
The Build Process
The docker build command reads a Dockerfile and builds an image.
# Basic build
docker build -t myapp .
# Build with tag
docker build -t myapp:v1.0.0 .
# Build with specific Dockerfile
docker build -f Dockerfile.prod -t myapp .
# Build without cache
docker build --no-cache -t myapp .
# Build with build arguments
docker build --build-arg NODE_VERSION=20 -t myapp .
Build Context
The build context is the set of files located at the specified PATH. When you run docker build ., the current directory is the build context. Docker sends the entire build context to the Docker daemon.
To speed up builds and reduce context size, use a .dockerignore file.
node_modules
.git
*.log
.env
.DS_Store
coverage
dist
Build Cache
Docker caches layers. If a layer has not changed, Docker reuses the cached layer. This makes subsequent builds much faster.
Layers are invalidated when:
The instruction itself changes
Files referenced by COPY or ADD change
Once a layer is invalidated, it and every layer after it must be rebuilt.
Order instructions to maximize cache usage. Put frequently changing instructions near the bottom.
# Good: Dependencies first (change rarely)
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Bad: Code first (changes often, invalidates everything)
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
Part 3: Docker Run
Basic Container Operations
# Run a container
docker run nginx
# Run with a name
docker run --name webserver nginx
# Run in detached mode (background)
docker run -d nginx
# Run with port mapping
docker run -p 8080:80 nginx
# Run with volume mount
docker run -v /host/data:/container/data nginx
# Run with environment variables
docker run -e DATABASE_URL=postgres://localhost myapp
# Run and remove automatically when stopped
docker run --rm nginx
# Run with resource limits
docker run --memory=512m --cpus=1 myapp
Port Mapping
Port mapping connects container ports to host ports.
# Map host port 8080 to container port 80
docker run -p 8080:80 nginx
# Map to random host port
docker run -p 80 nginx
# Map multiple ports
docker run -p 80:80 -p 443:443 nginx
# Map to specific host IP
docker run -p 127.0.0.1:8080:80 nginx
Volume Mounting
Volumes persist data beyond the container lifecycle.
# Bind mount (host directory)
docker run -v /host/data:/container/data nginx
# Named volume
docker run -v mydata:/data nginx
# Read-only mount
docker run -v /host/data:/container/data:ro nginx
Environment Variables
# Single variable
docker run -e DATABASE_URL=postgres://localhost myapp
# Multiple variables
docker run -e DATABASE_URL -e API_KEY myapp
# From file
docker run --env-file .env myapp
Part 4: Docker Exec
Running Commands in Containers
The docker exec command runs a new process in a running container.
# Run a command
docker exec mycontainer ls -la
# Interactive shell
docker exec -it mycontainer /bin/bash
# Run as different user
docker exec -u www-data mycontainer whoami
# Set working directory
docker exec -w /app mycontainer ls
Common Use Cases
Debugging
# Get a shell in a running container
docker exec -it myapp /bin/bash
# View processes
docker exec myapp ps aux
# Check network connectivity
docker exec myapp curl localhost:8080/health
Managing Applications
# Run database migrations
docker exec myapp npm run migrate
# Clear cache
docker exec myapp redis-cli FLUSHALL
# Reload configuration
docker exec nginx nginx -s reload
Viewing Logs
# Tail logs
docker exec myapp tail -f /var/log/app.log
# View error logs
docker exec myapp cat /var/log/error.log
Part 5: Image Layers
How Layers Work
Every instruction in a Dockerfile creates a new layer. Layers are stacked on top of each other. When you run a container, Docker adds a writable container layer on top.
Layer 5: CMD ["node", "app.js"] (metadata)
Layer 4: COPY . . (application code)
Layer 3: RUN npm install (dependencies)
Layer 2: COPY package.json ./ (package.json)
Layer 1: FROM node:18-alpine (base image)
Layer Commands
# Show image layers
docker history myapp
# Show image layers with details
docker history --no-trunc myapp
# Show image size breakdown
docker history --human myapp
Layer Efficiency
Why layers are efficient:
Sharing: Multiple images can share base layers. If you have ten Node.js applications, they share the same Node.js base layer.
Caching: Docker caches layers. If a layer hasn't changed, Docker reuses it.
Storage: Layers are stored once. Running ten containers from the same image uses storage for one image plus ten small writable layers.
Layer Best Practices
Minimize layers
Combine related commands into a single RUN.
# Bad: Three layers
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get clean
# Good: One layer
RUN apt-get update && apt-get install -y nginx && apt-get clean
Order layers by change frequency
Put rarely changing instructions first.
# Good: Dependencies first
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Bad: Code first (invalidates everything)
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
Clean up in the same layer
If you install packages and then remove temporary files, do it in the same layer.
RUN apt-get update && \
apt-get install -y build-essential && \
npm install && \
apt-get purge -y build-essential && \
rm -rf /var/lib/apt/lists/*
Use multi-stage builds
Multi-stage builds keep final images small.
# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
Part 6: Common Operations
Managing Images
# List images
docker images
docker image ls
# Remove image
docker rmi myapp
docker image rm myapp
# Remove unused images
docker image prune
docker image prune -a
# Tag image
docker tag myapp:latest myregistry/myapp:v1.0
# Save image to tar file
docker save -o myapp.tar myapp:latest
# Load image from tar file
docker load -i myapp.tar
Managing Containers
# List containers
docker ps                  # Running only
docker ps -a               # All containers
docker ps -q               # Quiet (IDs only)
# Stop container
docker stop mycontainer
# Kill container (force stop)
docker kill mycontainer
# Remove container
docker rm mycontainer
docker rm -f mycontainer   # Force remove running
# Remove stopped containers
docker container prune
# Rename container
docker rename oldname newname
Inspecting
# Show container logs
docker logs mycontainer
docker logs -f mycontainer           # Follow logs
docker logs --tail 100 mycontainer
# Show container processes
docker top mycontainer
# Show container stats
docker stats mycontainer
docker stats                         # All containers
# Show container details
docker inspect mycontainer
docker inspect --format='{{.NetworkSettings.IPAddress}}' mycontainer
# Show container resource usage
docker stats --no-stream mycontainer
Real-World Scenarios
Scenario 1: Building a Development Image
A team needs a consistent development environment for a Python application.
FROM python:3.11-slim
WORKDIR /app
# Install development dependencies
RUN apt-get update && apt-get install -y \
gcc \
git \
vim \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements-dev.txt .
RUN pip install --no-cache-dir -r requirements-dev.txt
# Mount code at runtime (not copied in image)
CMD ["python", "app.py"]
Build and run:
docker build -t myapp-dev .
docker run -v $(pwd):/app -p 8000:8000 myapp-dev
Scenario 2: Optimizing Production Image Size
A Node.js application image is 1.2 GB. It needs to be smaller.
Before optimization:
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
After optimization (multi-stage):
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Result: Image size reduced from 1.2 GB to 150 MB.
Scenario 3: Debugging a Failing Container
A container exits immediately after starting. You need to understand why.
# Check container status
docker ps -a
# View logs
docker logs failing-container
# If logs don't help, run interactive shell
docker run -it myapp /bin/sh
# Override entrypoint to debug
docker run -it --entrypoint /bin/sh myapp
# Inspect container details
docker inspect failing-container
Summary
| Component | Purpose | Key Commands |
|---|---|---|
| Dockerfile | Define how to build an image | FROM, RUN, COPY, CMD |
| Build | Create an image from Dockerfile | docker build |
| Run | Start a container from an image | docker run, -p, -v, -e |
| Exec | Run commands in running containers | docker exec |
| Layers | Efficient image storage and caching | docker history |
Understanding these concepts is essential for creating efficient, secure, and maintainable Docker images.
Practice Questions
What is the difference between CMD and ENTRYPOINT?
Why should you order Dockerfile instructions from least to most frequently changing?
How do multi-stage builds reduce image size?
What is the build context and why does it matter?
How do you debug a container that exits immediately after starting?
Learn More
Practice Docker images and containers with hands-on exercises in our interactive labs:
https://devops.trainwithsky.com/