
Docker Images & Containers: Dockerfile Explained, docker build, run, exec, and Image Layers

📅 Published: Feb 2026
⏱️ Estimated Reading Time: 18 minutes
🏷️ Tags: Docker, Docker Images, Dockerfile, Containers, Docker Commands, Image Layers


Introduction: From Code to Running Application

When you build a Docker image, you are creating a template for running your application. When you run a container, you are creating a live instance of that template. Understanding how images are built and how containers run is essential for effective Docker usage.

This guide covers the complete lifecycle: writing a Dockerfile, building images, running containers, and understanding the layered architecture that makes Docker efficient.


Part 1: The Dockerfile

What is a Dockerfile?

A Dockerfile is a text file containing instructions for building a Docker image. It is like a recipe. Each instruction adds a layer to the image. Docker reads the Dockerfile and executes the instructions in order.
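
As a minimal sketch, a complete Dockerfile can be just a few instructions (the app.py file here is hypothetical):

dockerfile
# Base image: official Python, slim variant
FROM python:3.11-slim
# All following paths are relative to /app
WORKDIR /app
# Copy the application in from the build context
COPY app.py .
# Default command when a container starts
CMD ["python", "app.py"]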

Dockerfile Instructions

FROM
The FROM instruction sets the base image. Every Dockerfile must start with FROM. You can use official images like ubuntu, node, python, or alpine.

dockerfile
FROM ubuntu:22.04
FROM node:18-alpine
FROM python:3.11-slim

Choosing a base image affects image size and available tools. Alpine images are very small but use musl libc instead of glibc. Slim images are Debian-based with minimal packages.

WORKDIR
The WORKDIR instruction sets the working directory for subsequent instructions. If the directory does not exist, Docker creates it.

dockerfile
WORKDIR /app
WORKDIR /usr/src/app

Always set a WORKDIR. This keeps your Dockerfile clean and avoids confusion about where files are located.

COPY and ADD
COPY copies files and directories from the build context into the image. ADD does the same but can also auto-extract local tar archives and fetch files from URLs (URL downloads are not extracted).

dockerfile
COPY package.json /app/
COPY . /app
COPY --from=builder /app/dist ./dist

Use COPY for most cases. Use ADD only when you need the extra features.
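
The tar-extraction difference looks like this (the archive name is hypothetical):

dockerfile
# ADD auto-extracts a local tar archive into the target directory
ADD rootfs.tar.gz /opt/app/
# COPY places the archive as-is, without extracting it
COPY rootfs.tar.gz /opt/app/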

RUN
RUN executes commands in a new layer on top of the current image. It is used to install packages, create directories, and set up the environment.

dockerfile
RUN apt-get update && apt-get install -y nginx
RUN npm install
RUN pip install -r requirements.txt

Combine related commands into a single RUN to reduce layers. Use && to chain commands.

ENV
ENV sets environment variables that persist in the container.

dockerfile
ENV NODE_ENV=production
ENV PORT=3000
ENV DATABASE_URL=postgres://localhost/mydb

EXPOSE
EXPOSE documents which port the container listens on. It does not actually publish the port. Use the -p flag with docker run to publish ports.

dockerfile
EXPOSE 80
EXPOSE 3000
EXPOSE 8080/tcp

CMD
CMD provides defaults for an executing container. There can be only one CMD per Dockerfile. If you provide arguments to docker run, they override CMD.

dockerfile
CMD ["node", "app.js"]
CMD ["nginx", "-g", "daemon off;"]
CMD python app.py

CMD has three forms:

  • Exec form (preferred): CMD ["executable", "param1", "param2"]

  • Shell form: CMD command param1 param2

  • Entrypoint parameters: CMD ["param1", "param2"]
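
The practical difference between the first two forms is how the process is launched. The shell form wraps the command in /bin/sh -c, so the shell, not your application, becomes PID 1 and receives signals such as SIGTERM:

dockerfile
# Exec form: node runs as PID 1 and receives SIGTERM on docker stop
CMD ["node", "app.js"]

# Shell form: actually runs /bin/sh -c "node app.js";
# the shell is PID 1 and may not forward SIGTERM to node
CMD node app.js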

ENTRYPOINT
ENTRYPOINT configures the container to run as an executable. Unlike CMD, ENTRYPOINT is not replaced by arguments passed to docker run; those arguments are appended to it. Overriding the ENTRYPOINT itself requires the --entrypoint flag.

dockerfile
ENTRYPOINT ["python", "app.py"]
ENTRYPOINT ["nginx"]

When both ENTRYPOINT and CMD are used, CMD provides default arguments to ENTRYPOINT.

dockerfile
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
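
With the Dockerfile above, the interaction plays out like this at run time (the image name myimage is hypothetical):

bash
# Runs: docker-entrypoint.sh nginx -g "daemon off;"  (ENTRYPOINT + default CMD)
docker run myimage

# Runs: docker-entrypoint.sh nginx -t  (arguments replace CMD, not ENTRYPOINT)
docker run myimage nginx -t

# Only --entrypoint replaces the ENTRYPOINT itself
docker run -it --entrypoint /bin/sh myimage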

USER
USER sets the user (and optionally the group) that subsequent instructions and the container's main process run as. Avoid running containers as root in production.

dockerfile
USER node
USER 1000:1000

ARG
ARG defines build-time variables, which can be set with the --build-arg flag. Unlike ENV, they are not available in running containers.

dockerfile
ARG NODE_VERSION=18
FROM node:${NODE_VERSION}-alpine
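
Because ARG values do not survive into the running container, a common pattern is to copy a build argument into an ENV variable when it is also needed at run time (the variable name here is illustrative):

dockerfile
ARG APP_VERSION=dev
# Persist the build-time value as a runtime environment variable
ENV APP_VERSION=${APP_VERSION}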

VOLUME
VOLUME marks a path as a mount point for external storage. If no volume is attached there at run time, Docker creates an anonymous volume for it.

dockerfile
VOLUME /data
VOLUME /var/lib/mysql

HEALTHCHECK
HEALTHCHECK tells Docker how to test if the container is working.

dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
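
Once a HEALTHCHECK is defined, the result appears in docker ps (as starting, healthy, or unhealthy) and can be queried directly (the container name is hypothetical):

bash
# Health state appears in the STATUS column
docker ps

# Query just the health status of one container
docker inspect --format='{{.State.Health.Status}}' mycontainer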

Complete Dockerfile Examples

Node.js Application

dockerfile
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist

EXPOSE 3000
USER node
CMD ["node", "dist/server.js"]

Python Application

dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY . .

EXPOSE 8000
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app:app"]

Nginx Static Site

dockerfile
FROM nginx:alpine

# Copy custom configuration
COPY nginx.conf /etc/nginx/nginx.conf

# Copy static files
COPY ./public /usr/share/nginx/html

EXPOSE 80

Part 2: Docker Build

The Build Process

The docker build command reads a Dockerfile and builds an image.

bash
# Basic build
docker build -t myapp .

# Build with tag
docker build -t myapp:v1.0.0 .

# Build with specific Dockerfile
docker build -f Dockerfile.prod -t myapp .

# Build without cache
docker build --no-cache -t myapp .

# Build with build arguments
docker build --build-arg NODE_VERSION=20 -t myapp .

Build Context

The build context is the set of files located at the specified PATH. When you run docker build ., the current directory is the build context. Docker sends the entire build context to the Docker daemon.

To speed up builds and reduce context size, use a .dockerignore file.

dockerignore
node_modules
.git
*.log
.env
.DS_Store
coverage
dist

Build Cache

Docker caches layers. If a layer has not changed, Docker reuses the cached layer. This makes subsequent builds much faster.

A layer's cache is invalidated when:

  • The instruction itself changes

  • The files referenced by COPY or ADD change

Once a layer is invalidated, every layer after it is rebuilt.

Order instructions to maximize cache usage. Put frequently changing instructions near the bottom.

dockerfile
# Good: Dependencies first (change rarely)
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Bad: Code first (changes often, invalidates everything)
FROM node:18
WORKDIR /app
COPY . .
RUN npm install

Part 3: Docker Run

Basic Container Operations

bash
# Run a container
docker run nginx

# Run with a name
docker run --name webserver nginx

# Run in detached mode (background)
docker run -d nginx

# Run with port mapping
docker run -p 8080:80 nginx

# Run with volume mount
docker run -v /host/data:/container/data nginx

# Run with environment variables
docker run -e DATABASE_URL=postgres://localhost myapp

# Run and remove automatically when stopped
docker run --rm nginx

# Run with resource limits
docker run --memory=512m --cpus=1 myapp

Port Mapping

Port mapping connects container ports to host ports.

bash
# Map host port 8080 to container port 80
docker run -p 8080:80 nginx

# Map to random host port
docker run -p 80 nginx

# Map multiple ports
docker run -p 80:80 -p 443:443 nginx

# Map to specific host IP
docker run -p 127.0.0.1:8080:80 nginx
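
To see which host ports a container's ports were actually mapped to (useful with random mappings), docker port lists them (the container name is hypothetical):

bash
# List all port mappings for the container
docker port webserver

# Show the mapping for one container port
docker port webserver 80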

Volume Mounting

Volumes persist data beyond the container lifecycle.

bash
# Bind mount (host directory)
docker run -v /host/data:/container/data nginx

# Named volume
docker run -v mydata:/data nginx

# Read-only mount
docker run -v /host/data:/container/data:ro nginx

Environment Variables

bash
# Single variable
docker run -e DATABASE_URL=postgres://localhost myapp

# Multiple variables (with no value given, each is read from the host environment)
docker run -e DATABASE_URL -e API_KEY myapp

# From file
docker run --env-file .env myapp
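
The file passed to --env-file is plain KEY=VALUE lines, one per line; values are taken literally, with no shell quoting or expansion (the values below are placeholders):

env
DATABASE_URL=postgres://db:5432/mydb
API_KEY=replace-me
NODE_ENV=production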

Part 4: Docker Exec

Running Commands in Containers

The docker exec command runs a new process in a running container.

bash
# Run a command
docker exec mycontainer ls -la

# Interactive shell
docker exec -it mycontainer /bin/bash

# Run as different user
docker exec -u www-data mycontainer whoami

# Set working directory
docker exec -w /app mycontainer ls

Common Use Cases

Debugging

bash
# Get a shell in a running container
docker exec -it myapp /bin/bash

# View processes
docker exec myapp ps aux

# Check network connectivity
docker exec myapp curl localhost:8080/health

Managing Applications

bash
# Run database migrations
docker exec myapp npm run migrate

# Clear cache
docker exec myapp redis-cli FLUSHALL

# Reload configuration
docker exec nginx nginx -s reload

Viewing Logs

bash
# Tail logs
docker exec myapp tail -f /var/log/app.log

# View error logs
docker exec myapp cat /var/log/error.log

Part 5: Image Layers

How Layers Work

Every instruction in a Dockerfile creates a new layer. Layers are stacked on top of each other. When you run a container, Docker adds a writable container layer on top.

text
Layer 5: CMD ["node", "app.js"]     (metadata)
Layer 4: COPY . .                    (application code)
Layer 3: RUN npm install             (dependencies)
Layer 2: COPY package.json ./        (package.json)
Layer 1: FROM node:18-alpine         (base image)

Layer Commands

bash
# Show image layers
docker history myapp

# Show image layers with details
docker history --no-trunc myapp

# Show image size breakdown
docker history --human myapp

Layer Efficiency

Why layers are efficient:

  • Sharing: Multiple images can share base layers. If you have ten Node.js applications, they share the same Node.js base layer.

  • Caching: Docker caches layers. If a layer hasn't changed, Docker reuses it.

  • Storage: Layers are stored once. Running ten containers from the same image uses storage for one image plus ten small writable layers.
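
You can observe layer sharing directly: the layer digests reported for two images built from the same base will overlap (the image names here are hypothetical):

bash
# Print the content-addressable layer digests of an image
docker image inspect --format '{{json .RootFS.Layers}}' myapp-one
docker image inspect --format '{{json .RootFS.Layers}}' myapp-two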

Layer Best Practices

Minimize layers

Combine related commands into a single RUN.

dockerfile
# Bad: three layers (files removed by clean still occupy the earlier layers)
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get clean

# Good: one layer, cleaned up before the layer is committed
RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*

Order layers by change frequency

Put rarely changing instructions first.

dockerfile
# Good: Dependencies first
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Bad: Code first (invalidates everything)
FROM node:18
WORKDIR /app
COPY . .
RUN npm install

Clean up in the same layer

If you install packages and then remove temporary files, do it in the same layer.

dockerfile
RUN apt-get update && \
    apt-get install -y build-essential && \
    npm install && \
    apt-get purge -y build-essential && \
    rm -rf /var/lib/apt/lists/*

Use multi-stage builds

Multi-stage builds keep final images small.

dockerfile
# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]

Part 6: Common Operations

Managing Images

bash
# List images
docker images
docker image ls

# Remove image
docker rmi myapp
docker image rm myapp

# Remove unused images
docker image prune
docker image prune -a

# Tag image
docker tag myapp:latest myregistry/myapp:v1.0

# Save image to tar file
docker save -o myapp.tar myapp:latest

# Load image from tar file
docker load -i myapp.tar

Managing Containers

bash
# List containers
docker ps          # Running only
docker ps -a       # All containers
docker ps -q       # Quiet (IDs only)

# Stop container
docker stop mycontainer

# Kill container (force stop)
docker kill mycontainer

# Remove container
docker rm mycontainer
docker rm -f mycontainer  # Force remove running

# Remove stopped containers
docker container prune

# Rename container
docker rename oldname newname

Inspecting

bash
# Show container logs
docker logs mycontainer
docker logs -f mycontainer  # Follow logs
docker logs --tail 100 mycontainer

# Show container processes
docker top mycontainer

# Show container stats
docker stats mycontainer
docker stats  # All containers

# Show container details
docker inspect mycontainer
docker inspect --format='{{.NetworkSettings.IPAddress}}' mycontainer

# Show container resource usage
docker stats --no-stream mycontainer

Real-World Scenarios

Scenario 1: Building a Development Image

A team needs a consistent development environment for a Python application.

dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install development dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    git \
    vim \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements-dev.txt .
RUN pip install --no-cache-dir -r requirements-dev.txt

# Mount code at runtime (not copied in image)
CMD ["python", "app.py"]

Build and run:

bash
docker build -t myapp-dev .
docker run -v $(pwd):/app -p 8000:8000 myapp-dev

Scenario 2: Optimizing Production Image Size

A Node.js application image is 1.2 GB. It needs to be smaller.

Before optimization:

dockerfile
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]

After optimization (multi-stage):

dockerfile
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Result: Image size reduced from 1.2 GB to 150 MB.


Scenario 3: Debugging a Failing Container

A container exits immediately after starting. You need to understand why.

bash
# Check container status
docker ps -a

# View logs
docker logs failing-container

# If logs don't help, run interactive shell
docker run -it myapp /bin/sh

# Override entrypoint to debug
docker run -it --entrypoint /bin/sh myapp

# Inspect container details
docker inspect failing-container

Summary

Component    Purpose                               Key Commands
Dockerfile   Define how to build an image          FROM, RUN, COPY, CMD
Build        Create an image from a Dockerfile     docker build
Run          Start a container from an image       docker run (-p, -v, -e)
Exec         Run commands in running containers    docker exec
Layers       Efficient image storage and caching   docker history

Understanding these concepts is essential for creating efficient, secure, and maintainable Docker images.


Practice Questions

  1. What is the difference between CMD and ENTRYPOINT?

  2. Why should you order Dockerfile instructions from least to most frequently changing?

  3. How do multi-stage builds reduce image size?

  4. What is the build context and why does it matter?

  5. How do you debug a container that exits immediately after starting?


Learn More

Practice Docker images and containers with hands-on exercises in our interactive labs:
https://devops.trainwithsky.com/
