Docker Interview & Practice: Docker Scenarios and Hands-on Projects

📅 Published: Feb 2026
⏱️ Estimated Reading Time: 20 minutes
🏷️ Tags: Docker Interview, Containerization, DevOps Interview, Docker Scenarios, Hands-on Projects


Introduction: What Docker Interviewers Look For

Docker interviews test your ability to containerize applications, troubleshoot container issues, and design multi-container architectures. Interviewers want to see that you understand not just commands, but the concepts behind containerization.

The most valued Docker skills in interviews are:

  • Understanding of container fundamentals (images, containers, layers)

  • Ability to write efficient Dockerfiles

  • Experience with multi-container applications and Docker Compose

  • Knowledge of networking and storage patterns

  • Security best practices

  • Troubleshooting container issues

This guide covers the questions you are likely to face and the hands-on projects that will prove your skills.


Part 1: Docker Interview Questions

Foundational Questions

Q1: Explain the difference between a Docker image and a container.

A Docker image is a read-only template containing application code, libraries, dependencies, and configuration. It is the blueprint for creating containers. An image does not run; it is stored on disk.

A Docker container is a runnable instance of an image. It adds a writable layer on top of the image layers. When you start a container, the image becomes a running process with its own isolated filesystem, network, and process space.

You can create many containers from the same image. Each container is isolated but shares the read-only image layers.
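As a quick illustration (assuming the standard `nginx` image and hypothetical container names), the same image can back several isolated containers:

```shell
# One read-only image...
docker pull nginx:1.25

# ...two independent containers, each with its own writable layer,
# network namespace, and process space
docker run -d --name web1 nginx:1.25
docker run -d --name web2 nginx:1.25

# Both appear as separate running instances of the same image
docker ps --filter "name=web"
```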

Q2: What is the difference between CMD and ENTRYPOINT?

CMD provides default arguments for the container. Anything you pass after the image name on `docker run` replaces CMD entirely.

ENTRYPOINT configures the container to run as a fixed executable. Arguments passed on the command line are appended to the ENTRYPOINT rather than replacing it; only the `--entrypoint` flag overrides the ENTRYPOINT itself.

When both are used together, ENTRYPOINT defines the executable and CMD provides its default arguments.

```dockerfile
ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]
```
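With that Dockerfile (hypothetical image name `mynginx`), the override behavior looks roughly like this:

```shell
# No arguments: ENTRYPOINT + CMD  =>  nginx -g "daemon off;"
docker run mynginx

# Arguments replace CMD but append to ENTRYPOINT  =>  nginx -T
docker run mynginx -T

# Only the --entrypoint flag replaces the ENTRYPOINT itself
docker run --entrypoint /bin/sh -it mynginx
```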

Q3: How do you reduce Docker image size?

Several techniques reduce image size:

  • Use minimal base images (alpine, slim)

  • Use multi-stage builds to exclude build tools

  • Combine RUN commands to reduce layers

  • Clean package manager cache in the same layer

  • Remove temporary files and dependencies

  • Use .dockerignore to exclude unnecessary files

A typical Node.js application image can shrink from roughly 1 GB to around 150 MB using these techniques.
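A minimal sketch of two of these techniques, using a slim base image and cleaning the package cache in the same layer that created it (package names are illustrative):

```dockerfile
# Slim base instead of the full distribution image
FROM python:3.12-slim

# Install and clean up in ONE RUN instruction, so the apt cache
# is never baked into an earlier, immutable layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
```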

Q4: Explain Docker layers and how caching works.

Every instruction in a Dockerfile creates a layer. Layers are stacked on top of each other. When you build an image, Docker caches each layer. If a layer has not changed, Docker reuses the cached layer.

This is why instruction ordering matters. Put rarely changing instructions (FROM, dependency installation via RUN) before frequently changing ones (COPY . .). This maximizes cache reuse and speeds up builds.
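For example, in a Node.js build (a sketch), copying only the dependency manifest before the source keeps the expensive install layer cached across code-only changes:

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Changes rarely -> these layers stay cached across most builds
COPY package*.json ./
RUN npm ci

# Changes on every commit -> placed last so only this layer rebuilds
COPY . .
CMD ["node", "server.js"]
```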

Q5: How do you persist data in containers?

Containers are ephemeral. Data written to the container's writable layer is lost when the container is removed.

To persist data, use volumes or bind mounts:

  • Volumes: Managed by Docker, stored in /var/lib/docker/volumes/. Preferred for production.

  • Bind mounts: Map a host directory into the container. Good for development.

```bash
docker run -v myvolume:/data postgres
docker run -v $(pwd):/app node:18
```

Intermediate Questions

Q6: What are the different Docker network drivers and when would you use them?

| Driver | Use Case |
|---|---|
| bridge | Default for standalone containers. Good for single-host applications. |
| host | Removes network isolation. Use for performance-critical applications. |
| overlay | Connects containers across multiple hosts. Required for Docker Swarm. |
| macvlan | Assigns MAC addresses to containers. For legacy applications expecting a physical network. |
| none | No network. For completely isolated containers. |

For most applications, user-defined bridge networks provide the best balance of isolation and functionality.
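A user-defined bridge also gives you automatic container-name DNS, which the default bridge does not. A sketch with hypothetical container and image names:

```shell
docker network create app-net

docker run -d --name api   --network app-net myapi
docker run -d --name cache --network app-net redis:7

# "cache" resolves by container name inside the network
# (assumes ping is present in the image)
docker exec api ping -c 1 cache
```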

Q7: How do you handle configuration in containers?

Several approaches:

  • Environment variables: Simple, good for non-sensitive config

  • Configuration files via bind mounts: Mount config files at runtime

  • Config services: Use Consul, etcd, or similar

  • Secrets managers: For sensitive data (Vault, AWS Secrets Manager)

In development, bind mounts are convenient. In production, prefer environment variables or configuration services.
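The first two approaches, sketched with `docker run` (image name and file paths are hypothetical):

```shell
# Environment variables for simple, non-sensitive settings
docker run -e LOG_LEVEL=debug -e PORT=3000 myapp

# Bind-mount a config file read-only at runtime
docker run -v "$(pwd)/config.yml:/etc/myapp/config.yml:ro" myapp
```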

Q8: Explain the difference between COPY and ADD.

COPY copies files from the build context to the image. It is simple and transparent.

ADD has additional features:

  • Can extract tar files automatically

  • Can download files from remote URLs

Because ADD can have unexpected behavior, COPY is preferred for most cases. Use ADD only when you need the extra features.

Q9: How do you debug a container that exits immediately?

When a container exits immediately, use these techniques:

```bash
# Check logs
docker logs mycontainer

# Run with an interactive shell to see the issue
docker run -it myapp /bin/sh

# Override the entrypoint to debug
docker run -it --entrypoint /bin/sh myapp

# Inspect the container after it exits
docker inspect mycontainer
```

Common causes: missing dependencies, incorrect CMD, configuration errors, or insufficient permissions.
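The exit code itself often points at the cause (container name assumed to be `mycontainer`):

```shell
# Common codes: 1 = application error, 125 = Docker daemon error,
# 126 = command not executable, 127 = command not found,
# 137 = SIGKILL (often the OOM killer)
docker inspect --format '{{.State.ExitCode}}' mycontainer

# Was the container killed for exceeding its memory limit?
docker inspect --format '{{.State.OOMKilled}}' mycontainer
```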

Q10: What is Docker Swarm and how does it compare to Kubernetes?

Docker Swarm is Docker's native orchestration solution. It is simpler than Kubernetes and integrates directly with Docker commands.

| Aspect | Docker Swarm | Kubernetes |
|---|---|---|
| Complexity | Simple | Complex |
| Setup | Minutes | Hours |
| Learning curve | Gentle | Steep |
| Features | Basic | Extensive |
| Community | Smaller | Large |
| Use case | Simple workloads, smaller teams | Complex applications, large organizations |
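Swarm's simplicity shows in its CLI; a single-node cluster and a replicated service take only a few commands:

```shell
# Initialize a single-node swarm on this host
docker swarm init

# Run three replicas of a service behind a published port
docker service create --name web --replicas 3 -p 80:80 nginx:1.25

# Check replica placement and status
docker service ps web
```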

Advanced Questions

Q11: How do you secure a Docker container?

Security requires multiple layers:

  • Image: Use official images, scan for vulnerabilities, run as non-root

  • Runtime: Drop capabilities, use read-only root, set resource limits

  • Network: Use custom networks, avoid host mode

  • Host: Keep Docker updated, use TLS for remote access

  • Secrets: Never embed secrets, use secrets managers

```bash
docker run \
  --user 1000:1000 \
  --read-only \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  --memory=512m \
  --cpus=0.5 \
  myapp
```

Q12: How do you handle database migrations in containers?

Database migrations require careful ordering. Common patterns:

Pattern 1: Run migrations before app starts

```yaml
services:
  app:
    image: myapp
    command: sh -c "npm run migrate && npm start"
    depends_on:
      - db
```

Pattern 2: Separate migration container

```yaml
services:
  migrate:
    image: myapp
    command: npm run migrate
    depends_on:
      - db

  app:
    image: myapp
    command: npm start
    depends_on:
      - migrate
```

Pattern 3: Entrypoint script

dockerfile
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
bash
#!/bin/bash
npm run migrate
exec npm start

Q13: How do you implement health checks?

Health checks tell Docker if your container is working correctly.

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
```

In docker-compose.yml:

```yaml
services:
  web:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
```
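Once a health check is defined, Docker tracks the state, and Compose's `depends_on` with `condition: service_healthy` relies on it. To check the state (assuming a container named `web`):

```shell
# Prints one of: starting | healthy | unhealthy
docker inspect --format '{{.State.Health.Status}}' web

# Recent probe output, useful when debugging a failing check
docker inspect --format '{{json .State.Health.Log}}' web
```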

Q14: What are the best practices for Docker in production?

  • Use specific image tags, never latest

  • Run as non-root user

  • Use read-only root filesystem

  • Set resource limits

  • Scan images before deployment

  • Use orchestration (Swarm, Kubernetes)

  • Implement logging and monitoring

  • Rotate secrets regularly

  • Keep images minimal

  • Use multi-stage builds

Q15: How do you backup and restore Docker volumes?

Backup:

```bash
docker run --rm -v myvolume:/data -v $(pwd):/backup alpine \
  tar czf /backup/volume-backup.tar.gz -C /data .
```

Restore:

```bash
docker run --rm -v myvolume:/data -v $(pwd):/backup alpine \
  tar xzf /backup/volume-backup.tar.gz -C /data
```

Part 2: Docker Scenarios

Scenario 1: Container Not Starting

Problem: A container exits immediately after starting. You need to diagnose and fix the issue.

Symptoms:

```bash
docker ps -a
# CONTAINER ID   STATUS
# abc123         Exited (1) 2 seconds ago
```

Troubleshooting steps:

```bash
# 1. Check logs
docker logs abc123
# Output: Error: Cannot find module '/app/server.js'

# 2. Inspect the container
docker inspect abc123

# 3. Run with an interactive shell
docker run -it myapp /bin/sh
# Inside the container, check whether the files exist
ls -la /app
# Output: server.js missing

# 4. Check the Dockerfile
cat Dockerfile
# The COPY instruction is missing!

# 5. Fix the Dockerfile by adding:
#    COPY . /app

# 6. Rebuild and test
docker build -t myapp .
docker run myapp
```

Resolution: Add missing COPY instruction to Dockerfile.


Scenario 2: Container Can't Connect to Database

Problem: Application container cannot connect to database container.

Symptoms:

```text
Error: connect ECONNREFUSED 127.0.0.1:5432
```

Troubleshooting steps:

```bash
# 1. Check if the database container is running
docker ps | grep postgres

# 2. Check if both containers are on the same network
docker inspect app | jq '.[0].NetworkSettings.Networks'
docker inspect db | jq '.[0].NetworkSettings.Networks'

# 3. If not on the same network, connect them
docker network create mynetwork
docker network connect mynetwork app
docker network connect mynetwork db

# 4. Check if the app is using the correct hostname
docker exec app env | grep DATABASE_URL
# Should use the container name, not localhost

# 5. Test connectivity
docker exec app ping db
```

Resolution: Ensure both containers are on the same network and use the container name, not localhost, as the hostname.


Scenario 3: Image Too Large

Problem: Docker image is 1.2 GB. It needs to be smaller for faster deployments.

Before:

```dockerfile
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "server.js"]
```

Optimized version:

```dockerfile
# Multi-stage build
# The builder uses the same alpine base as the final stage so any
# native modules are compiled against the same libc (musl)
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001 -G nodejs
USER nodejs
WORKDIR /app
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Result: Image size reduced from 1.2 GB to 150 MB.


Scenario 4: Permissions Issue with Bind Mount

Problem: Application cannot write to bind-mounted directory.

Symptoms:

```text
Error: EACCES: permission denied, open '/app/data/output.txt'
```

Troubleshooting steps:

```bash
# 1. Check host file permissions
ls -la $(pwd)/data
# drwxr-xr-x 2 user user 4096 data

# 2. Check the container user
docker run --rm myapp id
# uid=1000(node) gid=1000(node)

# 3. Run with the matching user
docker run -u $(id -u):$(id -g) -v $(pwd)/data:/app/data myapp

# 4. Or relax permissions on the host
#    (quick fix only; world-writable is unsafe for production)
chmod 777 $(pwd)/data
```

Resolution: Match container user ID to host user ID, or use named volumes instead of bind mounts.
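The named-volume alternative sidesteps the UID mismatch, because Docker initializes an empty named volume with the ownership the image set on the mount point (image name `myapp` assumed from the scenario):

```shell
# Create a named volume and mount it where the app writes
docker volume create app-data
docker run -v app-data:/app/data myapp
```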


Scenario 5: Port Already in Use

Problem: Cannot start container because port 80 is already in use.

Symptoms:

```text
docker: Error response from daemon: driver failed programming external connectivity
Bind for 0.0.0.0:80 failed: port is already allocated.
```

Troubleshooting steps:

```bash
# 1. Find what's using port 80
sudo lsof -i :80
# or
netstat -tulpn | grep :80

# 2. Stop the conflicting container
docker stop existing-nginx

# 3. Or change the port mapping
docker run -p 8080:80 nginx
```

Resolution: Stop conflicting container or use different host port.


Part 3: Hands-on Projects

Project 1: Containerize a Web Application

Goal: Take a simple web application and create a Docker image for it.

Requirements:

  • Create a Dockerfile

  • Build the image

  • Run the container

  • Verify the application works

Steps:

  1. Create a simple web application (Node.js example):

```javascript
// app.js
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello from Docker!\n');
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
```

  2. Create package.json:

```json
{
  "name": "docker-demo",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  }
}
```

  3. Create Dockerfile:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
USER node
EXPOSE 3000
CMD ["node", "app.js"]
```

  4. Build and run:

```bash
docker build -t web-app .
docker run -p 3000:3000 web-app
curl localhost:3000
```

Project 2: Multi-Container Application with Docker Compose

Goal: Create a multi-container application with a web server, application, and database.

Requirements:

  • Node.js application connected to PostgreSQL

  • All services defined in docker-compose.yml

  • Persistent data for database

docker-compose.yml:

```yaml
version: '3.8'

services:
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: apppass
      POSTGRES_DB: myapp
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser"]
      interval: 10s
      timeout: 5s
      retries: 5

  app:
    build: ./app
    environment:
      DATABASE_URL: postgresql://appuser:apppass@db:5432/myapp
    ports:
      - "3000:3000"
    volumes:
      - ./app:/app
      - /app/node_modules
    networks:
      - app-network
    depends_on:
      db:
        condition: service_healthy

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
```

app/Dockerfile:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

app/server.js:

```javascript
const { Client } = require('pg');
const express = require('express');
const app = express();

const client = new Client({
  connectionString: process.env.DATABASE_URL
});

// Exit if the database is unreachable so the orchestrator can restart us
client.connect().catch((err) => {
  console.error('DB connection failed:', err);
  process.exit(1);
});

app.get('/', async (req, res) => {
  const result = await client.query('SELECT NOW()');
  res.json({ time: result.rows[0] });
});

app.listen(3000, () => console.log('App running on port 3000'));
```

Run:

```bash
docker-compose up -d
curl http://localhost:3000
docker-compose down -v  # Clean up
```

Project 3: Development Environment with Hot Reload

Goal: Create a development environment where code changes are reflected immediately.

docker-compose.dev.yml:

```yaml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "3000:3000"
      - "9229:9229"  # Debug port
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev
```

package.json:

```json
{
  "scripts": {
    "dev": "nodemon --inspect=0.0.0.0:9229 server.js"
  },
  "devDependencies": {
    "nodemon": "^3.0.0"
  }
}
```

Run:

```bash
docker-compose -f docker-compose.dev.yml up
# Edit code, see changes immediately
```

Project 4: CI/CD Pipeline with Docker

Goal: Create a GitHub Actions workflow that builds, tests, and pushes a Docker image.

.github/workflows/docker.yml:

```yaml
name: Docker Build and Push

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}

      - name: Build and test
        run: |
          docker build -t myapp:test .
          docker run --rm myapp:test npm test

      - name: Build and push
        if: github.ref == 'refs/heads/main'
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: |
            myuser/myapp:latest
            myuser/myapp:${{ github.sha }}

      - name: Scan image
        run: |
          # Scan the tag built in this job (myapp:latest is never built here)
          docker scout cves myapp:test
```

Project 5: Production-Ready Container

Goal: Create a hardened, production-ready Docker image.

Dockerfile.prod:

```dockerfile
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

# Production stage
FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001 -G nodejs

WORKDIR /app
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs . .

# Security hardening
USER nodejs
ENV NODE_ENV=production

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"

EXPOSE 3000
CMD ["node", "server.js"]
```

Run:

```bash
docker build -f Dockerfile.prod -t myapp:prod .
docker run -d \
  --name myapp \
  --user 1001:1001 \
  --read-only \
  --tmpfs /tmp:rw,noexec,nosuid,size=100m \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  --memory=512m \
  --cpus=0.5 \
  -p 3000:3000 \
  myapp:prod
```

Docker Interview Preparation Checklist

Fundamentals

  • Explain Docker architecture (client, daemon, registry)

  • Differentiate between images and containers

  • Understand Dockerfile instructions (FROM, RUN, COPY, CMD, ENTRYPOINT)

  • Explain image layers and caching

  • Describe container lifecycle

Dockerfile Best Practices

  • Use specific base image tags

  • Combine RUN commands

  • Use multi-stage builds

  • Run as non-root user

  • Use .dockerignore

  • Order layers for cache optimization

Networking

  • Explain bridge, host, overlay networks

  • Demonstrate port mapping

  • Configure container communication

  • Use custom networks

Storage

  • Explain volumes vs bind mounts

  • Create and use named volumes

  • Backup and restore volumes

Docker Compose

  • Write docker-compose.yml

  • Use environment variables

  • Configure depends_on and health checks

  • Manage multi-container applications

Security

  • Drop unnecessary capabilities

  • Use read-only root filesystem

  • Set resource limits

  • Scan images for vulnerabilities

  • Never embed secrets

Troubleshooting

  • Use docker logs

  • Use docker exec

  • Inspect containers

  • Debug exit codes

  • Resolve network issues


Practice Questions

  1. Containerize a Python Flask application with dependencies.

  2. Create a Docker Compose setup for a WordPress site with MySQL.

  3. Optimize a Node.js Docker image from 1 GB to under 200 MB.

  4. Debug a container that exits with error "Cannot find module".

  5. Set up a development environment with live reload using bind mounts.

  6. Create a production Dockerfile with security hardening.

  7. Implement a CI/CD pipeline that builds and pushes Docker images.

  8. Configure health checks for a web application container.


Learn More

Practice Docker interview questions and hands-on projects in our interactive labs:
https://devops.trainwithsky.com/
