Docker Fundamentals: What is Docker, Containers vs Virtual Machines, and Docker Architecture
📅 Published: Feb 2026
⏱️ Estimated Reading Time: 16 minutes
🏷️ Tags: Docker, Containers, Virtualization, DevOps, Containerization
Introduction: The Problem Docker Solves
The "It Works on My Machine" Problem
Every developer has experienced this frustration. You write code on your laptop. It runs perfectly. You push it to the team repository. Your colleague pulls it down, and it fails. The operating system is different. The dependencies are different. The configuration is different.
This is the environment inconsistency problem. It causes wasted hours, deployment failures, and production incidents.
Docker solves this problem by packaging applications with everything they need to run. The application runs the same way on your laptop, your colleague's machine, your test environment, and your production servers.
What is Docker?
The Simple Definition
Docker is a platform for developing, shipping, and running applications in containers. A container is a lightweight, standalone, executable package that includes everything needed to run software: code, runtime, system tools, libraries, and settings.
Think of Docker as a shipping container for software. Just as a shipping container can hold any cargo and be moved between ships, trains, and trucks without repacking, a Docker container can hold any application and run on any system that supports Docker.
What Docker Provides
Consistency
An application in a Docker container runs the same way everywhere. No more "it works on my machine."
Isolation
Containers are isolated from each other and from the host system. They have their own filesystem, network, and process space.
Portability
A container can run on any system with Docker: your laptop, a data center server, or any cloud provider.
Efficiency
Containers share the host operating system kernel. They are lightweight and start in milliseconds.
Containers vs Virtual Machines
The Traditional Virtual Machine
A virtual machine (VM) emulates an entire computer. It includes a full operating system, virtual hardware, and the application. Each VM runs its own operating system kernel.
When you run a VM, you are allocating dedicated resources: CPU, memory, and storage. The hypervisor manages the virtual hardware and schedules the VMs.
Virtual Machine Characteristics:
Full operating system per VM
Boots in minutes
GBs in size
Complete isolation
Resource overhead
Slower startup
The Docker Container
A container packages the application and its dependencies but shares the host operating system kernel. Instead of virtualizing hardware, containers virtualize the operating system.
When you run a container, it runs as an isolated process on the host system. The container engine manages the isolation using Linux kernel features.
Container Characteristics:
Shares host kernel
Starts in milliseconds
MBs in size
Process-level isolation
Minimal overhead
Fast startup
Side-by-Side Comparison
| Aspect | Virtual Machine | Docker Container |
|---|---|---|
| Operating System | Full OS per VM | Shares host OS |
| Startup Time | Minutes | Milliseconds |
| Disk Usage | GBs | MBs |
| Memory Usage | High (dedicated) | Low (shared) |
| Isolation | Complete hardware isolation | Process isolation |
| Portability | VM format dependent | Any Docker host |
| Use Cases | Multiple OS types, legacy apps | Microservices, cloud-native apps |
When to Use Each
Use Virtual Machines When:
You need to run different operating systems (Windows and Linux on same host)
You require complete hardware-level isolation
You are running legacy applications that cannot be containerized
You need a different kernel version or kernel-level changes (custom kernels, kernel modules)
Use Docker Containers When:
You are building microservices
You need rapid deployment and scaling
You want to maximize server density
You are developing cloud-native applications
You need consistent development and production environments
Docker Architecture
The Client-Server Model
Docker uses a client-server architecture. The Docker client communicates with the Docker daemon. The daemon does the heavy lifting: building, running, and managing containers.
```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Docker    │────▶│   Docker    │────▶│ Containers  │
│   Client    │     │   Daemon    │     │             │
└─────────────┘     └─────────────┘     └─────────────┘
```
Docker Client
The client is the command-line interface you interact with. When you type `docker run`, the client sends this command to the daemon. The client can be on the same machine as the daemon or connect remotely.
Docker Daemon
The daemon (dockerd) listens for API requests and manages Docker objects: images, containers, networks, and volumes. It can also communicate with other daemons to manage Docker services.
Key Docker Components
Dockerfile
A Dockerfile is a text file with instructions for building a Docker image. It defines the base operating system, installs dependencies, copies code, and sets the command to run.
```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y nginx
COPY ./app /var/www/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
Docker Image
An image is a read-only template for creating containers. It contains the application code, libraries, dependencies, and configuration. Images are built from Dockerfiles. You can share images through registries.
Think of an image as a class definition in programming. It defines what the container will contain but does not run anything itself.
Docker Container
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container. Each container is isolated from others. Containers are lightweight and can be created quickly.
Think of a container as an instance of a class. It is the running version of the image.
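The class/instance analogy can be made concrete with a short Python sketch. The `Image` and `Container` classes below are purely illustrative, not Docker APIs: the point is that one read-only template can back many independently running, independently stateful instances.

```python
class Image:
    """Read-only template: fixed name and start command (like a Docker image)."""
    def __init__(self, name, command):
        self.name = name
        self.command = command


class Container:
    """Runnable instance created from an image, with its own lifecycle state."""
    def __init__(self, image):
        self.image = image
        self.status = "created"

    def start(self):
        self.status = "running"

    def stop(self):
        self.status = "stopped"


# One image, many independent containers:
nginx = Image("nginx:latest", ["nginx", "-g", "daemon off;"])
web1 = Container(nginx)
web2 = Container(nginx)
web1.start()

print(web1.status)  # running
print(web2.status)  # created -- each container has independent state
```

Starting or stopping `web1` never touches `web2`, and neither ever modifies the shared `nginx` template, which mirrors how containers relate to their image.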
Docker Registry
A registry stores Docker images. Docker Hub is the default public registry. Organizations can run private registries. You pull images from registries to run containers. You push images to registries to share them.
Underlying Technology
Docker containers are built on Linux kernel features. Understanding these helps explain how containers achieve isolation without the overhead of virtual machines.
Namespaces
Namespaces provide isolation for running processes. Docker uses several namespaces:
PID namespace: Processes inside a container cannot see processes outside
NET namespace: Each container gets its own network stack
MNT namespace: Each container has its own filesystem view
USER namespace: A process can run as root inside the container while mapping to an unprivileged user on the host
Control Groups (cgroups)
Cgroups limit and account for resource usage. Docker uses cgroups to control how much CPU, memory, disk I/O, and network bandwidth each container can use. Without cgroups, one container could consume all host resources.
Union Filesystems
Union filesystems allow multiple filesystems to be layered. Docker uses overlayfs or similar technologies to build images in layers. This makes images efficient to store and fast to build.
The Layered Filesystem
Docker images are built in layers. Each instruction in a Dockerfile creates a new layer. Layers are cached and reused.
```
┌─────────────────┐
│    Container    │ ← Writable layer
├─────────────────┤
│   Layer: app    │ ← Copy app code
├─────────────────┤
│   Layer: deps   │ ← Install dependencies
├─────────────────┤
│   Layer: base   │ ← Base OS
└─────────────────┘
```
When you run a container from an image, Docker adds a writable layer on top. Changes made in the container do not affect the underlying image. This makes containers fast to start and efficient to store.
If you run ten containers from the same image, they share the image layers. Each container gets its own writable layer. This is much more efficient than having ten copies of the entire image.
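The layer-sharing behavior can be modeled with Python's `collections.ChainMap`: lookups fall through the stack of maps, while writes land only in the first map. This is a toy model of copy-on-write layering, not how overlayfs is actually implemented; the paths and contents are illustrative.

```python
from collections import ChainMap

# Shared, read-only "image layers" (base OS, dependencies, app code).
base = {"/etc/os-release": "ubuntu 22.04"}
deps = {"/usr/sbin/nginx": "nginx binary"}
app = {"/var/www/html/index.html": "v1"}

# Each "container" gets its own empty writable layer on top of the SAME layers.
c1 = ChainMap({}, app, deps, base)
c2 = ChainMap({}, app, deps, base)

# Writes go only to the container's writable layer (the first map).
c1["/var/www/html/index.html"] = "patched"

print(c1["/var/www/html/index.html"])   # patched (found in c1's writable layer)
print(c2["/var/www/html/index.html"])   # v1 (still served by the shared layer)
print(app["/var/www/html/index.html"])  # v1 -- the image layer never changes
```

Ten `ChainMap`s over the same three dicts store those dicts once, just as ten containers from one image share its layers and differ only in their writable tops.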
Docker Workflow
Basic Workflow
The typical Docker workflow has five steps:
1. Write a Dockerfile
Create a Dockerfile that describes how to build your application image.
2. Build an image
Run `docker build -t myapp .` to create an image from the Dockerfile.
3. Run a container
Run `docker run myapp` to start a container from the image.
4. Test and debug
Use `docker logs`, `docker exec`, and other commands to interact with running containers.
5. Push to registry
Run `docker push myregistry/myapp` to share the image.
Development vs Production
In development, you often mount your source code into the container. This lets you edit code and see changes without rebuilding the image.
```shell
docker run -v $(pwd):/app -p 8080:8080 myapp
```
In production, you build a complete image with the code included. You do not mount code volumes. You push the image to a registry and pull it to production servers.
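A production-style Dockerfile for that workflow might look like the sketch below. The base image, file names, and port are illustrative; the point is that the code is copied in at build time and dependency installation is layered above the code so it stays cached across code-only changes.

```dockerfile
# Illustrative production build: code is baked into the image, no volume mounts.
FROM node:20-slim
WORKDIR /app
# Install dependencies first so this layer is cached when only code changes.
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application code into the image itself.
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```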
Common Docker Commands
| Command | Purpose |
|---|---|
| `docker build -t myapp .` | Build an image from a Dockerfile |
| `docker run myapp` | Run a container from an image |
| `docker ps` | List running containers |
| `docker ps -a` | List all containers, including stopped ones |
| `docker stop container_id` | Stop a running container |
| `docker rm container_id` | Remove a stopped container |
| `docker images` | List images |
| `docker rmi image_id` | Remove an image |
| `docker pull ubuntu` | Pull an image from a registry |
| `docker push myregistry/myapp` | Push an image to a registry |
| `docker exec -it container bash` | Open a shell in a running container |
| `docker logs container` | View container logs |
Real-World Scenarios
Scenario 1: Development Environment Consistency
A team of five developers works on a Python web application. Each developer has a different operating system: macOS, Windows, Ubuntu.
Before Docker:
Setting up the development environment takes hours
Dependencies work on one machine but not another
Python version mismatches cause bugs
"It works on my machine" is a daily occurrence
With Docker:
The team creates a Dockerfile with Python version and dependencies
Each developer runs the same container
The application runs identically on all machines
New team members are productive in minutes
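A shared Dockerfile along these lines (versions and file names are illustrative) is what pins the Python version and dependencies so every developer runs an identical environment:

```dockerfile
# Pin the interpreter so it is identical on macOS, Windows, and Ubuntu hosts.
FROM python:3.12-slim
WORKDIR /app
# Pin dependencies in requirements.txt and bake them into the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```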
Scenario 2: Microservices Architecture
A company is building a microservices application with ten services. Each service has its own dependencies and configuration.
Before Docker:
Running ten services on one machine requires managing port conflicts
Services interfere with each other's dependencies
Starting all services requires running ten commands
Testing a change to one service risks breaking others
With Docker:
Each service runs in its own container
Docker Compose starts all services with one command
Services are isolated; changes to one do not affect others
Port conflicts are eliminated
Each service can use different language versions or frameworks
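As a sketch, a docker-compose.yml for two of the ten services might look like this (service names, images, and ports are illustrative):

```yaml
services:
  api:
    build: ./api
    ports:
      - "8080:8080"   # each service maps its own host port, so no conflicts
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

With a file like this, `docker compose up` starts every service with one command, and `docker compose down` tears them all down again.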
Scenario 3: CI/CD Pipeline
A company deploys applications to production weekly. Each deployment requires building and testing in a clean environment.
Before Docker:
CI servers must have all dependencies pre-installed
Build agents become polluted with old versions
Reproducing build failures is difficult
Production environment differs from build environment
With Docker:
The CI pipeline builds the application in a container
Each build starts from a clean image
The same image used in CI is deployed to production
Build failures are easy to reproduce locally
Production and development environments match exactly
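As one possible sketch of such a pipeline (GitHub Actions syntax; the registry name, image name, and test command are illustrative), the same image is built from a clean base, tested, and pushed, so production deploys exactly what CI verified:

```yaml
# Illustrative CI job: build once, test the built image, push the same image.
name: build-and-push
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the image from a clean base
        run: docker build -t myregistry/myapp:${{ github.sha }} .
      - name: Run tests inside the freshly built image
        run: docker run --rm myregistry/myapp:${{ github.sha }} npm test
      - name: Push the tested image
        run: docker push myregistry/myapp:${{ github.sha }}
```

Tagging the image with the commit SHA makes any build failure reproducible locally: pull the same tag and run the same command.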
Summary
Docker has transformed how applications are built, shipped, and run. The key concepts to remember:
Containers package applications with their dependencies
Images are read-only templates for containers
Dockerfiles define how to build images
Registries store and share images
Namespaces and cgroups provide isolation and resource limits
Docker is not a replacement for virtual machines. It is a different tool for a different purpose. VMs provide hardware isolation and run different operating systems. Containers provide process isolation, efficiency, and consistency.
Practice Questions
Explain the difference between a Docker image and a Docker container.
Why are containers more efficient than virtual machines?
What are the three key Linux kernel features that enable Docker containers?
How does Docker solve the "it works on my machine" problem?
When would you choose a virtual machine over a container?
Learn More
Practice Docker fundamentals with hands-on exercises in our interactive labs:
https://devops.trainwithsky.com/