Kubernetes Fundamentals: What is Kubernetes? K8s Architecture (Master & Worker Nodes) and Control Plane Components
📅 Published: Feb 2026
⏱️ Estimated Reading Time: 18 minutes
🏷️ Tags: Kubernetes, K8s Architecture, Container Orchestration, Control Plane, Worker Nodes, DevOps
Introduction: The Container Orchestration Problem
Docker changed how we package and run applications. But running containers at scale creates new problems:
Where do you run your containers?
How do you schedule them across multiple machines?
What happens when a container dies?
How do you scale from 10 to 1000 containers?
How do you update containers without downtime?
How do containers find and talk to each other?
Kubernetes solves these problems. It is a platform for automating deployment, scaling, and operations of containerized applications across clusters of machines.
This guide covers the fundamentals: what Kubernetes is, its architecture, and the components that make it work.
Part 1: What is Kubernetes?
The Simple Definition
Kubernetes (often called K8s) is an open-source platform for managing containerized workloads and services. It handles scheduling, scaling, service discovery, load balancing, and self-healing.
Think of Kubernetes as an operating system for your data center. Just as an OS manages individual computers, Kubernetes manages clusters of servers. It decides where to run your containers, when to restart them, and how to connect them.
Why Kubernetes?
| Problem | Kubernetes Solution |
|---|---|
| Container dies | Automatically restarts it |
| Traffic increases | Automatically scales up replicas |
| Node fails | Reschedules containers on healthy nodes |
| New version released | Rolling update with zero downtime |
| Containers need to communicate | Built-in service discovery and load balancing |
| Configuration management | ConfigMaps and Secrets |
What Kubernetes Is Not
Kubernetes is not:
A traditional PaaS (though it can be used to build one)
A serverless framework (though it can run serverless workloads)
A configuration management tool (use Ansible, Chef, Puppet alongside it)
A complete CI/CD platform (though it integrates with them)
Kubernetes focuses on container orchestration. Other tools handle building images, managing code, and provisioning infrastructure.
Part 2: The Origin of the Name
Kubernetes comes from the Greek word for "helmsman" or "pilot". The name reflects its role in steering containerized applications.
The abbreviation K8s comes from counting the eight letters between K and s. This pattern (first letter, number of letters, last letter) is common for long technical names: i18n (internationalization), a11y (accessibility).
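The counting rule behind these numeronyms is simple enough to express in code; a quick illustration (not part of Kubernetes itself):

```python
def numeronym(word: str) -> str:
    """First letter + count of middle letters + last letter."""
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("kubernetes"))            # k8s
print(numeronym("internationalization"))  # i18n
print(numeronym("accessibility"))         # a11y
```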
Part 3: Kubernetes Architecture Overview
The Cluster
A Kubernetes cluster consists of two types of nodes:
Control Plane (Master Nodes): Manage the cluster
Worker Nodes: Run your applications
```
┌──────────────────────────────────────────────────────────────┐
│                      Kubernetes Cluster                      │
│                                                              │
│  ┌────────────────────────────────────────────────────────┐  │
│  │                     Control Plane                      │  │
│  │  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐  │  │
│  │  │   API    │ │Scheduler │ │Controller│ │   etcd   │  │  │
│  │  │  Server  │ │          │ │ Manager  │ │          │  │  │
│  │  └──────────┘ └──────────┘ └──────────┘ └──────────┘  │  │
│  └────────────────────────────────────────────────────────┘  │
│               │              │              │                │
│               ▼              ▼              ▼                │
│  ┌─────────────┐    ┌─────────────┐    ┌─────────────┐       │
│  │   Worker    │    │   Worker    │    │   Worker    │       │
│  │    Node     │    │    Node     │    │    Node     │       │
│  │ ┌─────────┐ │    │ ┌─────────┐ │    │ ┌─────────┐ │       │
│  │ │   Pod   │ │    │ │   Pod   │ │    │ │   Pod   │ │       │
│  │ │Container│ │    │ │Container│ │    │ │Container│ │       │
│  │ └─────────┘ │    │ └─────────┘ │    │ └─────────┘ │       │
│  │ ┌─────────┐ │    │ ┌─────────┐ │    │ ┌─────────┐ │       │
│  │ │   Pod   │ │    │ │   Pod   │ │    │ │   Pod   │ │       │
│  │ │Container│ │    │ │Container│ │    │ │Container│ │       │
│  │ └─────────┘ │    │ └─────────┘ │    │ └─────────┘ │       │
│  └─────────────┘    └─────────────┘    └─────────────┘       │
└──────────────────────────────────────────────────────────────┘
```
Control Plane vs Worker Nodes
| Aspect | Control Plane | Worker Nodes |
|---|---|---|
| Purpose | Manages the cluster | Runs applications |
| Components | API server, scheduler, controller manager, etcd | Kubelet, kube-proxy, container runtime |
| User interaction | Yes (via kubectl) | No |
| Runs your workloads | No (ideally) | Yes |
| Fault tolerance | Multiple masters for HA | Many workers for scale |
Part 4: Control Plane Components
The control plane makes global decisions about the cluster. It detects and responds to cluster events.
API Server (kube-apiserver)
The API server is the front door to Kubernetes. All administrative tasks go through it. You interact with it using kubectl or REST API calls.
What it does:
Exposes the Kubernetes API
Processes REST operations (create, read, update, delete)
Validates and configures API objects
Handles authentication and authorization
Why it matters: The API server is the only component that talks to etcd. All other components communicate through it.
```bash
# These commands all go through the API server
kubectl get pods
kubectl create deployment nginx --image=nginx
kubectl delete pod my-pod
```
etcd
etcd is a distributed key-value store. It holds the entire configuration and state of the cluster.
What it stores:
Cluster state
Node information
Pod definitions
ConfigMaps and Secrets
Service definitions
Deployment states
Why it matters: etcd is the source of truth. If etcd fails, the cluster loses its state. Always back up etcd.
```bash
# Backup etcd
ETCDCTL_API=3 etcdctl snapshot save snapshot.db

# Restore from backup
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db
```
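Internally, etcd presents a flat key-value space where every object lives under a hierarchical key. A toy model of that key scheme (the exact paths are an implementation detail of Kubernetes and are shown here only for illustration):

```python
# Toy model of etcd's key-value space. Real keys live under
# /registry/<resource>/<namespace>/<name>; values are serialized objects.
store = {
    "/registry/pods/default/my-pod": {"phase": "Running"},
    "/registry/services/default/nginx-service": {"clusterIP": "10.96.0.10"},
    "/registry/deployments/default/webapp": {"replicas": 3},
}

def list_prefix(store, prefix):
    """etcd range reads fetch all keys sharing a prefix."""
    return {k: v for k, v in store.items() if k.startswith(prefix)}

print(sorted(list_prefix(store, "/registry/pods/")))
# ['/registry/pods/default/my-pod']
```

Prefix reads like this are how components list "all pods" or "all services" efficiently.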
Scheduler (kube-scheduler)
The scheduler decides which worker node runs each new pod.
What it considers:
Resource requirements (CPU, memory)
Node availability
Affinity and anti-affinity rules
Taints and tolerations
Pod priority
How it works:
Watches for unscheduled pods
Filters nodes that meet requirements
Scores remaining nodes
Binds pod to the highest-scoring node
```yaml
# Pod with resource requests - scheduler uses these
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "256Mi"
        cpu: "500m"
```
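The filter-then-score flow above can be sketched in a few lines of Python. The scoring rule here is a deliberate simplification (most free resources wins); the real scheduler combines many weighted plugins:

```python
# Minimal sketch of filter-then-score scheduling.
nodes = [
    {"name": "node-1", "free_cpu_m": 200,  "free_mem_mi": 1024},
    {"name": "node-2", "free_cpu_m": 1500, "free_mem_mi": 512},
    {"name": "node-3", "free_cpu_m": 2000, "free_mem_mi": 4096},
]
pod_request = {"cpu_m": 500, "mem_mi": 256}

def schedule(pod, nodes):
    # 1. Filter: drop nodes that cannot fit the pod's requests.
    feasible = [n for n in nodes
                if n["free_cpu_m"] >= pod["cpu_m"]
                and n["free_mem_mi"] >= pod["mem_mi"]]
    if not feasible:
        return None  # pod stays Pending
    # 2. Score: prefer the node with the most free resources.
    return max(feasible, key=lambda n: n["free_cpu_m"] + n["free_mem_mi"])

print(schedule(pod_request, nodes)["name"])  # node-3
```

If no node passes the filter step, the pod simply remains Pending until capacity appears.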
Controller Manager (kube-controller-manager)
The controller manager runs controller processes that regulate the cluster state.
What controllers do:
Node Controller: Detects and responds to node failures
Replication Controller: Maintains correct number of pod replicas
Endpoint Controller: Populates Service endpoints
Service Account Controller: Creates default service accounts
Deployment Controller: Manages rolling updates
How controllers work:
Each controller watches the desired state (from API server) and current state (from API server). If they don't match, the controller takes action to reconcile them.
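This watch-and-reconcile pattern can be sketched as a single pass of a replica controller (a deliberately simplified model; real controllers act through the API server, not directly):

```python
def reconcile(desired_replicas, current_pods):
    """One pass of a replica controller's reconcile loop."""
    diff = desired_replicas - len(current_pods)
    if diff > 0:
        # Too few pods: create the missing ones.
        return [f"create pod #{i}" for i in range(diff)]
    if diff < 0:
        # Too many pods: delete the surplus.
        return [f"delete {p}" for p in current_pods[:-diff]]
    return []  # desired == current: nothing to do

print(reconcile(3, ["pod-a"]))                 # two creates
print(reconcile(3, ["a", "b", "c", "d"]))      # one delete
```

The real loop runs continuously, so transient failures are corrected on the next pass.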
```
Desired State (etcd) → Controller → Action → Current State
        ↑                                          │
        └──────────────────────────────────────────┘
                     (reconcile loop)
```

Cloud Controller Manager
In cloud environments, the cloud controller manager interfaces with the cloud provider's API.
What it handles:
Node management (detecting cloud VMs)
Route management (setting up network routes)
Service management (creating load balancers)
Volume management (provisioning storage)
This component is specific to cloud providers (AWS, GCP, Azure, etc.).
Part 5: Worker Node Components
Worker nodes run your applications. Each worker node has components that communicate with the control plane.
Kubelet
The kubelet is the primary node agent. It runs on every worker node and ensures containers are running in pods.
What it does:
Registers node with the cluster
Watches for pod assignments from API server
Pulls container images
Starts and stops containers
Reports node and pod status back to control plane
```bash
# Check kubelet status on a node
systemctl status kubelet
journalctl -u kubelet -f
```
Container Runtime
The container runtime runs the actual containers. Kubernetes supports multiple runtimes:
containerd (most common)
CRI-O
Docker Engine (built-in support was removed in Kubernetes 1.24; it now requires the cri-dockerd adapter)
The runtime pulls images and starts containers.
Kube-proxy
Kube-proxy manages network rules on each node. It enables service discovery and load balancing.
What it does:
Maintains network rules for services
Forwards traffic to the correct pod
Performs load balancing across pods
Handles service IP to pod IP translation
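The service-IP-to-pod-IP translation can be sketched as a lookup plus load balancing. Round-robin is used here for clarity; the actual iptables/IPVS rules pick a backend effectively at random per connection:

```python
from itertools import cycle

# Toy service table: stable service IP -> rotating list of backing pod IPs.
endpoints = {
    "10.96.0.10": cycle(["10.244.1.5", "10.244.2.7", "10.244.3.9"]),
}

def forward(service_ip):
    """Pick the next backend pod for a service IP."""
    return next(endpoints[service_ip])

print([forward("10.96.0.10") for _ in range(4)])
# ['10.244.1.5', '10.244.2.7', '10.244.3.9', '10.244.1.5']
```

The key property: clients only ever see the stable service IP, while the pod IPs behind it can change freely.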
Proxy modes:
| Mode | Description | Performance |
|---|---|---|
| iptables | Default, uses Linux iptables rules | Good |
| ipvs | Uses Linux IPVS, better for large clusters | Better |
| userspace | Legacy, forwards through userspace | Poor |
Part 6: Pods - The Smallest Deployable Unit
Before understanding Kubernetes fully, you need to understand pods. A pod is the smallest deployable unit in Kubernetes.
What is a Pod?
A pod is a group of one or more containers that share:
Network namespace (same IP address)
Storage volumes
Lifecycle
Containers in the same pod can communicate via localhost and share data through volumes.
```
┌─────────────────────────────┐
│             Pod             │
│  ┌──────────┐ ┌──────────┐  │
│  │Container │ │Container │  │
│  │  (App)   │ │(Sidecar) │  │
│  └──────────┘ └──────────┘  │
│                             │
│  Shared:                    │
│  - IP address               │
│  - Storage volumes          │
│  - Network namespace        │
└─────────────────────────────┘
```
Why Pods Instead of Direct Containers?
Pods allow closely related processes to co-locate and share resources. Common pod patterns:
Sidecar: Helper container (logging, monitoring) alongside main app
Ambassador: Proxy to external services
Adapter: Transform output from main container
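A minimal manifest for the sidecar pattern might look like this (the image names and shared volume are illustrative, not a specific product recommendation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}          # shared scratch volume, lives as long as the pod
  containers:
  - name: app
    image: myapp:latest         # main application writes logs here
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper
    image: log-shipper:latest   # sidecar reads the same volume
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```

Both containers share the pod's IP and the `logs` volume, which is exactly what makes the sidecar pattern work.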
Pod Lifecycle
| Phase | Description |
|---|---|
| Pending | Pod accepted, waiting for node |
| Running | Pod bound to node, containers running |
| Succeeded | All containers terminated successfully |
| Failed | All containers terminated with error |
| Unknown | Pod state unknown (node communication failure) |
Part 7: Common Kubernetes Objects
Deployment
Manages rolling updates and rollbacks for a set of pods.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80
```
Service
Provides stable network access to a set of pods.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
```
ConfigMap
Stores non-sensitive configuration data.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database.url: postgres://db:5432
  log.level: info
```
Secret
Stores sensitive data (base64 encoded, not encrypted by default).
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  password: c3VwZXJzZWNyZXQ=  # base64 of "supersecret"
```
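You can verify the encoding yourself, which also demonstrates why base64 is an encoding, not encryption:

```python
import base64

# Encoding the plaintext produces the value stored in the Secret.
encoded = base64.b64encode(b"supersecret").decode()
print(encoded)  # c3VwZXJzZWNyZXQ=

# Anyone who can read the Secret can decode it just as easily.
print(base64.b64decode(encoded).decode())  # supersecret
```

For real protection, enable encryption at rest for etcd and restrict Secret access with RBAC.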
Part 8: Basic kubectl Commands
```bash
# Cluster information
kubectl cluster-info
kubectl get nodes
kubectl get componentstatuses

# Resources
kubectl get pods
kubectl get deployments
kubectl get services
kubectl get configmaps
kubectl get secrets
kubectl get all

# Create resources
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Apply from file
kubectl apply -f manifest.yaml

# Delete resources
kubectl delete pod my-pod
kubectl delete -f manifest.yaml

# Debugging
kubectl logs my-pod
kubectl logs my-pod -c my-container
kubectl exec -it my-pod -- /bin/bash
kubectl describe pod my-pod
```
Real-World Scenarios
Scenario 1: Application Deployment
A company deploys a web application with 3 replicas. Kubernetes ensures 3 pods are always running.
Deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: app
        image: myapp:latest
        ports:
        - containerPort: 8080
```
What happens:
API server receives deployment request
Controller manager creates replica set
Scheduler assigns pods to nodes
Kubelet on each node starts containers
Scenario 2: Node Failure
A worker node fails. The pods on that node are lost.
Kubernetes response:
Node controller detects the node failure (heartbeats stop arriving)
After the grace period (default 40 seconds), the node is marked NotReady
After the pod eviction timeout (default 5 minutes), the pods are marked for deletion and their controllers create replacements
Scheduler assigns the replacement pods to healthy nodes
Kubelet on those nodes starts the containers
Result: Application recovers automatically.
Scenario 3: Rolling Update
A new version of the application is deployed.
```bash
kubectl set image deployment/webapp app=myapp:v2
```
Kubernetes response:
Deployment controller creates new replica set
New pods start with v2 image
Health checks verify new pods are ready
Old pods are terminated gradually
If problems occur, update stops
Result: Zero-downtime update.
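The gradual replacement above can be sketched as a simplified model with one pod replaced at a time; the real deployment controller also honors `maxSurge`, `maxUnavailable`, and readiness probes:

```python
def rolling_update(pods, new_version, is_healthy):
    """Replace pods one at a time, halting if a new pod is unhealthy."""
    for i in range(len(pods)):
        candidate = f"{new_version}-{i}"
        if not is_healthy(candidate):
            return pods  # update halts; remaining old pods keep serving
        pods[i] = candidate  # old pod terminated, new pod in place
    return pods

print(rolling_update(["v1-0", "v1-1", "v1-2"], "v2", lambda p: True))
# ['v2-0', 'v2-1', 'v2-2']
```

Because old pods are only removed after their replacements pass the health check, traffic is served throughout the update.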
Summary
| Component | Purpose | Location |
|---|---|---|
| API Server | Cluster front door | Control Plane |
| etcd | Cluster state storage | Control Plane |
| Scheduler | Pod placement | Control Plane |
| Controller Manager | State reconciliation | Control Plane |
| Kubelet | Node agent | Worker Nodes |
| Container Runtime | Runs containers | Worker Nodes |
| Kube-proxy | Network rules | Worker Nodes |
| Pod | Group of containers | Worker Nodes |
Kubernetes is a complex system, but its core concepts are straightforward:
Control plane manages
Worker nodes run
Pods contain containers
Controllers maintain desired state
Services enable communication
Understanding this architecture is the foundation for all Kubernetes work.
Practice Questions
What are the main components of the Kubernetes control plane?
What is the role of the scheduler in Kubernetes?
Why does Kubernetes use pods instead of running containers directly?
What happens when a worker node fails?
How does Kubernetes ensure the desired number of pod replicas is maintained?
Learn More
Practice Kubernetes fundamentals with hands-on exercises in our interactive labs:
https://devops.trainwithsky.com/