Core Kubernetes Objects: Pods, ReplicaSets, Deployments, and Namespaces
📅 Published: Feb 2026
⏱️ Estimated Reading Time: 18 minutes
🏷️ Tags: Kubernetes, Pods, ReplicaSets, Deployments, Namespaces, Container Orchestration
Introduction: Building Blocks of Kubernetes
Kubernetes objects are the persistent entities that represent your containerized applications. They describe what containers to run, how many replicas to maintain, what network access to provide, and how to manage updates.
Think of Kubernetes objects as blueprints. You write a YAML file describing your desired state, and Kubernetes works to make that state a reality.
This guide covers the four most essential Kubernetes objects: Pods, ReplicaSets, Deployments, and Namespaces. Understanding these objects is fundamental to using Kubernetes effectively.
Part 1: Pods - The Smallest Deployable Unit
What is a Pod?
A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in your cluster. A Pod contains one or more containers that share:
Network namespace (same IP address)
Storage volumes
Lifecycle
Containers within the same Pod can communicate via localhost and share data through volumes.
```
┌─────────────────────────────────┐
│               Pod               │
│  ┌───────────┐   ┌───────────┐  │
│  │ Container │   │ Container │  │
│  │  (Main)   │   │ (Sidecar) │  │
│  └───────────┘   └───────────┘  │
│                                 │
│  Shared:                        │
│  • IP address: 10.244.1.5       │
│  • Storage volumes              │
│  • Network namespace            │
└─────────────────────────────────┘
```
Why Pods Instead of Direct Containers?
Kubernetes does not run containers directly. It runs Pods. This abstraction allows:
Multi-container collaboration:
Two containers that need to work together can run in the same Pod. They can communicate over localhost and share files.
Sidecar pattern:
A helper container (logging, monitoring, proxy) runs alongside your main application container in the same Pod.
Resource sharing:
Containers in the same Pod share resources efficiently.
Pod Lifecycle Phases
| Phase | Description |
|---|---|
| Pending | Pod accepted by cluster, waiting for node or container creation |
| Running | Pod bound to node, all containers running |
| Succeeded | All containers terminated successfully (batch jobs) |
| Failed | All containers terminated with error |
| Unknown | Pod state unknown (node communication failure) |
Pod YAML Example
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
    environment: production
spec:
  containers:
  - name: nginx
    image: nginx:1.24
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"
```
Multi-Container Pod Example
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app-with-sidecar
spec:
  containers:
  - name: web-app
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper
    image: fluentd
    volumeMounts:
    - name: logs
      mountPath: /logs
    command: ["fluentd", "-c", "/fluentd/etc/fluent.conf"]
  volumes:
  - name: logs
    emptyDir: {}
```
Pod Operations
```bash
# Create a Pod
kubectl run nginx --image=nginx

# Create from YAML
kubectl apply -f pod.yaml

# List Pods
kubectl get pods
kubectl get pods -o wide
kubectl get pods -l app=nginx

# Describe a Pod
kubectl describe pod nginx-pod

# View logs
kubectl logs nginx-pod
kubectl logs nginx-pod -c sidecar-container

# Execute a command inside the Pod
kubectl exec -it nginx-pod -- /bin/bash

# Delete a Pod
kubectl delete pod nginx-pod
```
Part 2: ReplicaSets - Maintaining Pod Count
What is a ReplicaSet?
A ReplicaSet ensures that a specified number of Pod replicas are running at all times. If a Pod crashes or is deleted, the ReplicaSet creates a new one.
Think of a ReplicaSet as a supervisor that constantly checks: "Are there enough Pods running?" If not, it creates more. If there are too many, it terminates extras.
```
┌─────────────────────────────────────────────────────────┐
│                       ReplicaSet                        │
│                  desired replicas: 3                    │
│                                                         │
│   ┌─────────┐       ┌─────────┐       ┌─────────┐       │
│   │  Pod 1  │       │  Pod 2  │       │  Pod 3  │       │
│   │ (nginx) │       │ (nginx) │       │ (nginx) │       │
│   └─────────┘       └─────────┘       └─────────┘       │
│                                                         │
│   If Pod 2 dies → ReplicaSet creates a new Pod          │
└─────────────────────────────────────────────────────────┘
```
Why ReplicaSets?
Self-healing: Automatically replaces failed Pods
Scalability: Easy to increase or decrease replica count
Reliability: Maintains minimum available Pods
Load distribution: Multiple Pods share traffic
ReplicaSet vs Pod
| Aspect | Pod | ReplicaSet |
|---|---|---|
| Purpose | Run a container | Maintain Pod count |
| Self-healing | No (if Pod dies, it's gone) | Yes (creates new Pod) |
| Scaling | Manual | Easy (change replica count) |
| Use case | Learning and debugging (direct management not recommended) | Production workloads (usually via a Deployment) |
ReplicaSet YAML Example
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80
```
How Selectors Work
The selector defines which Pods the ReplicaSet manages. The Pod template's labels must match the selector.
```yaml
selector:
  matchLabels:
    app: nginx       # Only Pods with this label
    tier: backend    # AND this label (optional)
```
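Beyond simple equality matching with `matchLabels`, the selector also supports a more expressive `matchExpressions` form with the operators `In`, `NotIn`, `Exists`, and `DoesNotExist`. A sketch of a roughly equivalent selector:

```yaml
# Selects Pods whose app label is nginx AND that carry a tier label
selector:
  matchExpressions:
  - key: app
    operator: In
    values: ["nginx"]
  - key: tier
    operator: Exists
```

Deployments use the same selector structure, so the same form works there too.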
ReplicaSet Operations
```bash
# Create a ReplicaSet
kubectl apply -f replicaset.yaml

# Get ReplicaSets
kubectl get replicasets
kubectl get rs

# Scale a ReplicaSet
kubectl scale replicaset nginx-replicaset --replicas=5

# Edit a ReplicaSet
kubectl edit replicaset nginx-replicaset

# Delete a ReplicaSet (deletes all managed Pods)
kubectl delete replicaset nginx-replicaset
```
Important Note
You rarely use ReplicaSets directly in production. Deployments manage ReplicaSets for you, adding features like rolling updates and rollbacks.
Part 3: Deployments - Managing Application Lifecycle
What is a Deployment?
A Deployment provides declarative updates for Pods and ReplicaSets. It is the standard way to manage stateless applications in Kubernetes.
Think of a Deployment as a higher-level controller that manages ReplicaSets and provides:
Rolling updates: Update Pods gradually with zero downtime
Rollbacks: Revert to previous versions
Scaling: Change number of replicas
Pause/resume: Control update process
```
┌─────────────────────────────────────────────────────────────┐
│                         Deployment                          │
│                                                             │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                  ReplicaSet (v1)                    │    │
│  │      ┌─────┐       ┌─────┐       ┌─────┐            │    │
│  │      │ Pod │       │ Pod │       │ Pod │            │    │
│  │      └─────┘       └─────┘       └─────┘            │    │
│  └─────────────────────────────────────────────────────┘    │
│                          │                                  │
│                          ▼ (rolling update)                 │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                  ReplicaSet (v2)                    │    │
│  │      ┌─────┐       ┌─────┐       ┌─────┐            │    │
│  │      │ Pod │       │ Pod │       │ Pod │            │    │
│  │      └─────┘       └─────┘       └─────┘            │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────┘
```
Deployment YAML Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
```
Deployment Update Strategies
Rolling Update (default)
Gradually replaces old Pods with new ones. Zero downtime.
```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # Extra Pods allowed during the update
      maxUnavailable: 0  # Pods that may be unavailable during the update
```
Recreate
Terminates all old Pods, then creates new ones. Causes downtime.
```yaml
spec:
  strategy:
    type: Recreate
```
Rolling Update Process
```
Step 1: New ReplicaSet created with updated image
Step 2: New Pod starts, becomes ready
Step 3: Old Pod terminated
Step 4: Repeat until all Pods updated

Desired:       3 replicas
During update: 4 replicas total (3 old + 1 new)
After update:  3 replicas (all new)
```
Deployment Operations
```bash
# Create a Deployment
kubectl create deployment nginx --image=nginx
kubectl apply -f deployment.yaml

# Get Deployments
kubectl get deployments
kubectl get deploy

# Scale a Deployment
kubectl scale deployment nginx-deployment --replicas=5

# Update the image (triggers a rolling update)
kubectl set image deployment/nginx-deployment nginx=nginx:1.25

# Check rollout status
kubectl rollout status deployment/nginx-deployment

# Roll back to the previous version
kubectl rollout undo deployment/nginx-deployment

# Roll back to a specific revision
kubectl rollout undo deployment/nginx-deployment --to-revision=2

# View rollout history
kubectl rollout history deployment/nginx-deployment

# Pause an update
kubectl rollout pause deployment/nginx-deployment

# Resume an update
kubectl rollout resume deployment/nginx-deployment
```
Deployment Use Cases
Application deployment
Deploy web servers, API services, and backend applications.
Microservices
Each microservice gets its own Deployment.
Environment separation
Different Deployments for dev, staging, and production (often with different Namespaces).
Canary deployments
Run two Deployments with different versions, route partial traffic to new version.
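One common sketch of the canary idea (resource names and labels here are hypothetical): both Deployments carry a shared `app` label that a Service selects on, while a distinct `track` label keeps their Pods separate. Traffic then splits roughly in proportion to the replica counts.

```yaml
# Stable version: 9 replicas (~90% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: webapp
      track: stable
  template:
    metadata:
      labels:
        app: webapp      # shared label, selected by the Service
        track: stable
    spec:
      containers:
      - name: app
        image: myapp:1.0.0
---
# Canary version: 1 replica (~10% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
      track: canary
  template:
    metadata:
      labels:
        app: webapp      # same shared label
        track: canary
    spec:
      containers:
      - name: app
        image: myapp:2.0.0
```

A Service with the selector `app: webapp` would balance across all ten Pods; scaling the canary up and the stable Deployment down shifts traffic gradually.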
Part 4: Namespaces - Virtual Clusters
What is a Namespace?
Namespaces provide a mechanism for isolating groups of resources within a single cluster. They create virtual clusters within the physical cluster.
Think of Namespaces as separate workspaces for different teams or projects. Resources in one Namespace cannot see resources in another Namespace by default.
```
┌─────────────────────────────────────────────────────────────┐
│                     Kubernetes Cluster                      │
│                                                             │
│  ┌───────────────────┐        ┌───────────────────┐         │
│  │  Namespace: dev   │        │  Namespace: prod  │         │
│  │                   │        │                   │         │
│  │  Pods, Services,  │        │  Pods, Services,  │         │
│  │  Deployments      │        │  Deployments      │         │
│  └───────────────────┘        └───────────────────┘         │
│                                                             │
│  ┌───────────────────┐        ┌───────────────────┐         │
│  │  Namespace: kube- │        │  Namespace:       │         │
│  │  system           │        │  monitoring       │         │
│  │                   │        │                   │         │
│  │  System           │        │  Prometheus,      │         │
│  │  components       │        │  Grafana          │         │
│  └───────────────────┘        └───────────────────┘         │
└─────────────────────────────────────────────────────────────┘
```
Why Use Namespaces?
| Reason | Benefit |
|---|---|
| Team isolation | Each team gets its own Namespace |
| Environment separation | dev, staging, prod in same cluster |
| Resource quotas | Limit CPU/memory per Namespace |
| Access control | RBAC policies per Namespace |
| Organization | Group related resources |
Default Namespaces
| Namespace | Purpose |
|---|---|
| default | Default location for resources without a Namespace |
| kube-system | Kubernetes system components |
| kube-public | Publicly readable resources |
| kube-node-lease | Node heartbeat information |
Namespace YAML Example
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
    team: backend
```
Using Namespaces
```bash
# Create a Namespace
kubectl create namespace development
kubectl apply -f namespace.yaml

# List Namespaces
kubectl get namespaces
kubectl get ns

# Create a resource in a specific Namespace
kubectl run nginx --image=nginx --namespace=development

# List resources in a Namespace
kubectl get pods --namespace=development
kubectl get all -n development

# Switch the default Namespace (using the kubens tool)
kubens development

# Delete a Namespace (deletes all resources inside)
kubectl delete namespace development
```
Resource Quotas per Namespace
Limit resource consumption within a Namespace:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "20"
    services: "10"
```
Limit Ranges
Set default resource limits for Pods in a Namespace:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: development
spec:
  limits:
  - max:
      cpu: "1"
      memory: "1Gi"
    min:
      cpu: "100m"
      memory: "128Mi"
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "250m"
      memory: "256Mi"
    type: Container
```
Real-World Scenarios
Scenario 1: Web Application Deployment
A company needs to deploy a web application with 5 replicas, rolling updates, and zero downtime.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: production
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: app
        image: myapp:1.0.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
```
Scenario 2: Multi-Team Cluster
A cluster is shared by three teams: frontend, backend, and data. Each team needs isolation.
```bash
# Create Namespaces
kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace data

# Deploy team resources
kubectl apply -f frontend-deployment.yaml -n frontend
kubectl apply -f backend-deployment.yaml -n backend
kubectl apply -f data-job.yaml -n data

# Set resource quotas per team
kubectl apply -f frontend-quota.yaml -n frontend
kubectl apply -f backend-quota.yaml -n backend
```
Scenario 3: Rolling Update with Rollback
A team deploys version 2.0 of their application. It fails. They need to roll back.
```bash
# Deploy version 1.0
kubectl apply -f deployment.yaml

# Update to version 2.0
kubectl set image deployment/webapp app=myapp:2.0

# Check status - failing
kubectl rollout status deployment/webapp

# Roll back immediately
kubectl rollout undo deployment/webapp

# Verify we are back on version 1.0
kubectl rollout status deployment/webapp
```
Summary
| Object | Purpose | When to Use |
|---|---|---|
| Pod | Run containers | Direct Pod management (rare) |
| ReplicaSet | Maintain Pod count | Usually via Deployment |
| Deployment | Application lifecycle | The standard choice for stateless apps |
| Namespace | Resource isolation | Multi-team, multi-environment |
Hierarchy
```
Namespace
└── Deployment
    └── ReplicaSet
        └── Pod
            └── Container
```

Best Practices
Use Deployments, not bare Pods — Deployments provide self-healing and updates
Use Namespaces for isolation — Separate dev, staging, prod
Set resource requests and limits — Prevents resource contention
Use readiness and liveness probes — Ensures traffic only goes to healthy Pods
Label everything — Labels enable selection and organization
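As one concrete labeling convention, Kubernetes documents a set of recommended `app.kubernetes.io/*` labels; a sketch (values here are illustrative):

```yaml
metadata:
  labels:
    app.kubernetes.io/name: webapp
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: backend
    app.kubernetes.io/part-of: online-shop
    app.kubernetes.io/managed-by: kubectl
```

Consistent keys like these make selectors such as `kubectl get pods -l app.kubernetes.io/part-of=online-shop` work across every object in an application.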
Practice Questions
What is the difference between a Pod and a Deployment?
Why would you use a ReplicaSet directly instead of a Deployment?
How does a rolling update work? What happens during the update process?
When would you create a new Namespace?
What happens to Pods when you delete a Deployment?
Learn More
Practice Kubernetes core objects with hands-on exercises in our interactive labs:
https://devops.trainwithsky.com/