
Core Kubernetes Objects: Pods, ReplicaSets, Deployments, and Namespaces

📅 Published: Feb 2026
⏱️ Estimated Reading Time: 18 minutes
🏷️ Tags: Kubernetes, Pods, ReplicaSets, Deployments, Namespaces, Container Orchestration


Introduction: Building Blocks of Kubernetes

Kubernetes objects are the persistent entities that represent your containerized applications. They describe what containers to run, how many replicas to maintain, what network access to provide, and how to manage updates.

Think of Kubernetes objects as blueprints. You write a YAML file describing your desired state, and Kubernetes works to make that state a reality.

This guide covers the four most essential Kubernetes objects: Pods, ReplicaSets, Deployments, and Namespaces. Understanding these objects is fundamental to using Kubernetes effectively.


Part 1: Pods - The Smallest Deployable Unit

What is a Pod?

A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in your cluster. A Pod contains one or more containers that share:

  • Network namespace (same IP address)

  • Storage volumes

  • Lifecycle

Containers within the same Pod can communicate via localhost and share data through volumes.

text
┌─────────────────────────────────┐
│              Pod                │
│  ┌───────────┐ ┌───────────┐   │
│  │ Container │ │ Container │   │
│  │ (Main)    │ │ (Sidecar) │   │
│  └───────────┘ └───────────┘   │
│                                 │
│  Shared:                        │
│  • IP address: 10.244.1.5       │
│  • Storage volumes              │
│  • Network namespace            │
└─────────────────────────────────┘

Why Pods Instead of Direct Containers?

Kubernetes does not run containers directly. It runs Pods. This abstraction allows:

Multi-container collaboration:
Two containers that need to work together can run in the same Pod. They can communicate over localhost and share files.

Sidecar pattern:
A helper container (logging, monitoring, proxy) runs alongside your main application container in the same Pod.

Resource sharing:
Containers in the same Pod share resources efficiently.

Pod Lifecycle Phases

Pending: Pod accepted by the cluster; waiting to be scheduled or for containers to be created
Running: Pod bound to a node; containers created and at least one is running
Succeeded: all containers terminated successfully (typical for batch jobs)
Failed: all containers terminated, and at least one exited with an error
Unknown: Pod state cannot be determined (usually a node communication failure)

Pod YAML Example

yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
    environment: production
spec:
  containers:
  - name: nginx
    image: nginx:1.24
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "500m"

Multi-Container Pod Example

yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app-with-sidecar
spec:
  containers:
  - name: web-app
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx

  - name: log-shipper
    image: fluentd
    volumeMounts:
    - name: logs
      mountPath: /logs
    command: ["fluentd", "-c", "/fluentd/etc/fluent.conf"]

  volumes:
  - name: logs
    emptyDir: {}

Pod Operations

bash
# Create a Pod
kubectl run nginx --image=nginx

# Create from YAML
kubectl apply -f pod.yaml

# List Pods
kubectl get pods
kubectl get pods -o wide
kubectl get pods -l app=nginx

# Describe Pod
kubectl describe pod nginx-pod

# View logs
kubectl logs nginx-pod
kubectl logs nginx-pod -c sidecar-container

# Execute command
kubectl exec -it nginx-pod -- /bin/bash

# Delete Pod
kubectl delete pod nginx-pod

Part 2: ReplicaSets - Maintaining Pod Count

What is a ReplicaSet?

A ReplicaSet ensures that a specified number of Pod replicas are running at all times. If a Pod crashes or is deleted, the ReplicaSet creates a new one.

Think of a ReplicaSet as a supervisor that constantly checks: "Are there enough Pods running?" If not, it creates more. If there are too many, it terminates extras.

text
┌─────────────────────────────────────────────────────────┐
│                     ReplicaSet                           │
│                 desired replicas: 3                      │
│                                                          │
│   ┌─────────┐    ┌─────────┐    ┌─────────┐            │
│   │  Pod 1  │    │  Pod 2  │    │  Pod 3  │            │
│   │ (nginx) │    │ (nginx) │    │ (nginx) │            │
│   └─────────┘    └─────────┘    └─────────┘            │
│                                                          │
│   If Pod 2 dies → ReplicaSet creates new Pod            │
└─────────────────────────────────────────────────────────┘
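The supervisor loop in the diagram can be sketched in a few lines of Python. This is an illustrative model of the reconcile idea, not real controller code; the function and Pod names are invented for the example.

```python
# Illustrative model of one ReplicaSet reconcile pass (not real Kubernetes code).
def reconcile(desired: int, running: list) -> list:
    """Return the Pod list after one reconcile pass."""
    pods = list(running)
    # Too few Pods: create replacements until the count matches.
    while len(pods) < desired:
        pods.append("replacement-pod")
    # Too many Pods: terminate the extras.
    while len(pods) > desired:
        pods.pop()
    return pods

# Pod 2 died; the controller restores the desired count of 3.
print(len(reconcile(3, ["pod-1", "pod-3"])))  # 3
```

The real controller runs this comparison continuously, reacting to every change in observed state.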

Why ReplicaSets?

  • Self-healing: Automatically replaces failed Pods

  • Scalability: Easy to increase or decrease replica count

  • Reliability: Maintains minimum available Pods

  • Load distribution: Multiple Pods share traffic

ReplicaSet vs Pod

Purpose: a Pod runs containers; a ReplicaSet maintains a desired Pod count
Self-healing: a bare Pod is not replaced if it dies; a ReplicaSet creates a replacement
Scaling: manual for bare Pods; a ReplicaSet scales by changing the replica count
Use case: direct Pod management is rarely recommended; ReplicaSets back production workloads

ReplicaSet YAML Example

yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80

How Selectors Work

The selector defines which Pods the ReplicaSet manages. The Pod template's labels must match the selector.

yaml
selector:
  matchLabels:
    app: nginx        # Only Pods with this label
    tier: backend     # AND this label (optional)
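The AND semantics of matchLabels can be modeled in a couple of lines. This is an illustrative sketch, not the actual API machinery:

```python
# Illustrative sketch: matchLabels selection is an AND over key/value pairs.
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    return all(pod_labels.get(key) == value for key, value in selector.items())

selector = {"app": "nginx", "tier": "backend"}
print(selector_matches(selector, {"app": "nginx", "tier": "backend", "env": "dev"}))  # True
print(selector_matches(selector, {"app": "nginx"}))  # False: tier label missing
```

Note that extra labels on a Pod (like env: dev above) do not prevent a match; only the selector's own keys are checked.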

ReplicaSet Operations

bash
# Create ReplicaSet
kubectl apply -f replicaset.yaml

# Get ReplicaSets
kubectl get replicasets
kubectl get rs

# Scale ReplicaSet
kubectl scale replicaset nginx-replicaset --replicas=5

# Edit ReplicaSet
kubectl edit replicaset nginx-replicaset

# Delete ReplicaSet (deletes all managed Pods)
kubectl delete replicaset nginx-replicaset

Important Note

You rarely use ReplicaSets directly in production. Deployments manage ReplicaSets for you, adding features like rolling updates and rollbacks.


Part 3: Deployments - Managing Application Lifecycle

What is a Deployment?

A Deployment provides declarative updates for Pods and ReplicaSets. It is the standard way to manage stateless applications in Kubernetes.

Think of a Deployment as a higher-level controller that manages ReplicaSets and provides:

  • Rolling updates: Update Pods gradually with zero downtime

  • Rollbacks: Revert to previous versions

  • Scaling: Change number of replicas

  • Pause/resume: Control update process

text
┌─────────────────────────────────────────────────────────────┐
│                       Deployment                             │
│                                                              │
│  ┌─────────────────────────────────────────────────────┐    │
│  │              ReplicaSet (v1)                        │    │
│  │  ┌─────┐ ┌─────┐ ┌─────┐                           │    │
│  │  │ Pod │ │ Pod │ │ Pod │                           │    │
│  │  └─────┘ └─────┘ └─────┘                           │    │
│  └─────────────────────────────────────────────────────┘    │
│                          │                                   │
│                          ▼ (rolling update)                  │
│  ┌─────────────────────────────────────────────────────┐    │
│  │              ReplicaSet (v2)                        │    │
│  │  ┌─────┐ ┌─────┐ ┌─────┐                           │    │
│  │  │ Pod │ │ Pod │ │ Pod │                           │    │
│  │  └─────┘ └─────┘ └─────┘                           │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────┘

Deployment YAML Example

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"

Deployment Update Strategies

Rolling Update (default)
Gradually replaces old Pods with new ones, keeping the application available throughout the update (zero downtime, provided new Pods pass their readiness checks).

yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # Extra Pods allowed during update
      maxUnavailable: 0  # Pods unavailable during update

Recreate
Terminates all old Pods, then creates new ones. Causes downtime.

yaml
spec:
  strategy:
    type: Recreate

Rolling Update Process

text
Step 1: New ReplicaSet created with updated image
Step 2: New Pod starts, becomes ready
Step 3: Old Pod terminated
Step 4: Repeat until all Pods updated

Desired: 3 replicas
During update: 4 replicas total (3 old + 1 new)
After update: 3 replicas (all new)
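The maxSurge and maxUnavailable settings bound how many Pods can exist at each step of this process. A small sketch of the arithmetic, using absolute values (Kubernetes also accepts percentages):

```python
# Illustrative arithmetic: Pod-count bounds during a rolling update.
def rolling_update_bounds(replicas: int, max_surge: int, max_unavailable: int):
    upper = replicas + max_surge          # most Pods allowed at any moment
    lower = replicas - max_unavailable    # fewest available Pods allowed
    return lower, upper

# The process above: 3 replicas, maxSurge=1, maxUnavailable=0.
print(rolling_update_bounds(3, 1, 0))  # (3, 4)
```

With maxUnavailable: 0 the lower bound equals the replica count, which is why each old Pod is only terminated after its replacement becomes ready.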

Deployment Operations

bash
# Create Deployment
kubectl create deployment nginx --image=nginx
kubectl apply -f deployment.yaml

# Get Deployments
kubectl get deployments
kubectl get deploy

# Scale Deployment
kubectl scale deployment nginx-deployment --replicas=5

# Update image (rolling update)
kubectl set image deployment/nginx-deployment nginx=nginx:1.25

# Check rollout status
kubectl rollout status deployment/nginx-deployment

# Rollback to previous version
kubectl rollout undo deployment/nginx-deployment

# Rollback to specific revision
kubectl rollout undo deployment/nginx-deployment --to-revision=2

# View rollout history
kubectl rollout history deployment/nginx-deployment

# Pause update
kubectl rollout pause deployment/nginx-deployment

# Resume update
kubectl rollout resume deployment/nginx-deployment

Deployment Use Cases

Application deployment
Deploy web servers, API services, and backend applications.

Microservices
Each microservice gets its own Deployment.

Environment separation
Different Deployments for dev, staging, and production (often with different Namespaces).

Canary deployments
Run two Deployments with different versions, route partial traffic to new version.
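A minimal sketch of that canary pattern, assuming a Service that selects only on app: webapp (so it load-balances across both Deployments); the names, images, and the track label are hypothetical:

```yaml
# Stable version: 4 replicas receive roughly 4/5 of the traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-stable
spec:
  replicas: 4
  selector:
    matchLabels:
      app: webapp
      track: stable
  template:
    metadata:
      labels:
        app: webapp        # matched by the shared Service
        track: stable
    spec:
      containers:
      - name: app
        image: myapp:1.0.0
---
# Canary version: 1 replica receives roughly 1/5 of the traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
      track: canary
  template:
    metadata:
      labels:
        app: webapp        # also matched by the shared Service
        track: canary
    spec:
      containers:
      - name: app
        image: myapp:2.0.0
```

Adjusting the replica ratio shifts the traffic split; deleting the canary Deployment rolls the experiment back.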


Part 4: Namespaces - Virtual Clusters

What is a Namespace?

Namespaces provide a mechanism for isolating groups of resources within a single cluster. They create virtual clusters within the physical cluster.

Think of Namespaces as separate workspaces for different teams or projects. Resource names only need to be unique within a Namespace, and kubectl commands and RBAC rules operate on one Namespace at a time. Note that Namespaces do not isolate network traffic by default: a Pod can still reach Services in other Namespaces unless NetworkPolicies restrict it.
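One concrete consequence of this scoping: a resource name must be unique within its Namespace (per kind), but the same name can repeat across Namespaces. A toy model, illustrative only:

```python
# Toy model of Namespace-scoped naming (illustrative, not real API machinery).
cluster = {}

def create(namespace: str, kind: str, name: str) -> None:
    key = (namespace, kind, name)
    if key in cluster:
        raise ValueError(f"{kind} {name!r} already exists in Namespace {namespace!r}")
    cluster[key] = {}

create("dev", "Pod", "nginx")
create("prod", "Pod", "nginx")   # fine: a different Namespace
# create("dev", "Pod", "nginx") would raise: duplicate within one Namespace
```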

text
┌─────────────────────────────────────────────────────────────┐
│                     Kubernetes Cluster                       │
│                                                              │
│  ┌───────────────────┐  ┌───────────────────┐              │
│  │   Namespace: dev  │  │ Namespace: prod   │              │
│  │                   │  │                   │              │
│  │  Pods, Services,  │  │  Pods, Services,  │              │
│  │  Deployments      │  │  Deployments      │              │
│  └───────────────────┘  └───────────────────┘              │
│                                                              │
│  ┌───────────────────┐  ┌───────────────────┐              │
│  │ Namespace: kube-  │  │ Namespace:        │              │
│  │ system            │  │ monitoring        │              │
│  │                   │  │                   │              │
│  │  System           │  │  Prometheus,      │              │
│  │  components       │  │  Grafana          │              │
│  └───────────────────┘  └───────────────────┘              │
└─────────────────────────────────────────────────────────────┘

Why Use Namespaces?

Team isolation: each team gets its own Namespace
Environment separation: dev, staging, and prod can share one cluster
Resource quotas: limit CPU and memory per Namespace
Access control: RBAC policies can be scoped per Namespace
Organization: group related resources together

Default Namespaces

default: where resources land when no Namespace is specified
kube-system: Kubernetes system components
kube-public: cluster-wide, publicly readable resources
kube-node-lease: node heartbeat (Lease) objects

Namespace YAML Example

yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
    team: backend

Using Namespaces

bash
# Create Namespace
kubectl create namespace development
kubectl apply -f namespace.yaml

# List Namespaces
kubectl get namespaces
kubectl get ns

# Create resource in specific Namespace
kubectl run nginx --image=nginx --namespace=development

# List resources in Namespace
kubectl get pods --namespace=development
kubectl get all -n development

# Switch the default Namespace for the current context
kubectl config set-context --current --namespace=development

# Or with the third-party kubens tool
kubens development

# Delete Namespace (deletes all resources inside)
kubectl delete namespace development

Resource Quotas per Namespace

Limit resource consumption within a Namespace:

yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "20"
    services: "10"

Limit Ranges

Set default resource limits for Pods in a Namespace:

yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: development
spec:
  limits:
  - max:
      cpu: "1"
      memory: "1Gi"
    min:
      cpu: "100m"
      memory: "128Mi"
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "250m"
      memory: "256Mi"
    type: Container

Real-World Scenarios

Scenario 1: Web Application Deployment

A company needs to deploy a web application with 5 replicas, rolling updates, and zero downtime.

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: production
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: app
        image: myapp:1.0.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"

Scenario 2: Multi-Team Cluster

A cluster is shared by three teams: frontend, backend, and data. Each team needs isolation.

bash
# Create Namespaces
kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace data

# Deploy team resources
kubectl apply -f frontend-deployment.yaml -n frontend
kubectl apply -f backend-deployment.yaml -n backend
kubectl apply -f data-job.yaml -n data

# Set resource quotas per team
kubectl apply -f frontend-quota.yaml -n frontend
kubectl apply -f backend-quota.yaml -n backend

Scenario 3: Rolling Update with Rollback

A team deploys version 2.0 of their application. It fails. They need to roll back.

bash
# Deploy version 1.0
kubectl apply -f deployment.yaml

# Update to version 2.0
kubectl set image deployment/webapp app=myapp:2.0

# Check status - failing
kubectl rollout status deployment/webapp

# Rollback immediately
kubectl rollout undo deployment/webapp

# Verify back to version 1.0
kubectl rollout status deployment/webapp

Summary

Pod: runs containers; direct Pod management is rare
ReplicaSet: maintains a Pod count; usually managed indirectly via a Deployment
Deployment: manages the application lifecycle; the default choice for stateless apps
Namespace: isolates resources; multi-team and multi-environment clusters

Hierarchy

text
Namespace
  └── Deployment
        └── ReplicaSet
              └── Pod
                    └── Container

Best Practices

  • Use Deployments, not bare Pods — Deployments provide self-healing and updates

  • Use Namespaces for isolation — Separate dev, staging, prod

  • Set resource requests and limits — Prevents resource contention

  • Use readiness and liveness probes — Ensures traffic only goes to healthy Pods

  • Label everything — Labels enable selection and organization


Practice Questions

  1. What is the difference between a Pod and a Deployment?

  2. Why would you use a ReplicaSet directly instead of a Deployment?

  3. How does a rolling update work? What happens during the update process?

  4. When would you create a new Namespace?

  5. What happens to Pods when you delete a Deployment?


Learn More

Practice Kubernetes core objects with hands-on exercises in our interactive labs:
https://devops.trainwithsky.com/
