Friday, January 16, 2026

Linux for Containers & Cloud - Advanced DevOps Guide

Linux for Containers & Cloud

Advanced Guide to Containerization, Orchestration, and Cloud Infrastructure

Topics: Namespaces & Cgroups | Docker & Containers | Container Networking | Systemd in Containers | Cloud CLI Tools | AWS CLI | Google Cloud | Azure CLI

Why This Matters: Understanding Linux container fundamentals is essential for modern DevOps. Containers revolutionized application deployment by providing lightweight, isolated environments. Mastering namespaces, cgroups, and container networking will help you build scalable, secure, and efficient containerized applications.

1. Linux Namespaces & Cgroups Basics

Namespaces provide isolation for system resources, while cgroups control resource allocation. Together, they form the foundation of containerization.

Linux Container Architecture

Host Kernel (single Linux kernel shared by all containers)
  Container 1
    PID namespace:     isolated
    Network namespace: isolated
    Mount namespace:   isolated
    Cgroups:           CPU: 0.5, Memory: 512MB
  Container 2
    PID namespace:     isolated
    Network namespace: isolated
    Mount namespace:   isolated
    Cgroups:           CPU: 1.0, Memory: 1GB
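The cgroup limits in the diagram map directly onto container runtime flags; a small sketch with Docker (image names are placeholders):

# Container 1: half a CPU core and 512 MB of memory
$ docker run -d --name container1 --cpus 0.5 --memory 512m nginx:alpine

# Container 2: one CPU core and 1 GB of memory
$ docker run -d --name container2 --cpus 1.0 --memory 1g nginx:alpine

# Both share the host kernel but get their own namespaces and cgroup limits
$ docker stats --no-stream container1 container2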

Linux Namespaces

Namespace Types

# List available namespaces:
$ ls -la /proc/$$/ns/            # Current process namespaces

# Namespace types:
# 1. PID (Process ID) - Isolated process tree
# 2. NET (Network) - Network interfaces, routing
# 3. MNT (Mount) - Filesystem mount points
# 4. IPC (Inter-Process Communication) - SysV IPC, POSIX queues
# 5. UTS (Unix Timesharing System) - Hostname, domain name
# 6. USER (User) - User and group IDs
# 7. CGROUP (Control Groups) - Cgroup root directory
# 8. TIME - System clock offsets

# Create new namespaces:
$ unshare --pid --fork --mount-proc bash
$ unshare --net bash

Working with Namespaces

# Create PID namespace:
$ sudo unshare --pid --fork --mount-proc /bin/bash
$ ps aux                          # Only shows processes in new namespace

# Create network namespace:
$ sudo ip netns add mynetns
$ sudo ip netns exec mynetns ip link show
$ sudo ip netns exec mynetns bash

# Create user namespace:
$ unshare --user --map-root-user bash
$ id                              # Shows root in namespace, regular user outside

# List namespaces on system:
$ sudo lsns                       # List all namespaces
$ sudo lsns -p $$                 # Namespaces of current process

# Join existing namespace:
$ nsenter --target [PID] --net            # Enter network namespace
$ nsenter --target [PID] --pid --mount    # Enter PID & mount namespaces

Control Groups (cgroups)

Cgroups v2 Basics

# Check cgroup version:
$ stat -fc %T /sys/fs/cgroup/     # cgroup2fs = v2, tmpfs = v1

# Systemd with cgroups v2:
$ systemd-cgls                    # Show cgroup hierarchy
$ systemd-cgtop                   # Show cgroup resource usage

# Create custom cgroup:
$ sudo mkdir /sys/fs/cgroup/myapp
$ echo $$ | sudo tee /sys/fs/cgroup/myapp/cgroup.procs

# Cgroup controllers in v2:
# - cpu:    CPU time distribution
# - memory: Memory usage limits
# - io:     I/O bandwidth
# - pids:   Process number limits
# - rdma:   RDMA resources

Resource Control with Cgroups

# The examples below use the legacy cgroups v1 interface (cgcreate and per-controller hierarchies):

# CPU limiting:
$ sudo cgcreate -g cpu:/limited
$ echo 100000 | sudo tee /sys/fs/cgroup/cpu/limited/cpu.cfs_quota_us
$ echo 100000 | sudo tee /sys/fs/cgroup/cpu/limited/cpu.cfs_period_us
# Limits to 1 CPU core equivalent

# Memory limiting:
$ sudo cgcreate -g memory:/limited
$ echo 100M | sudo tee /sys/fs/cgroup/memory/limited/memory.limit_in_bytes
$ echo 150M | sudo tee /sys/fs/cgroup/memory/limited/memory.memsw.limit_in_bytes
# memsw = memory + swap combined (100M RAM + 50M swap), must be >= memory limit

# Add process to cgroup:
$ echo $$ | sudo tee /sys/fs/cgroup/memory/limited/cgroup.procs

# I/O limiting:
$ sudo cgcreate -g blkio:/limited
$ echo "8:0 1048576" | sudo tee /sys/fs/cgroup/blkio/limited/blkio.throttle.read_bps_device
# Limit read to 1MB/s on device 8:0

# Process limiting:
$ sudo cgcreate -g pids:/limited
$ echo 100 | sudo tee /sys/fs/cgroup/pids/limited/pids.max
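On hosts that use the unified cgroups v2 hierarchy (the default on most current distributions), the same limits live in different files; a sketch, assuming the needed controllers are enabled in the parent's cgroup.subtree_control and the disk is 8:0:

# cgroups v2 equivalents of the v1 limits above
$ sudo mkdir /sys/fs/cgroup/limited

# CPU: "quota period" in microseconds - 100000/100000 = 1 core equivalent
$ echo "100000 100000" | sudo tee /sys/fs/cgroup/limited/cpu.max

# Memory and swap are separate files in v2
$ echo 100M | sudo tee /sys/fs/cgroup/limited/memory.max
$ echo 50M  | sudo tee /sys/fs/cgroup/limited/memory.swap.max

# I/O: limit reads on device 8:0 to 1 MB/s
$ echo "8:0 rbps=1048576" | sudo tee /sys/fs/cgroup/limited/io.max

# Process count
$ echo 100 | sudo tee /sys/fs/cgroup/limited/pids.max

# Move the current shell into the cgroup
$ echo $$ | sudo tee /sys/fs/cgroup/limited/cgroup.procs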

Creating a Simple Container from Scratch

#!/bin/bash
# simple_container.sh - Manual container creation using namespaces & cgroups

CONTAINER_ROOT="/tmp/container_$(date +%s)"
CONTAINER_ID="cnt_$(hostname)_$(date +%s)"

setup_container() {
    # Create container root filesystem
    mkdir -p "$CONTAINER_ROOT"

    # Extract minimal rootfs (requires debootstrap)
    sudo debootstrap --arch=amd64 focal "$CONTAINER_ROOT" http://archive.ubuntu.com/ubuntu/

    # Create container entry script
    cat > "$CONTAINER_ROOT/init.sh" << 'EOF'
#!/bin/bash
# Container initialization script

# Mount proc filesystem
mount -t proc proc /proc

# Set hostname
hostname container-$(hostname)

# Set up networking (simple loopback)
ip link set lo up

# Start shell
exec /bin/bash
EOF
    chmod +x "$CONTAINER_ROOT/init.sh"
}

run_container() {
    # Create namespaces and run container
    sudo unshare \
        --pid \
        --fork \
        --mount \
        --uts \
        --ipc \
        --net \
        --cgroup \
        --user \
        --map-root-user \
        bash -c "
            # Mount container root
            mount --bind '$CONTAINER_ROOT' '$CONTAINER_ROOT'
            mount --make-private '$CONTAINER_ROOT'

            # Change root (pivot_root needs a directory to receive the old root)
            cd '$CONTAINER_ROOT'
            mkdir -p old_root
            pivot_root . old_root

            # Unmount old root
            umount -l /old_root

            # Run container initialization
            exec /init.sh
        "
}

cleanup() {
    sudo rm -rf "$CONTAINER_ROOT"
}

# Main execution
case "$1" in
    setup) setup_container ;;
    run)   run_container ;;
    clean) cleanup ;;
    *) echo "Usage: $0 {setup|run|clean}"; exit 1 ;;
esac

2. Docker on Linux - Installation & Core Concepts

1. Image - Immutable template
2. Container - Running instance
3. Registry - Image storage
4. Dockerfile - Build instructions
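A minimal pass through all four concepts in order, using placeholder image and registry names:

# Dockerfile (build instructions) -> image -> container -> registry
$ cat > Dockerfile << 'EOF'
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
EOF

# Build an immutable image from the Dockerfile
$ docker build -t myapp:1.0 .

# Start a running instance (container) from the image
$ docker run -d --name myapp -p 8080:80 myapp:1.0

# Tag and push the image to a registry (registry.example.com is a placeholder)
$ docker tag myapp:1.0 registry.example.com/myapp:1.0
$ docker push registry.example.com/myapp:1.0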

Docker Installation & Configuration

Installation Methods

# Ubuntu/Debian:
$ sudo apt update
$ sudo apt install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
    sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
$ echo \
    "deb [arch=$(dpkg --print-architecture) \
    signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
    https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable" | \
    sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt update
$ sudo apt install -y docker-ce docker-ce-cli containerd.io

# RHEL/CentOS/Rocky Linux:
$ sudo yum install -y yum-utils
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install -y docker-ce docker-ce-cli containerd.io

# Using convenience script (for testing):
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh

# Add user to docker group:
$ sudo usermod -aG docker $USER
$ newgrp docker    # Apply group changes

Docker Daemon Configuration

# Docker daemon configuration file:
$ sudo cat /etc/docker/daemon.json

# Sample configuration:
{
  "data-root": "/var/lib/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "insecure-registries": [],
  "registry-mirrors": [
    "https://mirror.gcr.io"
  ],
  "live-restore": true,
  "ipv6": false,
  "iptables": true,
  "userland-proxy": false,
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# Reload Docker daemon:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker

# Check Docker info:
$ docker info
$ docker version

Docker Core Operations

Image Management

# Search for images:
$ docker search nginx
$ docker search --filter "is-official=true" ubuntu

# Pull images:
$ docker pull ubuntu:20.04
$ docker pull nginx:alpine
$ docker pull python:3.9-slim

# List images:
$ docker images
$ docker image ls
$ docker image ls --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"

# Image inspection:
$ docker image inspect ubuntu:20.04
$ docker image history nginx:alpine

# Clean up images:
$ docker image prune       # Remove dangling images
$ docker image prune -a    # Remove all unused images
$ docker system df         # Show disk usage

# Save/load images:
$ docker save ubuntu:20.04 -o ubuntu.tar
$ docker load -i ubuntu.tar

# Tag images:
$ docker tag ubuntu:20.04 myregistry/ubuntu:latest

Container Lifecycle

# Run containers:
$ docker run -it --name mycontainer ubuntu:20.04 bash
$ docker run -d --name webserver -p 80:80 nginx:alpine
$ docker run -d --name db \
    -e MYSQL_ROOT_PASSWORD=secret \
    -v mysql_data:/var/lib/mysql \
    mysql:8.0

# Container operations:
$ docker ps          # Running containers
$ docker ps -a       # All containers
$ docker start mycontainer
$ docker stop mycontainer
$ docker restart mycontainer
$ docker pause mycontainer
$ docker unpause mycontainer

# Execute commands:
$ docker exec -it mycontainer bash
$ docker exec mycontainer ls -la
$ docker exec mycontainer cat /etc/os-release

# Logs and monitoring:
$ docker logs mycontainer
$ docker logs -f mycontainer          # Follow logs
$ docker logs --tail 100 mycontainer
$ docker stats                        # Live resource usage
$ docker top mycontainer              # Container processes

# Cleanup:
$ docker rm mycontainer
$ docker rm -f mycontainer            # Force remove a running container
$ docker container prune              # Remove stopped containers

Dockerfile Best Practices

Production-ready Dockerfile

# Multi-stage build for a Python application

# Stage 1: Build environment
FROM python:3.9-slim AS builder

# Install build dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    libc-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy requirements first for better caching
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Copy application code
COPY src/ .

# Stage 2: Production image
FROM python:3.9-slim

# Runtime dependencies only
RUN apt-get update && apt-get install -y \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Create non-root user
RUN groupadd -r appuser && useradd -r -g appuser -s /bin/false appuser

WORKDIR /app

# Copy from builder stage
COPY --from=builder /root/.local /root/.local
COPY --from=builder /app .

# Set PATH for Python packages
ENV PATH=/root/.local/bin:$PATH

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PORT=8080

# Change ownership to non-root user
RUN chown -R appuser:appuser /app

# Switch to non-root user
USER appuser

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:$PORT/health || exit 1

# Expose port
EXPOSE 8080

# Run application
CMD ["python", "app.py"]

Docker Build & Optimization

# Build image with tags:
$ docker build -t myapp:latest .
$ docker build -t myapp:v1.0 -t myapp:latest .
$ docker build -f Dockerfile.prod -t myapp:prod .

# Build with build arguments:
$ docker build \
    --build-arg VERSION=1.0 \
    --build-arg NODE_ENV=production \
    -t myapp:latest .

# Build using cache from another image:
$ docker build --cache-from myapp:latest -t myapp:new .

# Multi-platform builds:
$ docker buildx create --use
$ docker buildx build \
    --platform linux/amd64,linux/arm64 \
    -t myapp:multiarch \
    --push .

# Security scanning:
$ docker scan myapp:latest
$ docker build --secret id=mysecret,src=secret.txt .

# Build optimization tips:
# 1. Use .dockerignore to exclude unnecessary files
# 2. Order commands from least to most frequently changing
# 3. Use multi-stage builds
# 4. Combine RUN commands
# 5. Use specific version tags, not 'latest'
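Tip 1 is quick to apply; a minimal .dockerignore sketch for a typical Python project (the exact entries depend on your repository layout):

# Create a .dockerignore next to the Dockerfile so the build context stays small
$ cat > .dockerignore << 'EOF'
.git
.gitignore
__pycache__/
*.pyc
.venv/
tests/
docs/
*.md
.env
EOF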

Docker Storage & Volumes

# Volume management:
$ docker volume create myvolume
$ docker volume ls
$ docker volume inspect myvolume
$ docker volume rm myvolume
$ docker volume prune

# Using volumes:
$ docker run -d \
    -v myvolume:/data \
    --name app \
    nginx:alpine
$ docker run -d \
    -v /host/path:/container/path \
    --name app2 \
    nginx:alpine

# Bind mounts with options:
$ docker run -d \
    --mount type=bind,source=/host/path,target=/container/path,readonly \
    --name app3 \
    nginx:alpine

# tmpfs mounts:
$ docker run -d \
    --tmpfs /tmp:size=100M,mode=1777 \
    --name app4 \
    nginx:alpine

# Named volumes with drivers:
$ docker volume create \
    --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.100,rw \
    --opt device=:/path/to/share \
    nfs-volume

# Backup volume data:
$ docker run --rm \
    -v myvolume:/source \
    -v $(pwd):/backup \
    alpine \
    tar czf /backup/backup.tar.gz -C /source .

3. Linux Networking for Containers

Container Networking Models

[Container A] ← bridge → [Container B]
[Docker Bridge (docker0)]
[Host Network Stack] ←→ [External Network]
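On a stock Docker host this model can be inspected directly; a short sketch (interface names and NAT rules can differ by distribution and Docker version):

# The default bridge network that containers attach to
$ docker network inspect bridge --format '{{json .IPAM.Config}}'

# The docker0 bridge on the host side of each container's veth pair
$ ip addr show docker0
$ bridge link    # lists veth interfaces attached to docker0

# NAT rule that lets bridged containers reach the external network
$ sudo iptables -t nat -L POSTROUTING -n | grep -i masquerade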

Docker Network Drivers

Network Type | Description                                       | Use Case                   | Isolation
bridge       | Default network driver, private internal network | Single-host containers     | Container-level
host         | Removes network isolation, uses the host network | Performance-critical apps  | None
overlay      | Connects multiple Docker daemons                  | Swarm clusters             | Swarm-level
macvlan      | Assigns MAC addresses to containers               | Legacy applications        | MAC-level
none         | Disables all networking                           | Security-sensitive apps    | Complete
ipvlan       | Similar to macvlan, without unique MAC addresses  | Network-intensive apps     | IP-level
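The driver is selected when a network is created, or per container with --network; a small sketch contrasting the three single-host options from the table:

# Default bridge: private network, ports must be published to be reachable
$ docker run -d --name web-bridge -p 8080:80 nginx:alpine

# Host driver: the container shares the host's network stack, no port mapping needed
$ docker run -d --name web-host --network host nginx:alpine

# None driver: no network interfaces except loopback
$ docker run -d --name web-none --network none nginx:alpine
$ docker exec web-none ip addr show    # only "lo" appears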

Network Management

Docker Network Operations

# List networks:
$ docker network ls
$ docker network inspect bridge

# Create custom networks:
$ docker network create mynetwork
$ docker network create \
    --driver bridge \
    --subnet 172.20.0.0/16 \
    --gateway 172.20.0.1 \
    my-custom-network

# Connect containers to networks:
$ docker run -d --name web --network mynetwork nginx:alpine
$ docker network connect mynetwork existing-container
$ docker network disconnect mynetwork container

# Network aliases (DNS):
$ docker run -d \
    --name database \
    --network mynetwork \
    --network-alias db \
    --network-alias mysql \
    mysql:8.0
# From another container in the same network,
# this container is reachable as: db, mysql, database

# Remove networks:
$ docker network rm mynetwork
$ docker network prune    # Remove unused networks

# Port publishing:
$ docker run -d -p 8080:80 nginx:alpine
$ docker run -d -p 80:80 -p 443:443 nginx:alpine
$ docker run -d -p 127.0.0.1:8080:80 nginx:alpine    # Localhost only

Advanced Networking Features

# Custom DNS configuration:
$ docker run -d \
    --dns 8.8.8.8 \
    --dns-search example.com \
    --dns-opt timeout:2 \
    nginx:alpine

# Host networking:
$ docker run -d --network host nginx:alpine
# Container uses the host's network stack directly

# Macvlan network:
$ docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    --ip-range=192.168.1.192/27 \
    -o parent=eth0 \
    macvlan_net

# IPvlan network:
$ docker network create -d ipvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o ipvlan_mode=l2 \
    -o parent=eth0 \
    ipvlan_net

# Overlay network (Swarm):
$ docker network create -d overlay \
    --attachable \
    my-overlay-net

# Network debugging:
$ docker exec container cat /etc/hosts
$ docker exec container cat /etc/resolv.conf
$ docker exec container ip addr show
$ docker exec container ping google.com

Linux Network Namespace Management

Manual Container Networking Setup

#!/bin/bash
# manual_container_network.sh - Create a container network manually

create_container_network() {
    # Create network namespace
    sudo ip netns add cont1

    # Create veth pair
    sudo ip link add veth0 type veth peer name veth1

    # Move veth1 to container namespace
    sudo ip link set veth1 netns cont1

    # Configure host side
    sudo ip addr add 10.0.0.1/24 dev veth0
    sudo ip link set veth0 up

    # Configure container side
    sudo ip netns exec cont1 ip addr add 10.0.0.2/24 dev veth1
    sudo ip netns exec cont1 ip link set veth1 up
    sudo ip netns exec cont1 ip link set lo up

    # Set up routing
    sudo ip netns exec cont1 ip route add default via 10.0.0.1

    # Enable NAT on host
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE
    sudo iptables -A FORWARD -i veth0 -j ACCEPT
    sudo iptables -A FORWARD -o veth0 -j ACCEPT
}

test_network() {
    # Test connectivity from container
    echo "Testing container network..."
    sudo ip netns exec cont1 ping -c 3 10.0.0.1
    sudo ip netns exec cont1 ping -c 3 8.8.8.8
}

cleanup() {
    sudo ip netns del cont1
    sudo ip link del veth0 2>/dev/null
}

case "$1" in
    create) create_container_network ;;
    test)   test_network ;;
    clean)  cleanup ;;
    *) echo "Usage: $0 {create|test|clean}"; exit 1 ;;
esac

4. Systemd inside Containers

Note: Running systemd inside containers requires special considerations. The container must run with specific privileges and mount points to function correctly.

Systemd-Enabled Container Images

Running Systemd in Distribution Base Images

# Note: a systemd container must start systemd as PID 1. The stock Ubuntu/Debian
# base images do not ship systemd, so install it first (or use a prebuilt systemd
# image) and pass /sbin/init (or /lib/systemd/systemd) as the container command.

# Ubuntu with systemd:
$ docker run -d \
    --name ubuntu-systemd \
    --tmpfs /tmp \
    --tmpfs /run \
    --tmpfs /run/lock \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    ubuntu:jammy /sbin/init

# CentOS/Rocky Linux with systemd:
$ docker run -d \
    --name rocky-systemd \
    --privileged \
    --tmpfs /tmp \
    --tmpfs /run \
    --tmpfs /run/lock \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    rockylinux:9 /sbin/init

# Debian with systemd:
$ docker run -d \
    --name debian-systemd \
    --tmpfs /tmp \
    --tmpfs /run \
    --tmpfs /run/lock \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    debian:bullseye /sbin/init

# Fedora with systemd:
$ docker run -d \
    --name fedora-systemd \
    --privileged \
    --tmpfs /tmp \
    --tmpfs /run \
    --tmpfs /run/lock \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    fedora:38 /sbin/init

Custom Systemd Container Image

# Dockerfile for a systemd-enabled container
FROM ubuntu:22.04

# Install systemd and basic utilities
RUN apt-get update && apt-get install -y \
    systemd \
    systemd-sysv \
    dbus \
    sudo \
    curl \
    wget \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Create necessary directories
RUN mkdir -p /run/systemd/system

# Remove units that are not needed inside containers (getty, udev, etc.)
RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
    /etc/systemd/system/*.wants/* \
    /lib/systemd/system/local-fs.target.wants/* \
    /lib/systemd/system/sockets.target.wants/*udev* \
    /lib/systemd/system/sockets.target.wants/*initctl* \
    /lib/systemd/system/sysinit.target.wants/systemd-tmpfiles-setup* \
    /lib/systemd/system/systemd-update-utmp*

# Enable container-friendly services
RUN systemctl enable systemd-journald.service

# Create init script
COPY init.sh /init.sh
RUN chmod +x /init.sh

# Set init as entrypoint
ENTRYPOINT ["/init.sh"]

# init.sh content:
#!/bin/bash
# Mount necessary filesystems
mount -t tmpfs tmpfs /tmp
mount -t tmpfs tmpfs /run
mkdir -p /run/lock

# Start systemd
exec /lib/systemd/systemd --system --unit=multi-user.target

Running Systemd in Containers

Container Configuration

# Basic systemd container with podman (recommended):
$ podman run -d \
    --name systemd-container \
    --systemd=always \
    --tmpfs /tmp \
    --tmpfs /run \
    --tmpfs /run/lock \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    ubuntu:22.04

# Docker with systemd (requires privileges):
$ docker run -d \
    --name systemd-app \
    --privileged \
    --tmpfs /tmp \
    --tmpfs /run \
    --tmpfs /run/lock \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    --stop-timeout 30 \
    systemd-image:latest

# Minimal privileges approach:
$ docker run -d \
    --name minimal-systemd \
    --cap-add SYS_ADMIN \
    --tmpfs /tmp \
    --tmpfs /run \
    --tmpfs /run/lock \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    ubuntu:22.04

# With custom systemd unit:
$ docker run -d \
    --name custom-service \
    --privileged \
    --tmpfs /tmp \
    --tmpfs /run \
    --tmpfs /run/lock \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    --volume ./myapp.service:/etc/systemd/system/myapp.service \
    ubuntu:22.04

# Environment variables for systemd:
$ docker run -d \
    --name env-systemd \
    --privileged \
    --tmpfs /tmp \
    --tmpfs /run \
    --tmpfs /run/lock \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    -e container=docker \
    ubuntu:22.04

Systemd Operations Inside Containers

# Enter container and check systemd:
$ docker exec -it systemd-container /bin/bash

# Inside the container:
$ systemctl status            # Check systemd status
$ systemctl list-units        # List all units
$ journalctl -f               # Follow journal logs

# Manage services:
$ systemctl start nginx
$ systemctl stop nginx
$ systemctl restart nginx
$ systemctl enable nginx      # Enable on boot
$ systemctl disable nginx     # Disable on boot

# Check service logs:
$ journalctl -u nginx.service
$ journalctl -u nginx.service --since "1 hour ago"
$ journalctl -u nginx.service -f    # Follow logs

# Systemd timers (cron replacement):
$ systemctl list-timers
$ systemctl status backup.timer

# Systemd analyze:
$ systemd-analyze time        # Boot time analysis
$ systemd-analyze blame       # Service startup times

# From the host, manage the container's systemd:
$ docker exec systemd-container systemctl status nginx
$ docker exec systemd-container journalctl -u nginx

Systemd Unit Files for Containers

Production Systemd Service Unit

# /etc/systemd/system/myapp.service
[Unit]
Description=My Application Service
Documentation=https://docs.example.com
After=network.target docker.service
Requires=docker.service
Wants=network-online.target
After=network-online.target

[Service]
Type=exec
User=appuser
Group=appuser
WorkingDirectory=/opt/myapp
EnvironmentFile=/etc/myapp/env.conf

# Security hardening
NoNewPrivileges=true
PrivateTmp=true
PrivateDevices=true
ProtectSystem=strict
ProtectHome=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
RestrictNamespaces=true
RestrictRealtime=true
RestrictSUIDSGID=true
MemoryDenyWriteExecute=true
LockPersonality=true
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM
CapabilityBoundingSet=CAP_NET_BIND_SERVICE

# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
LimitCORE=infinity
CPUQuota=200%
MemoryLimit=1G

# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=myapp

# Container-specific
Delegate=yes
NotifyAccess=all
ExecStartPre=/usr/bin/docker pull myapp:latest
ExecStart=/usr/bin/docker run \
    --name myapp \
    --rm \
    --network host \
    --volume /data:/data:ro \
    --env-file /etc/myapp/env.conf \
    myapp:latest
ExecStop=/usr/bin/docker stop myapp
ExecStopPost=/usr/bin/docker rm myapp
TimeoutStopSec=30

[Install]
WantedBy=multi-user.target

5. Cloud CLI Tools Mastery

AWS CLI

# Installation: $ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" $ unzip awscliv2.zip $ sudo ./aws/install # Configuration: $ aws configure $ aws configure --profile prod # S3 operations: $ aws s3 ls $ aws s3 cp file.txt s3://mybucket/

Google Cloud CLI

# Installation:
$ curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-x86_64.tar.gz
$ tar -xf google-cloud-cli-linux-x86_64.tar.gz
$ ./google-cloud-sdk/install.sh

# Configuration:
$ gcloud init
$ gcloud config configurations create prod

# Auth:
$ gcloud auth login
$ gcloud auth application-default login

Azure CLI

# Installation:
$ curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Alternative:
$ pip install azure-cli

# Configuration:
$ az login
$ az account set --subscription "My Subscription"

# List resources:
$ az vm list
$ az storage account list

AWS CLI Deep Dive

EC2 & Compute Services

# EC2 instances:
$ aws ec2 describe-instances
$ aws ec2 run-instances \
    --image-id ami-0c55b159cbfafe1f0 \
    --count 1 \
    --instance-type t3.micro \
    --key-name my-key-pair
$ aws ec2 start-instances --instance-ids i-1234567890abcdef0
$ aws ec2 stop-instances --instance-ids i-1234567890abcdef0
$ aws ec2 terminate-instances --instance-ids i-1234567890abcdef0

# ECS (Elastic Container Service):
$ aws ecs list-clusters
$ aws ecs list-services --cluster my-cluster
$ aws ecs describe-tasks --cluster my-cluster --tasks task-id

# ECR (Elastic Container Registry):
$ aws ecr create-repository --repository-name my-app
$ aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.region.amazonaws.com
$ docker tag my-app:latest 123456789012.dkr.ecr.region.amazonaws.com/my-app:latest
$ docker push 123456789012.dkr.ecr.region.amazonaws.com/my-app:latest

# Lambda functions:
$ aws lambda list-functions
$ aws lambda invoke --function-name my-function response.json

Networking & Storage

# VPC and networking:
$ aws ec2 describe-vpcs
$ aws ec2 describe-subnets
$ aws ec2 describe-security-groups
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-12345678 \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0

# S3 operations:
$ aws s3 mb s3://my-new-bucket
$ aws s3 cp file.txt s3://my-bucket/
$ aws s3 sync ./dist s3://my-bucket/
$ aws s3 ls s3://my-bucket --recursive
$ aws s3 rm s3://my-bucket/file.txt

# RDS databases:
$ aws rds describe-db-instances
$ aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password secret123

# CloudWatch logs:
$ aws logs describe-log-groups
$ aws logs tail /aws/lambda/my-function --follow

Google Cloud CLI Deep Dive

Compute & Container Services

# Compute Engine:
$ gcloud compute instances list
$ gcloud compute instances create my-instance \
    --zone us-central1-a \
    --machine-type e2-micro \
    --image-family debian-11 \
    --image-project debian-cloud
$ gcloud compute instances start my-instance --zone us-central1-a
$ gcloud compute instances stop my-instance --zone us-central1-a

# GKE (Google Kubernetes Engine):
$ gcloud container clusters list
$ gcloud container clusters get-credentials my-cluster --zone us-central1-a
$ gcloud container clusters create my-cluster \
    --num-nodes 3 \
    --zone us-central1-a \
    --machine-type e2-medium

# Cloud Run:
$ gcloud run deploy my-service \
    --image gcr.io/my-project/my-app \
    --platform managed \
    --region us-central1

# Cloud Functions:
$ gcloud functions deploy my-function \
    --runtime python39 \
    --trigger-http \
    --entry-point hello_world

Storage & Networking

# Cloud Storage:
$ gsutil ls
$ gsutil mb gs://my-new-bucket
$ gsutil cp file.txt gs://my-bucket/
$ gsutil rsync -r ./dist gs://my-bucket/

# Cloud SQL:
$ gcloud sql instances list
$ gcloud sql instances create my-instance \
    --database-version POSTGRES_13 \
    --tier db-f1-micro \
    --region us-central1

# Networking:
$ gcloud compute networks list
$ gcloud compute firewall-rules list
$ gcloud compute firewall-rules create allow-http \
    --allow tcp:80 \
    --target-tags http-server

# IAM & Security:
$ gcloud iam service-accounts list
$ gcloud iam service-accounts create my-sa \
    --display-name "My Service Account"
$ gcloud projects add-iam-policy-binding my-project \
    --member serviceAccount:my-sa@my-project.iam.gserviceaccount.com \
    --role roles/storage.admin

# Container Registry:
$ gcloud auth configure-docker
$ docker tag my-app:latest gcr.io/my-project/my-app:latest
$ docker push gcr.io/my-project/my-app:latest

Azure CLI Deep Dive

Compute & Container Services

# Virtual Machines:
$ az vm list
$ az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image UbuntuLTS \
    --admin-username azureuser \
    --generate-ssh-keys
$ az vm start --resource-group myResourceGroup --name myVM
$ az vm stop --resource-group myResourceGroup --name myVM

# AKS (Azure Kubernetes Service):
$ az aks list
$ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
$ az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3 \
    --generate-ssh-keys

# Container Instances:
$ az container list
$ az container create \
    --resource-group myResourceGroup \
    --name mycontainer \
    --image mcr.microsoft.com/azuredocs/aci-helloworld \
    --ports 80

# App Service:
$ az webapp list
$ az webapp create \
    --resource-group myResourceGroup \
    --plan myAppServicePlan \
    --name my-unique-app-name \
    --runtime "PYTHON|3.9"

Storage & Database

# Storage Accounts:
$ az storage account list
$ az storage account create \
    --name mystorageaccount \
    --resource-group myResourceGroup \
    --location eastus \
    --sku Standard_LRS

# Blob Storage:
$ az storage container list --account-name mystorageaccount
$ az storage blob upload \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name blob.txt \
    --file ./blob.txt

# SQL Database:
$ az sql server list
$ az sql db list \
    --resource-group myResourceGroup \
    --server myserver

# Cosmos DB:
$ az cosmosdb list
$ az cosmosdb sql database create \
    --account-name mycosmosaccount \
    --name myDatabase \
    --resource-group myResourceGroup

# Container Registry:
$ az acr list
$ az acr create \
    --resource-group myResourceGroup \
    --name myregistry \
    --sku Basic
$ az acr login --name myregistry
$ docker tag my-app:latest myregistry.azurecr.io/my-app:latest
$ docker push myregistry.azurecr.io/my-app:latest

Multi-Cloud Container Deployment Script

#!/bin/bash
# multi_cloud_deploy.sh - Deploy a container to multiple clouds

deploy_aws() {
    local image="$1"
    local tag="$2"
    echo "Deploying to AWS ECR..."

    # Login to ECR
    aws ecr get-login-password --region us-east-1 | \
        docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

    # Tag and push
    docker tag "$image:$tag" 123456789012.dkr.ecr.us-east-1.amazonaws.com/"$image:$tag"
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/"$image:$tag"

    # Update ECS service
    aws ecs update-service \
        --cluster my-cluster \
        --service my-service \
        --force-new-deployment
}

deploy_gcp() {
    local image="$1"
    local tag="$2"
    echo "Deploying to Google Cloud..."

    # Configure Docker
    gcloud auth configure-docker

    # Tag and push
    docker tag "$image:$tag" gcr.io/my-project/"$image:$tag"
    docker push gcr.io/my-project/"$image:$tag"

    # Deploy to Cloud Run
    gcloud run deploy my-service \
        --image gcr.io/my-project/"$image:$tag" \
        --platform managed \
        --region us-central1
}

deploy_azure() {
    local image="$1"
    local tag="$2"
    echo "Deploying to Azure..."

    # Login to ACR
    az acr login --name myregistry

    # Tag and push
    docker tag "$image:$tag" myregistry.azurecr.io/"$image:$tag"
    docker push myregistry.azurecr.io/"$image:$tag"

    # Update AKS deployment
    kubectl set image deployment/my-app my-app=myregistry.azurecr.io/"$image:$tag"
}

main() {
    local image="$1"
    local tag="${2:-latest}"
    local clouds="${3:-aws,gcp,azure}"

    # Build image
    docker build -t "$image:$tag" .

    IFS=',' read -ra CLOUD_ARRAY <<< "$clouds"
    for cloud in "${CLOUD_ARRAY[@]}"; do
        case "$cloud" in
            aws)   deploy_aws "$image" "$tag" ;;
            gcp)   deploy_gcp "$image" "$tag" ;;
            azure) deploy_azure "$image" "$tag" ;;
            *)     echo "Unknown cloud provider: $cloud" ;;
        esac
    done
}

main "$@"

Container & Cloud Command Reference

Linux Container Fundamentals

$ unshare --pid --fork --mount-proc bash
$ ip netns add mynetns
$ nsenter --target [PID] --net
$ cgcreate -g cpu:/limited
$ systemd-cgls

Docker Essentials

$ docker build -t myapp:latest .
$ docker run -d -p 80:80 nginx:alpine
$ docker exec -it container bash
$ docker-compose up -d
$ docker system prune -a

Container Networking

$ docker network create mynet
$ docker network ls
$ docker network inspect bridge
$ iptables -t nat -L -n
$ brctl show docker0

Systemd in Containers

$ docker run --privileged --tmpfs /tmp --tmpfs /run
$ systemctl status
$ journalctl -f
$ systemd-analyze blame
$ podman run --systemd=always

AWS CLI

$ aws configure
$ aws ec2 describe-instances
$ aws s3 ls
$ aws ecr get-login-password
$ aws lambda invoke

Google Cloud CLI

$ gcloud init
$ gcloud compute instances list
$ gcloud container clusters get-credentials
$ gsutil ls
$ gcloud auth configure-docker

Azure CLI

$ az login
$ az vm list
$ az aks get-credentials
$ az storage account list
$ az acr login

Container & Cloud Best Practices

Security

  • Always use non-root users in containers
  • Scan images for vulnerabilities regularly
  • Limit container capabilities (--cap-drop); a combined run-flag sketch follows this list
  • Use read-only filesystems where possible
  • Implement network policies
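As a rough illustration, the first four points combine into run flags like the following (a sketch with a placeholder image, not a hardened baseline; keep only the capabilities the application actually needs):

# Run unprivileged, drop all capabilities, add back only what is needed,
# and keep the root filesystem read-only with a writable tmpfs for /tmp
$ docker run -d \
    --name hardened-app \
    --user 1000:1000 \
    --cap-drop ALL \
    --cap-add NET_BIND_SERVICE \
    --read-only \
    --tmpfs /tmp \
    --security-opt no-new-privileges \
    myapp:latest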

Performance

  • Use multi-stage builds for smaller images
  • Implement resource limits (CPU, memory); see the sketch after this list
  • Use .dockerignore to exclude unnecessary files
  • Leverage build cache effectively
  • Monitor container resource usage
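The resource-limit and caching points translate into flags like these (values are illustrative only):

# Run with explicit CPU, memory, and process limits so one container cannot starve the host
$ docker run -d \
    --name limited-app \
    --cpus 1.5 \
    --memory 512m \
    --memory-swap 512m \
    --pids-limit 200 \
    nginx:alpine

# Reuse layers from a previously built image to speed up CI builds
$ docker build --cache-from myapp:latest -t myapp:new .

# Watch what the limits actually cost at runtime
$ docker stats --no-stream limited-app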

Operations

  • Use infrastructure as code (Terraform, CloudFormation)
  • Implement proper logging and monitoring
  • Use container orchestration (Kubernetes, ECS)
  • Implement CI/CD pipelines
  • Regularly update base images
≈100ms - Container Startup: optimized containers start in milliseconds.

10-50MB - Image Size: Alpine-based images are extremely small.

95%+ - Resource Efficiency: better utilization than VMs.

Getting Started Checklist

1. Learn Linux Fundamentals - Master namespaces, cgroups, and basic Linux commands
2. Install Docker - Set up Docker on your local machine and learn basic commands
3. Create Simple Containers - Build and run basic applications in containers
4. Learn Container Networking - Understand bridge, host, and overlay networks
5. Choose a Cloud Provider - Pick AWS, GCP, or Azure and learn their CLI tools
6. Deploy to Cloud - Push containers to cloud registries and deploy
