Linux for Containers & Cloud: The Modern Infrastructure Guide
Master the Linux foundations that power containers and cloud computing in today's DevOps landscape.
📅 Published: Feb 2026
⏱️ Estimated Reading Time: 20 minutes
🏷️ Tags: Containers, Docker, Kubernetes, Cloud Computing, Linux Internals, DevOps
📦 Namespaces & Cgroups Basics: The Building Blocks of Containers
What are Containers Really?
Containers are often described as "lightweight virtual machines," but they're really just isolated processes on your Linux system. The magic happens through two Linux kernel features: namespaces and control groups (cgroups).
Simple analogy:
Physical Server = Apartment building
Virtual Machine = Individual apartment with its own kitchen, bathroom, walls
Container = Just your bedroom (shares kitchen/bathroom but has your own space)
Namespaces: Process Isolation
Namespaces create separate views of system resources for different processes. It's like giving each process its own "personal reality" of the system.
Types of namespaces:
PID namespace = Private process ID space (can't see others' processes)
Network namespace = Private network stack (own IP, ports, routing)
Mount namespace = Private filesystem view
UTS namespace = Private hostname and domain name
IPC namespace = Private inter-process communication
User namespace = Private user/group IDs
Cgroup namespace = Private cgroup hierarchy
```bash
# Check the namespaces of a process
ls -la /proc/$$/ns/   # $$ = current process ID
# Output shows namespace IDs:
# cgroup -> cgroup:[4026531835]
# ipc    -> ipc:[4026531839]
# mnt    -> mnt:[4026531840]
# net    -> net:[4026531957]
# pid    -> pid:[4026531836]
# pid_for_children -> pid:[4026531836]
# user   -> user:[4026531837]
# uts    -> uts:[4026531838]

# Create a new namespace (requires root unless combined with a user namespace)
sudo unshare --fork --pid --mount-proc bash
# Now you're in a new PID namespace
ps aux   # Only shows processes in this namespace

# Create a network namespace
sudo ip netns add mynet
sudo ip netns exec mynet ip addr show   # Shows an empty network stack in the namespace
```
Control Groups (cgroups): Resource Limits
While namespaces provide isolation, cgroups provide resource limiting. Think of cgroups as "resource quotas" for processes.
What cgroups control:
CPU usage (how much processor time)
Memory (how much RAM)
I/O (disk read/write limits)
Network (bandwidth limits)
Processes (how many can be created)
```bash
# Check if cgroups are mounted
mount | grep cgroup
# Usually mounted at /sys/fs/cgroup/

# View the cgroup hierarchy (cgroup v1 layout; file names differ on cgroup v2 systems)
ls /sys/fs/cgroup/
# Common controllers:
# cpu, cpuacct - CPU time accounting
# memory       - Memory limits
# blkio        - Block I/O limits
# devices      - Device access control
# freezer      - Pause/resume processes
# net_cls      - Network classification
# pids         - Process number limits

# Create a cgroup for CPU limiting
sudo mkdir /sys/fs/cgroup/cpu/myapp
echo "10000" | sudo tee /sys/fs/cgroup/cpu/myapp/cpu.cfs_quota_us
# 10000us quota per 100000us default period (cpu.cfs_period_us) = 10% of one CPU

# Add a process to the cgroup
echo $$ | sudo tee /sys/fs/cgroup/cpu/myapp/cgroup.procs
# Now this shell is limited to 10% CPU

# Create a memory-limit cgroup
sudo mkdir /sys/fs/cgroup/memory/myapp
echo "100M" | sudo tee /sys/fs/cgroup/memory/myapp/memory.limit_in_bytes

# Clean up
sudo rmdir /sys/fs/cgroup/cpu/myapp
sudo rmdir /sys/fs/cgroup/memory/myapp
```
Putting It Together: A Simple Container
```bash
#!/bin/bash
# simple-container.sh

# Create container directory
CONTAINER_ID=$(uuidgen | cut -d'-' -f1)
CONTAINER_DIR="/tmp/container-$CONTAINER_ID"
mkdir -p "$CONTAINER_DIR"

# Create container filesystem (using busybox as an example)
# Download busybox if not available
if ! command -v busybox &> /dev/null; then
    curl -L https://busybox.net/downloads/binaries/1.31.0-defconfig-multiarch-musl/busybox-x86_64 -o /tmp/busybox
    chmod +x /tmp/busybox
    BUSYBOX=/tmp/busybox
else
    BUSYBOX=$(which busybox)
fi

# Create a minimal filesystem
mkdir -p "$CONTAINER_DIR"/{bin,proc,dev,etc}
cp "$BUSYBOX" "$CONTAINER_DIR/bin/busybox"
for cmd in sh ls ps; do
    ln -s /bin/busybox "$CONTAINER_DIR/bin/$cmd"
done

# Create a passwd file
echo "root:x:0:0:root:/root:/bin/sh" > "$CONTAINER_DIR/etc/passwd"

# Create the container using unshare
sudo unshare \
    --fork \
    --pid \
    --mount \
    --uts \
    --ipc \
    --net \
    --mount-proc="$CONTAINER_DIR/proc" \
    chroot "$CONTAINER_DIR" /bin/sh
```
What this script does:
Creates an isolated filesystem
Uses unshare to create namespaces
Uses chroot to change the root directory
Runs a shell in the container
This is essentially what Docker does (but Docker adds layers, networking, images, etc.).
🐳 Docker on Linux (Installation & Basics)
What is Docker?
Docker is a container platform that makes it easy to create, deploy, and run applications in containers. It's like a package manager for applications and their dependencies.
Docker vs Virtual Machines:
```
Virtual Machine:          Docker Container:
+-----------------+       +-----------------+
|      App A      |       |      App A      |
|    Bins/Libs    |       |    Bins/Libs    |
|    Guest OS     |       +-----------------+
|   Hypervisor    |       |     Docker      |
|     Host OS     |       |     Host OS     |
|    Hardware     |       |    Hardware     |
+-----------------+       +-----------------+
```
Installing Docker on Linux
Ubuntu/Debian:
```bash
# 1. Remove old versions
sudo apt remove docker docker-engine docker.io containerd runc

# 2. Install prerequisites
sudo apt update
sudo apt install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

# 3. Add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# 4. Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# 5. Install Docker Engine
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin

# 6. Verify the installation
sudo docker run hello-world
```
CentOS/RHEL:
```bash
# 1. Remove old versions
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine

# 2. Set up the repository
sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# 3. Install Docker
sudo yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin

# 4. Start Docker
sudo systemctl start docker
sudo systemctl enable docker

# 5. Verify
sudo docker run hello-world
```
Post-Installation Setup
```bash
# Add your user to the docker group (so you don't need sudo)
sudo usermod -aG docker $USER
# Log out and back in for the change to take effect

# Configure Docker to start on boot
sudo systemctl enable docker.service
sudo systemctl enable containerd.service

# Configure the Docker daemon (optional)
sudo mkdir -p /etc/docker
sudo nano /etc/docker/daemon.json
```
/etc/docker/daemon.json example:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "data-root": "/var/lib/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "iptables": true,
  "live-restore": true
}
```
```bash
# Restart Docker after configuration changes
sudo systemctl restart docker
```
Basic Docker Commands
```bash
# Check Docker version
docker version
docker info

# Run a container
docker run hello-world
docker run -it ubuntu bash   # Interactive terminal
docker run -d nginx          # Run in background (detached)

# List containers
docker ps      # Running containers
docker ps -a   # All containers (including stopped)
docker ps -q   # Just container IDs

# Stop/start containers
docker stop container_name
docker start container_name
docker restart container_name

# Remove containers
docker rm container_name
docker rm $(docker ps -aq)   # Remove all stopped containers
docker container prune       # Remove all stopped containers

# View logs
docker logs container_name
docker logs -f container_name   # Follow logs (like tail -f)

# Execute commands in a running container
docker exec -it container_name bash
docker exec container_name ls -la

# Inspect a container
docker inspect container_name
docker inspect --format='{{.NetworkSettings.IPAddress}}' container_name

# Resource usage
docker stats               # Live resource usage
docker stats --no-stream   # One-time snapshot
```
Working with Docker Images
```bash
# List images
docker images
docker image ls

# Pull images from Docker Hub
docker pull ubuntu:latest
docker pull nginx:alpine
docker pull python:3.9-slim

# Search for images
docker search nginx
docker search --filter "is-official=true" python

# Remove images
docker rmi image_name
docker image prune      # Remove dangling (untagged) images
docker image prune -a   # Remove all unused images

# Build an image from a Dockerfile
docker build -t myapp:latest .

# Tag an image
docker tag myapp:latest myregistry.com/myapp:v1.0

# Push to a registry
docker push myregistry.com/myapp:v1.0

# Save/load images (for backup/transfer)
docker save myapp:latest > myapp.tar
docker load < myapp.tar
```
Docker Networking Basics
```bash
# List networks
docker network ls

# Inspect a network
docker network inspect bridge

# Create a custom network
docker network create mynetwork
docker network create --subnet=172.20.0.0/16 mynetwork

# Run a container on a specific network
docker run --network=mynetwork nginx

# Connect/disconnect containers from networks
docker network connect mynetwork container_name
docker network disconnect mynetwork container_name

# Port mapping (host:container)
docker run -p 8080:80 nginx            # Map host 8080 -> container 80
docker run -p 80:80 -p 443:443 nginx   # Multiple ports

# Remove networks
docker network rm mynetwork
docker network prune   # Remove unused networks
```
Docker Volumes: Persistent Storage
```bash
# List volumes
docker volume ls

# Create a volume
docker volume create mydata

# Inspect a volume
docker volume inspect mydata

# Use a volume in a container
docker run -v mydata:/data ubuntu
# or (mount syntax)
docker run --mount source=mydata,target=/data ubuntu

# Bind mount (host directory -> container)
docker run -v /host/path:/container/path ubuntu
docker run --mount type=bind,source=/host/path,target=/container/path ubuntu

# Remove volumes
docker volume rm mydata
docker volume prune   # Remove unused volumes

# Back up a volume
docker run --rm -v mydata:/data -v $(pwd):/backup ubuntu \
    tar czf /backup/backup.tar.gz -C /data .
```
Docker Compose: Multi-Container Applications
docker-compose.yml example:
```yaml
version: '3.8'

services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html
    depends_on:
      - app
    networks:
      - app-network

  app:
    build: ./app
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    volumes:
      - ./app:/app
    networks:
      - app-network

  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: mydb
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
```
```bash
# Docker Compose commands
docker-compose up       # Start all services
docker-compose up -d    # Start in the background
docker-compose down     # Stop and remove
docker-compose ps       # List services
docker-compose logs     # View logs
docker-compose logs -f  # Follow logs
docker-compose exec service bash   # Execute a command in a service
docker-compose build    # Build images
docker-compose pull     # Pull images
```
🌐 Linux Networking for Containers
Understanding Container Networking Models
```bash
# Docker networking modes
docker run --network=bridge nginx   # Default (isolated bridge network)
docker run --network=host nginx     # Shares the host network (fastest)
docker run --network=none nginx     # No network (fully isolated)
docker run --network=container:other_container nginx   # Share another container's network
```
Bridge Networking (Default)
```bash
# Default bridge network
docker network inspect bridge

# Create a custom bridge
docker network create --driver bridge my-bridge
docker network create --driver bridge \
    --subnet=172.20.0.0/16 \
    --gateway=172.20.0.1 \
    my-bridge

# Containers on the same custom bridge can reach each other by name
docker run -d --network=my-bridge --name=container1 nginx
docker run --network=my-bridge --name=container2 alpine ping container1

# Port publishing
docker run -p 8080:80 --name web nginx
# Internally: an iptables rule forwards host:8080 -> container:80
```
Network Namespace Inspection
```bash
# Find a container's network namespace
CONTAINER_PID=$(docker inspect -f '{{.State.Pid}}' container_name)
sudo ls -la /proc/$CONTAINER_PID/ns/

# Enter the container's network namespace
sudo nsenter -t $CONTAINER_PID -n ip addr show
sudo nsenter -t $CONTAINER_PID -n netstat -tulpn
sudo nsenter -t $CONTAINER_PID -n ping 8.8.8.8

# Create a veth pair (virtual ethernet)
sudo ip link add veth0 type veth peer name veth1

# Connect the container to a custom network manually
sudo ip link set veth1 netns $CONTAINER_PID
sudo nsenter -t $CONTAINER_PID -n ip link set veth1 name eth1
sudo nsenter -t $CONTAINER_PID -n ip addr add 192.168.1.2/24 dev eth1
sudo nsenter -t $CONTAINER_PID -n ip link set eth1 up
```
Network Troubleshooting
```bash
# Check a container's IP
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name

# Test connectivity between containers
docker exec container1 ping container2_ip
docker exec container1 curl http://container2_ip:80

# Check iptables rules (Docker manages these)
sudo iptables -L -n -v
sudo iptables -t nat -L -n -v   # NAT rules for port forwarding

# Check DNS resolution in a container
docker exec container cat /etc/resolv.conf
docker exec container nslookup google.com

# Network statistics
docker exec container netstat -s
docker exec container ss -tulpn
```
Advanced Container Networking
```bash
# Macvlan network (containers get a real IP on the physical network)
docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o parent=eth0 \
    my-macvlan

# IPv6 support
docker network create --ipv6 \
    --subnet=2001:db8::/64 \
    --gateway=2001:db8::1 \
    my-ipv6-network

# Overlay network (for Docker Swarm across multiple hosts)
docker network create -d overlay my-overlay

# Custom DNS
docker run --dns=8.8.8.8 --dns=8.8.4.4 nginx
docker run --dns-search=mydomain.local nginx

# Hostname and domain
docker run --hostname=mycontainer --domainname=mydomain.local nginx
```
⚙️ Systemd inside Containers
Should You Run systemd in Containers?
Traditional wisdom: a container should run one process.
Modern practice: some applications need systemd (especially legacy apps).
Running systemd in Docker
```bash
# Dockerfile for a systemd container
cat > Dockerfile << 'EOF'
FROM ubuntu:22.04

# Install systemd
RUN apt update && apt install -y systemd systemd-sysv

# Mask services that make no sense inside a container
RUN systemctl mask \
    systemd-remount-fs.service \
    dev-hugepages.mount \
    sys-fs-fuse-connections.mount \
    systemd-logind.service

# Create init script
RUN echo '#!/bin/bash' > /usr/local/bin/init.sh && \
    echo 'exec /lib/systemd/systemd' >> /usr/local/bin/init.sh && \
    chmod +x /usr/local/bin/init.sh

# Set the entrypoint
ENTRYPOINT ["/usr/local/bin/init.sh"]
EOF

# Build and run
docker build -t systemd-container .
docker run -d \
    --name systemd-test \
    --tmpfs /tmp \
    --tmpfs /run \
    --tmpfs /run/lock \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    --cap-add SYS_ADMIN \
    systemd-container

# Enter the container and check systemd
docker exec -it systemd-test systemctl status
```
Best Practices for systemd Containers
```bash
# 1. Use --tmpfs for volatile directories
docker run --tmpfs /tmp --tmpfs /run --tmpfs /run/lock ...

# 2. Mount cgroups read-only
docker run -v /sys/fs/cgroup:/sys/fs/cgroup:ro ...

# 3. Add only the capabilities you need
docker run --cap-add SYS_ADMIN --cap-add SYS_PTRACE ...

# 4. If you don't need full systemd, a minimal init (tini or dumb-init) as PID 1 is enough
# Dockerfile:
# ENTRYPOINT ["/sbin/dumb-init", "--", "myapp"]

# 5. Keep the container minimal
# Only install systemd and the services you actually need
```
systemd-nspawn: Container Native to systemd
```bash
# systemd-nspawn is systemd's native container tool

# Create a container root filesystem
sudo debootstrap focal /var/lib/machines/ubuntu-focal

# Boot the container
sudo systemd-nspawn -D /var/lib/machines/ubuntu-focal

# Boot with networking
sudo systemd-nspawn -D /var/lib/machines/ubuntu-focal --network-bridge=br0

# Run a specific command instead of booting
sudo systemd-nspawn -D /var/lib/machines/ubuntu-focal /bin/bash

# Manage with machinectl
sudo machinectl list
sudo machinectl start ubuntu-focal
sudo machinectl login ubuntu-focal
sudo machinectl poweroff ubuntu-focal
```
Integrating Containers with Host systemd
```bash
# Create a systemd service to manage a Docker container
sudo nano /etc/systemd/system/myapp-container.service
```
/etc/systemd/system/myapp-container.service:
```ini
[Unit]
Description=MyApp Docker Container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop myapp
ExecStartPre=-/usr/bin/docker rm myapp
ExecStartPre=/usr/bin/docker pull myapp:latest
ExecStart=/usr/bin/docker run --name myapp \
    -p 80:80 \
    -v /data:/app/data \
    myapp:latest
ExecStop=/usr/bin/docker stop myapp
ExecStopPost=-/usr/bin/docker rm myapp

[Install]
WantedBy=multi-user.target
```
```bash
# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable myapp-container
sudo systemctl start myapp-container
sudo systemctl status myapp-container

# View logs
sudo journalctl -u myapp-container -f
```
☁️ Cloud CLI Tools (AWS CLI, gcloud, az)
AWS CLI: Amazon Web Services
Installation:
```bash
# Ubuntu/Debian
sudo apt update
sudo apt install awscli

# Or using pip
pip3 install awscli --upgrade --user

# Or using the official bundle (AWS CLI v2)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Configure
aws configure
# Enter: AWS Access Key ID, Secret Access Key, Region, Output format

# Multiple profiles
aws configure --profile dev
aws configure --profile prod
```
Essential Commands:
```bash
# EC2 (Virtual Machines)
aws ec2 describe-instances
aws ec2 run-instances --image-id ami-12345 --instance-type t2.micro --count 1
aws ec2 start-instances --instance-ids i-12345
aws ec2 stop-instances --instance-ids i-12345
aws ec2 terminate-instances --instance-ids i-12345

# S3 (Object Storage)
aws s3 ls
aws s3 mb s3://my-bucket
aws s3 cp file.txt s3://my-bucket/
aws s3 sync ./local-folder s3://my-bucket/folder/
aws s3 presign s3://my-bucket/file.txt --expires-in 3600

# IAM (Identity and Access Management)
aws iam list-users
aws iam create-user --user-name newuser
aws iam create-access-key --user-name newuser
aws iam attach-user-policy --user-name newuser --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# CloudWatch (Monitoring)
aws cloudwatch list-metrics
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-12345 \
    --start-time 2024-02-10T00:00:00Z \
    --end-time 2024-02-10T23:59:59Z \
    --period 3600 \
    --statistics Average

# Lambda (Serverless Functions)
aws lambda list-functions
aws lambda invoke --function-name my-function output.txt
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip

# ECS/EKS (Containers)
aws ecs list-clusters
aws ecs list-services --cluster my-cluster
aws eks list-clusters
aws eks describe-cluster --name my-cluster
```
Advanced Usage:
```bash
# Using profiles
export AWS_PROFILE=prod
aws ec2 describe-instances

# Assume a role
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/AdminRole \
    --role-session-name "AdminSession"

# With temporary credentials
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...

# JMESPath queries (filter output)
aws ec2 describe-instances --query 'Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,State:State.Name}'
aws ec2 describe-instances --query 'Reservations[].Instances[?State.Name==`running`].InstanceId'

# Output formats
aws ec2 describe-instances --output json
aws ec2 describe-instances --output text
aws ec2 describe-instances --output table
aws ec2 describe-instances --output yaml

# Pagination for large result sets
aws ec2 describe-instances --max-items 10
aws s3api list-objects-v2 --bucket my-bucket --page-size 100

# Summarize an existing CloudFormation stack's template
aws cloudformation get-template-summary --stack-name my-stack
```
Google Cloud CLI (gcloud)
Installation:
```bash
# Add the Google Cloud SDK repository
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt update
sudo apt install google-cloud-sdk

# Initialize
gcloud init
gcloud auth login
gcloud config set project my-project-id
```
Essential Commands:
```bash
# Compute Engine (VMs)
gcloud compute instances list
gcloud compute instances create my-instance --zone=us-central1-a --machine-type=e2-micro
gcloud compute instances start my-instance --zone=us-central1-a
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute ssh my-instance --zone=us-central1-a

# Kubernetes (GKE)
gcloud container clusters list
gcloud container clusters create my-cluster --num-nodes=3 --zone=us-central1-a
gcloud container clusters get-credentials my-cluster --zone=us-central1-a
kubectl get nodes   # After getting credentials

# Cloud Storage
gsutil ls
gsutil mb gs://my-bucket
gsutil cp file.txt gs://my-bucket/
gsutil rsync -r ./local-folder gs://my-bucket/folder/

# IAM
gcloud iam service-accounts list
gcloud iam service-accounts create my-sa --display-name="My Service Account"
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.admin"

# Cloud Functions
gcloud functions list
gcloud functions deploy my-function --runtime=python39 --trigger-http --allow-unauthenticated
gcloud functions call my-function --data='{"message":"hello"}'

# App Engine
gcloud app deploy app.yaml
gcloud app browse
gcloud app logs tail -s default
```
Azure CLI (az)
Installation:
```bash
# Ubuntu/Debian (convenience script)
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Or using apt
sudo apt update
sudo apt install azure-cli

# Initialize
az login
az account set --subscription "My Subscription"
```
Essential Commands:
```bash
# Virtual Machines
az vm list
az vm create \
    --resource-group myRG \
    --name myVM \
    --image UbuntuLTS \
    --admin-username azureuser \
    --generate-ssh-keys
az vm start --resource-group myRG --name myVM
az vm stop --resource-group myRG --name myVM
az vm deallocate --resource-group myRG --name myVM

# Resource Groups
az group list
az group create --name myRG --location eastus
az group delete --name myRG --yes

# AKS (Kubernetes)
az aks list
az aks create \
    --resource-group myRG \
    --name myAKS \
    --node-count 3 \
    --generate-ssh-keys
az aks get-credentials --resource-group myRG --name myAKS

# Storage
az storage account list
az storage account create \
    --name mystorageaccount \
    --resource-group myRG \
    --location eastus \
    --sku Standard_LRS
az storage container create \
    --account-name mystorageaccount \
    --name mycontainer
az storage blob upload \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name myfile.txt \
    --file ./myfile.txt

# Web Apps
az webapp list
az webapp create \
    --resource-group myRG \
    --plan myAppServicePlan \
    --name myUniqueAppName \
    --runtime "PYTHON|3.9"
az webapp deployment source config-zip \
    --resource-group myRG \
    --name myUniqueAppName \
    --src ./app.zip
```
Cloud CLI Common Patterns
```bash
# 1. Scripting deployments
#!/bin/bash
# deploy.sh
set -e

# AWS example
echo "Deploying to AWS..."
aws ec2 run-instances \
    --image-id ami-0c55b159cbfafe1f0 \
    --instance-type t2.micro \
    --key-name my-key \
    --security-group-ids sg-12345 \
    --subnet-id subnet-67890 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=web-server}]'

# 2. Infrastructure as Code with the CLI
#!/bin/bash
# infrastructure.sh

# Create a VPC
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query 'Vpc.VpcId' --output text)

# Create a subnet
SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.1.0/24 --query 'Subnet.SubnetId' --output text)

# Create a security group
SG_ID=$(aws ec2 create-security-group --group-name web-sg --description "Web Security Group" --vpc-id $VPC_ID --query 'GroupId' --output text)

# 3. Cross-cloud operations
#!/bin/bash
# multi-cloud-backup.sh

# Back up to AWS S3
aws s3 sync /data/backups/ s3://my-backup-bucket/$(date +%Y%m%d)/

# Also back up to Google Cloud Storage
gsutil -m rsync -r /data/backups/ gs://my-gcp-backup/$(date +%Y%m%d)/

# And Azure Blob Storage
az storage blob upload-batch \
    --account-name mystorage \
    --destination mycontainer \
    --source /data/backups/

# 4. Cloud monitoring script
#!/bin/bash
# cloud-monitor.sh
echo "=== Cloud Resource Monitoring ==="
echo "Time: $(date)"
echo
echo "AWS Resources:"
aws ec2 describe-instances --query 'Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,State:State.Name}' --output table
echo
echo "GCP Resources:"
gcloud compute instances list --format="table(name,zone,machine_type,status)"
echo
echo "Azure Resources:"
az vm list --query '[].{Name:name,Location:location,Size:hardwareProfile.vmSize,Status:powerState}' --output table

# 5. Cost checking
aws ce get-cost-and-usage \
    --time-period Start=2024-02-01,End=2024-02-10 \
    --granularity MONTHLY \
    --metrics "BlendedCost" "UnblendedCost" "UsageQuantity"

gcloud billing accounts list
```
Cloud Authentication Best Practices
```bash
# 1. Use IAM roles/service accounts, not root keys
#    AWS:   assume roles
#    GCP:   use service account keys
#    Azure: use managed identities

# 2. Store credentials securely
#    Use AWS SSO, gcloud auth application-default, az login

# 3. Rotate credentials regularly
#!/bin/bash
# rotate-keys.sh

# Create a new key
NEW_KEY=$(aws iam create-access-key --user-name my-user)

# Update applications with the new key
# (update environment variables/config files)

# Wait for applications to pick up the new key
sleep 60

# Delete the old key
aws iam delete-access-key --user-name my-user --access-key-id OLD_KEY_ID

# 4. Use credential helpers for containers
# In a Dockerfile:
# RUN apt-get update && apt-get install -y awscli
# RUN aws configure set credential_source Ec2InstanceMetadata

# 5. Environment-specific configurations
# ~/.aws/config
# [profile dev]
# region = us-east-1
# output = json
#
# [profile prod]
# region = us-west-2
# output = text
# role_arn = arn:aws:iam::123456789012:role/AdminRole
# source_profile = dev
```
📋 Quick Reference Cheat Sheet
| Tool/Concept | Command | Purpose |
|---|---|---|
| Namespaces | lsns, unshare | Process isolation |
| Cgroups | cgcreate, cgset | Resource limits |
| Docker run | docker run -d nginx | Start container |
| Docker build | docker build -t myapp . | Build image |
| Docker network | docker network create | Create network |
| Docker compose | docker-compose up | Multi-container |
| AWS CLI | aws ec2 describe-instances | AWS management |
| GCloud CLI | gcloud compute instances list | GCP management |
| Azure CLI | az vm list | Azure management |
| Cloud auth | aws configure, gcloud init, az login | Authentication |
| Container debug | docker exec -it container bash | Enter container |
| Port mapping | docker run -p 80:80 nginx | Expose ports |
| Volume mount | docker run -v /host:/container | Persistent storage |
🚀 Practice Exercises
Exercise 1: Build and Run a Custom Container
```bash
# 1. Create a Dockerfile
cat > Dockerfile << 'EOF'
FROM alpine:latest
RUN apk add --no-cache nodejs npm
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
EOF

# 2. Create a simple Node.js app
cat > app.js << 'EOF'
const http = require('http');
const server = http.createServer((req, res) => {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello from Container!\n');
});
server.listen(3000, () => {
  console.log('Server running on port 3000');
});
EOF

cat > package.json << 'EOF'
{"name":"myapp","version":"1.0.0"}
EOF

# 3. Build and run
docker build -t mynodeapp .
docker run -d -p 8080:3000 --name myapp mynodeapp
curl http://localhost:8080
```
Exercise 2: Multi-Container Application
```bash
# docker-compose.yml for WordPress
cat > docker-compose.yml << 'EOF'
version: '3.8'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html

volumes:
  db_data:
  wp_data:
EOF

# Start the stack
docker-compose up -d
# Access at http://localhost:8000
```
Exercise 3: Cloud Resource Script
 
```bash
#!/bin/bash
# cloud-resources-report.sh

echo "=== Cloud Resources Report ==="
echo "Generated: $(date)"
echo

# Check AWS
if command -v aws &> /dev/null; then
    echo "AWS Resources:"
    echo "EC2 Instances:"
    aws ec2 describe-instances --query 'Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,State:State.Name}' --output table 2>/dev/null || echo "  Not configured"
    echo
fi

# Check GCP
if command -v gcloud &> /dev/null; then
    echo "GCP Resources:"
    echo "Compute Instances:"
    gcloud compute instances list --format="table(name,zone,machine_type,status)" 2>/dev/null || echo "  Not configured"
    echo
fi

# Check Azure
if command -v az &> /dev/null; then
    echo "Azure Resources:"
    echo "Virtual Machines:"
    az vm list --query '[].{Name:name,Location:location,Size:hardwareProfile.vmSize,Status:powerState}' --output table 2>/dev/null || echo "  Not configured"
    echo
fi

echo "=== Report Complete ==="
```
Exercise 4: Container Resource Limits
```bash
# 1. Run a container with limits
#    --cpus="0.5"          -> 50% of one CPU core
#    --memory="100m"       -> 100MB memory limit
#    --memory-swap="200m"  -> 200MB total (RAM + swap)
#    --blkio-weight="100"  -> I/O weight (100-1000)
# (Comments can't follow a trailing backslash, so they're listed above.)
docker run -d \
    --name limited-container \
    --cpus="0.5" \
    --memory="100m" \
    --memory-swap="200m" \
    --blkio-weight="100" \
    alpine sleep 3600

# 2. Check the limits
docker inspect limited-container | grep -A5 -B5 "Cpu\|Memory\|Blkio"

# 3. Test the limits (requires the 'stress' tool inside the container)
docker exec limited-container stress --cpu 1 --timeout 30s

# Watch resource usage
docker stats limited-container

# 4. Update limits on a running container
docker update --cpus="1.0" --memory="200m" limited-container
```
🔗 Master Containers & Cloud with Hands-on Labs
Containers and cloud computing are essential modern skills. Practical experience is the best way to learn.
👉 Practice containerization, cloud operations, and Linux internals with real projects at:
https://devops.trainwithsky.com/
Our platform provides:
Real Docker and Kubernetes environments
Cloud provider sandboxes (AWS, GCP, Azure)
Container networking exercises
Production deployment scenarios
Expert-guided container security labs
Frequently Asked Questions
Q: Should I use Docker or Podman?
A: Docker is more established and has a larger ecosystem. Podman is daemonless and can run rootless. For production, evaluate both against your needs.
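Podman deliberately mirrors the Docker CLI, so most of the docker commands in this article work unchanged with it. A common (sketch) setup is to alias one to the other where Podman is installed:

```shell
# Sketch: Podman aims for CLI compatibility with Docker, so many teams
# simply alias it. Only takes effect where podman is actually installed.
if command -v podman >/dev/null 2>&1; then
  alias docker=podman
  podman --version
else
  echo "podman not installed; the Docker CLI examples above apply unchanged"
fi
```

Either way the container images themselves (OCI format) are interchangeable between the two tools.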
Q: Can containers see each other's processes?
A: By default, no (PID namespace isolation). But you can share namespaces if needed.
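You can observe this isolation directly from /proc, without root: two processes can see each other's processes only if their PID-namespace IDs match.

```shell
# Compare PID-namespace IDs via /proc (works unprivileged on modern Linux).
# Same ID  -> same namespace (e.g. two processes inside one container)
# Different -> isolated (e.g. your shell vs a containerized process)
ns_self=$(readlink /proc/$$/ns/pid)
ns_init=$(readlink /proc/1/ns/pid 2>/dev/null || echo "unreadable")
echo "this shell: $ns_self"
echo "PID 1:      $ns_init"
```

To deliberately share, Docker exposes flags like `--pid=host` or `--pid=container:<name>`.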
Q: How do I choose between AWS, GCP, and Azure?
A: AWS is the market leader with the broadest service catalog. GCP excels at data/ML. Azure integrates well with the Microsoft ecosystem. Many companies use more than one.
Q: Should containers run as root?
A: No! Always use non-root users: USER nobody in Dockerfile or docker run --user 1000.
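The non-root advice looks like this as a Dockerfile, written with the same heredoc style as the exercises above (the user name `app` and file name `Dockerfile.nonroot` are illustrative):

```shell
# Sketch of a non-root Dockerfile; 'app' / Dockerfile.nonroot are illustrative names.
cat > Dockerfile.nonroot << 'EOF'
FROM alpine:latest
# Create an unprivileged user and group
RUN addgroup -S app && adduser -S -G app app
WORKDIR /app
COPY --chown=app:app . .
# Every later instruction and the running container use 'app', not root
USER app
CMD ["id"]
EOF
grep '^USER' Dockerfile.nonroot
```

`USER` must come after any instruction that still needs root (package installs, chown), which is why it sits near the end.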
Q: How do I debug container networking issues?
A: Use docker exec container_name ip addr, docker exec container_name ping, check iptables rules, and inspect network namespace.
Q: What's the difference between Docker and containerd?
A: Docker uses containerd internally; containerd is the lower-level container runtime, which Kubernetes can also use directly.
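On a typical Docker host you can see this layering on disk: `dockerd` delegates container execution to `containerd`, which invokes `runc` to set up the namespaces and cgroups. A quick (sketch) check of which pieces are installed:

```shell
# Sketch: list which runtime-stack binaries exist on this machine.
# dockerd -> containerd -> runc is the usual delegation chain.
for bin in docker dockerd containerd runc; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: $(command -v "$bin")"
  else
    echo "$bin: not found"
  fi
done
```

On a Kubernetes node using containerd directly, you'd typically find `containerd` and `runc` but no `dockerd`.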
Q: How do I secure container images?
A: Scan for vulnerabilities (Trivy, Clair), sign images (Notary), use minimal base images, and regularly update.
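The scanning step fits naturally into a build script. A minimal sketch using Trivy (the wrapper function and image name are illustrative; the scan is skipped when trivy isn't installed):

```shell
# Sketch: scan an image for HIGH/CRITICAL vulnerabilities before shipping.
# Assumes 'trivy' is installed; degrades gracefully when it isn't.
scan_image() {
  img="$1"
  if command -v trivy >/dev/null 2>&1; then
    trivy image --severity HIGH,CRITICAL "$img"
  else
    echo "trivy not installed; skipping scan of $img"
  fi
}
scan_image "alpine:latest" || true
```

In CI you would typically add `--exit-code 1` so that findings fail the pipeline.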
Need help with containers, cloud, or Linux internals? Share your specific challenge in the comments below! 💬