Linux Interview & DevOps Scenarios

Complete Linux Interview & DevOps Practice Guide

Pro Tip: Practice these commands in a Linux VM or Docker container. Set up a lab environment to experiment safely.

Linux Interview Questions & Answers

1. What is the difference between hard links and symbolic links? (Easy)

# Create a hard link (shares the same inode number):
ln original.txt hardlink.txt

# Create a symbolic link (different inode, points to a path):
ln -s original.txt symlink.txt

# Verify with ls -li (first column is the inode number):
ls -li

Key Differences:

Hard Link                             | Symbolic Link
--------------------------------------|----------------------------------
Same inode number                     | Different inode number
Can't link directories                | Can link directories
Works only within same filesystem     | Can cross filesystems
If original deleted, link still works | If original deleted, link breaks
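
To see the last row of the table in action, here is a quick sandbox test (run in a throwaway directory; filenames are arbitrary):

# Demonstrate link behavior after the original is deleted:
cd "$(mktemp -d)"
echo "hello" > original.txt
ln original.txt hardlink.txt
ln -s original.txt symlink.txt
rm original.txt
cat hardlink.txt   # Still prints "hello": the data survives via the hard link
cat symlink.txt    # Fails: the symlink now points to a missing path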

2. Explain the Linux boot process (Medium)

  1. BIOS/UEFI: Hardware initialization, runs POST
  2. Bootloader (GRUB): Loads kernel and initramfs
  3. Kernel: Initializes hardware, mounts root filesystem
  4. Init Process: systemd (PID 1) starts services
  5. Runlevel/Target: multi-user.target (normal boot)
  6. Login: Display manager or terminal login
# Check boot time:
systemd-analyze

# See per-service startup times:
systemd-analyze blame

3. How to find which process is using a specific port? (Easy)

# Method 1: Using netstat
netstat -tulpn | grep :80

# Method 2: Using ss (modern replacement)
ss -tulpn | grep :80

# Method 3: Using lsof
lsof -i :80

# Method 4: Using fuser
fuser 80/tcp

4. What is swap space and when is it used? (Medium)

Answer: Swap is disk space used as overflow for RAM: when physical memory fills up, the kernel moves inactive pages out to swap. It can delay the OOM killer from terminating processes, but it does not prevent it entirely, and heavy swapping degrades performance.

# Check swap usage:
free -h
swapon --show

# Create a swap file:
fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Make it permanent in /etc/fstab:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
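
A related knob worth knowing: vm.swappiness controls how aggressively the kernel swaps (0-100; lower values prefer keeping pages in RAM). A quick sketch:

# Check and tune swappiness (requires root):
cat /proc/sys/vm/swappiness                               # Current value, often 60 by default
sudo sysctl vm.swappiness=10                              # Apply immediately
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf    # Persist across reboots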

5. Explain process states in Linux (Medium)

State                 | Code | Description
----------------------|------|------------------------------------
Running               | R    | Currently executing
Sleeping              | S    | Waiting for an event
Uninterruptible Sleep | D    | Waiting for I/O (can't be killed)
Stopped               | T    | Stopped by a signal (Ctrl+Z)
Zombie                | Z    | Terminated but parent hasn't reaped
# View process states (look at the STAT column):
ps aux

# Find problematic processes (D or Z state):
ps -eo pid,stat,comm | grep -E 'D|Z'
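
For practice, one way to create a harmless, short-lived zombie to observe: the parent replaces itself with a sleep that never calls wait(), so its exited child lingers in Z state.

# Spawn a child that exits while its parent never reaps it:
bash -c 'sleep 2 & exec sleep 20' &

# After ~2 seconds, the child shows up in Z state (cleared when sleep 20 exits):
ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'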

6. What are runlevels in Linux? (Medium)

Runlevel | Systemd Target    | Purpose
---------|-------------------|---------------------
0        | poweroff.target   | Shutdown
1        | rescue.target     | Single-user mode
3        | multi-user.target | Multi-user, no GUI
5        | graphical.target  | Multi-user with GUI
6        | reboot.target     | Reboot
# Check current runlevel/target:
runlevel
systemctl get-default

# Change runlevel/target:
init 3                               # Switch to runlevel 3
systemctl isolate multi-user.target  # systemd equivalent

7. Explain Linux file permissions in detail (Easy)

# r=read(4), w=write(2), x=execute(1)
# Example: chmod 755 = rwxr-xr-x
# Owner: rwx (7), Group: r-x (5), Others: r-x (5)

ls -la file.txt
# Output: -rwxr-xr-x 1 user group 1024 Jan 1 10:00 file.txt

# Change permissions:
chmod 644 file.txt              # rw-r--r--
chmod +x script.sh              # Add execute permission
chmod u=rwx,g=rx,o=r file.txt   # Symbolic form

# Change ownership:
chown user:group file.txt
chown -R user:group /dir        # Recursive
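
A common follow-up covers the special permission bits; here is a quick reference sketch (the paths are illustrative):

# Special bits: setuid=4, setgid=2, sticky=1 (prefixed to the normal mode)
chmod 4755 /usr/local/bin/tool   # setuid: executes with the file owner's privileges
chmod 2775 /shared/dir           # setgid: new files inherit the directory's group
chmod 1777 /tmp                  # sticky: only a file's owner may delete it
umask                            # Default mask subtracted from new files' permissions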

8. How does SSH key authentication work? (Medium)

Answer: SSH key authentication uses a public-private key pair: the public key is stored on the server (in ~/.ssh/authorized_keys) and the private key stays on the client. During login, the client proves possession of the private key without ever sending it over the network.

# Generate an SSH key pair:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

# Copy the public key to the server:
ssh-copy-id user@server_ip

# Test the connection:
ssh user@server_ip

# SSH config file (~/.ssh/config):
Host myserver
    HostName server_ip
    User username
    IdentityFile ~/.ssh/id_rsa
    Port 22
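
Once key login works, it is common to disable password authentication entirely; a minimal sketch (note the service unit is sshd on RHEL-family systems but ssh on Debian/Ubuntu):

# Debug a failing key login first:
ssh -v user@server_ip

# Then turn off password logins:
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload sshd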

Practical Scenarios & Solutions

Scenario 1: Server is slow - performance troubleshooting

Symptoms: High load average, slow response, timeouts.

# Step 1: Check load average (1, 5, 15 minutes):
uptime
# If load > number of CPU cores, the system is overloaded

# Step 2: Check CPU usage:
top
htop               # Better alternative
mpstat -P ALL 1 5  # Per-core statistics

# Step 3: Check memory:
free -h
vmstat 1 5
ps aux --sort=-%mem | head -10

# Step 4: Check disk I/O:
iostat -x 1 5
iotop -o           # Top I/O processes

# Step 5: Check network:
iftop -n
nethogs            # Per-process network usage

# Step 6: Check for too many processes:
ps aux | wc -l
pstree             # Process tree
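
These steps can be bundled into a one-screen snapshot; a minimal sketch (the script name and the 80% disk threshold are arbitrary choices):

#!/bin/bash
# quick_triage.sh - one-screen health snapshot for a slow server
echo "== Load =="; uptime
echo "== Top CPU consumers =="; ps aux --sort=-%cpu | head -6
echo "== Top memory consumers =="; ps aux --sort=-%mem | head -6
echo "== Partitions over 80% =="; df -h | awk 'NR==1 || $5+0 > 80'
echo "== Recent OOM kills =="; dmesg -T 2>/dev/null | grep -i 'killed process' | tail -5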

Scenario 2: Disk full - emergency cleanup

Symptoms: "No space left on device" errors.

# Step 1: Find which partition is full:
df -h
df -i                  # Check inode usage (exhausted inodes cause the same error)

# Step 2: Find large files/directories:
du -ahx / 2>/dev/null | sort -rh | head -20
ncdu /                 # Interactive disk usage analyzer

# Step 3: Check for deleted files still held open:
lsof | grep deleted
# Restart (or truncate, see below) the process holding deleted files

# Step 4: Clear package cache:
apt clean              # Debian/Ubuntu
yum clean all          # RHEL/CentOS
dnf clean all          # Fedora

# Step 5: Clear old logs:
journalctl --vacuum-size=200M
find /var/log -name "*.log" -mtime +30 -delete

# Step 6: Clear Docker resources:
docker system prune -a
docker volume prune

# Step 7: Clear temporary files (caution: verify running services don't depend on them):
rm -rf /tmp/*
rm -rf /var/tmp/*
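
When lsof shows a deleted-but-open file, the space can often be reclaimed without restarting anything by truncating it through /proc; the PID and file descriptor below are illustrative values taken from the lsof output:

# lsof gives the PID (e.g. 1234) and FD (e.g. 7w -> fd 7):
: > /proc/1234/fd/7    # Truncate the open file to zero bytes; space is freed immediately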

Scenario 3: Service won't start - debugging steps

# Step 1: Check service status:
systemctl status nginx
systemctl --failed     # All failed services

# Step 2: Check logs:
journalctl -u nginx --no-pager -n 100
journalctl -u nginx --since "1 hour ago"
tail -f /var/log/nginx/error.log

# Step 3: Test configuration:
nginx -t               # Nginx
apachectl configtest   # Apache
sshd -t                # SSH

# Step 4: Check dependencies:
systemctl list-dependencies nginx

# Step 5: Check ports in use:
ss -tulpn | grep :80
lsof -i :80

# Step 6: Check SELinux/AppArmor:
getenforce             # SELinux status
sestatus
aa-status              # AppArmor status

# Step 7: Check file permissions:
ls -la /var/www/
ls -Z /var/www/        # SELinux context
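
If the fix requires changing how the unit starts, prefer a drop-in override to editing the vendor unit file; a short sketch using nginx as the example service:

systemctl edit nginx      # Opens a drop-in override file in $EDITOR
systemctl cat nginx       # Shows the unit plus all active drop-ins
systemctl daemon-reload   # Reload unit definitions after manual edits
systemctl restart nginx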

Scenario 4: Network connectivity issues

# Step 1: Check basic connectivity:
ping 8.8.8.8           # Test internet reachability
ping gateway_ip        # Test local network
ping google.com        # Test DNS resolution

# Step 2: Check DNS:
nslookup google.com
dig google.com
cat /etc/resolv.conf

# Step 3: Check routing:
ip route
route -n
traceroute google.com
mtr google.com         # Continuous traceroute

# Step 4: Check network configuration:
ip addr show
ifconfig -a
cat /etc/netplan/*.yaml   # Ubuntu 18.04+

# Step 5: Check firewall:
iptables -L -n -v
ufw status verbose        # Ubuntu
firewall-cmd --list-all   # firewalld

# Step 6: Check services:
systemctl status NetworkManager
systemctl status networking
systemctl status systemd-networkd
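
If everything above looks right but a service is still unreachable, two checks that need no extra client tooling (the IP and port below are illustrative):

# Watch whether packets actually arrive (requires root):
tcpdump -i any -nn -c 20 'port 80'

# Probe a TCP port using bash's built-in /dev/tcp:
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/192.168.1.10/80' && echo open || echo closed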

DevOps Real-World Use Cases

Use Case 1: Complete CI/CD Pipeline

// Jenkinsfile pipeline example
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build') {
            steps { sh 'docker build -t myapp:$BUILD_NUMBER .' }
        }
        stage('Test') {
            steps {
                sh 'docker run myapp:$BUILD_NUMBER npm test'
                sh 'docker run myapp:$BUILD_NUMBER npm run lint'
            }
        }
        stage('Security Scan') {
            steps { sh 'trivy image myapp:$BUILD_NUMBER' }
        }
        stage('Deploy to Staging') {
            steps {
                sh 'docker tag myapp:$BUILD_NUMBER registry/staging:latest'
                sh 'docker push registry/staging:latest'
                sh 'kubectl set image deployment/myapp-staging myapp=registry/staging:latest'
            }
        }
        stage('Deploy to Production') {
            when { branch 'main' }
            steps {
                input message: 'Deploy to production?'
                sh 'docker tag myapp:$BUILD_NUMBER registry/production:latest'
                sh 'docker push registry/production:latest'
                sh 'kubectl set image deployment/myapp-prod myapp=registry/production:latest'
            }
        }
    }
    post {
        success {
            emailext(
                subject: "Build Successful: ${env.JOB_NAME}",
                body: "Build ${env.BUILD_NUMBER} was successful",
                to: 'team@example.com'
            )
        }
        failure {
            emailext(
                subject: "Build Failed: ${env.JOB_NAME}",
                body: "Build ${env.BUILD_NUMBER} failed",
                to: 'team@example.com'
            )
        }
    }
}
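
Declarative Jenkinsfiles can be linted before committing via the controller's validation endpoint; a sketch assuming JENKINS_URL points at your controller and user:api_token is a placeholder for real credentials:

curl -X POST --user user:api_token \
  -F "jenkinsfile=<Jenkinsfile" \
  "$JENKINS_URL/pipeline-model-converter/validate"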

Use Case 2: Infrastructure as Code with Terraform

# main.tf - Complete AWS infrastructure
provider "aws" {
  region = "us-east-1"
}

# VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  tags = { Name = "main-vpc" }
}

# Subnets
resource "aws_subnet" "public" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone = element(["us-east-1a", "us-east-1b"], count.index)
  tags = { Name = "public-subnet-${count.index}" }
}

# Security Group
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# EC2 Instances
resource "aws_instance" "web" {
  count                  = 2
  ami                    = "ami-0c55b159cbfafe1f0"
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.public[count.index].id
  vpc_security_group_ids = [aws_security_group.web.id]

  user_data = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y nginx
    systemctl start nginx
    systemctl enable nginx
  EOF

  tags = { Name = "web-server-${count.index}" }
}

# Load Balancer
resource "aws_lb" "web" {
  name               = "web-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web.id]
  subnets            = aws_subnet.public[*].id
}

# Outputs
output "load_balancer_dns" {
  value = aws_lb.web.dns_name
}
output "instance_ips" {
  value = aws_instance.web[*].public_ip
}
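
The standard workflow for applying a configuration like this, run from the directory containing main.tf:

terraform init               # Download providers and set up state
terraform fmt -check         # Verify canonical formatting
terraform validate           # Check syntax and internal consistency
terraform plan -out=tfplan   # Preview changes and save the plan
terraform apply tfplan       # Apply exactly the saved plan
terraform destroy            # Tear everything down when finished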

Use Case 3: Docker Compose for Development

# docker-compose.yml - Full stack application
version: '3.8'

services:
  # Database
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis Cache
  redis:
    image: redis:6-alpine
    command: redis-server --requirepass secret
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
    networks:
      - app-network

  # Application
  app:
    build: .
    environment:
      DATABASE_URL: postgres://admin:secret@db:5432/myapp
      REDIS_URL: redis://:secret@redis:6379
    volumes:
      - ./app:/app
      - /app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - app-network
    restart: unless-stopped

  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - app
    networks:
      - app-network

  # Monitoring
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    ports:
      - "9090:9090"
    networks:
      - app-network

  grafana:
    image: grafana/grafana
    environment:
      GF_SECURITY_ADMIN_PASSWORD: admin
    volumes:
      - grafana_data:/var/lib/grafana
    ports:
      - "3001:3000"
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  postgres_data:
  redis_data:
  prometheus_data:
  grafana_data:
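
Day-to-day commands for working with this stack (service names match the compose file above):

docker-compose up -d                          # Start everything in the background
docker-compose ps                             # Check service status and ports
docker-compose logs -f app                    # Follow one service's logs
docker-compose exec db psql -U admin myapp    # Open a psql session in the database container
docker-compose down -v                        # Stop and remove containers plus named volumes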

Troubleshooting Scenarios

1. Kubernetes Pod Issues

# Check pod status:
kubectl get pods
kubectl describe pod pod-name

# Check logs:
kubectl logs pod-name
kubectl logs pod-name -c container-name   # Multi-container pod
kubectl logs --previous pod-name          # Previous instance

# Debug pod:
kubectl exec -it pod-name -- sh
kubectl exec -it pod-name -c container-name -- sh

# Common issues:
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get pods -o wide   # See node allocation
kubectl get svc            # Check services

# Check resource limits:
kubectl describe pod pod-name | grep -A 10 Limits

# Check Persistent Volumes:
kubectl get pv
kubectl get pvc
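
Two additions worth knowing: ephemeral debug containers for minimal images that ship no shell (assuming a cluster recent enough to support kubectl debug), and rollout commands for backing out a bad deployment (the deployment name is illustrative):

# Attach a throwaway busybox container to a running pod:
kubectl debug -it pod-name --image=busybox --target=container-name

# Watch or roll back a deployment:
kubectl rollout status deployment/myapp
kubectl rollout undo deployment/myapp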

2. Docker Container Issues

# Check running containers:
docker ps
docker ps -a                  # All containers, including stopped

# Check logs:
docker logs container-name
docker logs --tail 100 -f container-name

# Inspect container:
docker inspect container-name
docker stats container-name   # Resource usage

# Debug container:
docker exec -it container-name sh
docker exec -it container-name bash

# Check Docker daemon:
systemctl status docker
journalctl -u docker --no-pager -n 50

# Clean up resources:
docker system df              # Disk usage
docker system prune -a        # Remove unused data
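
For containers that die unexpectedly, the exit code and OOM flag usually tell the story:

# Exit code 137 typically means SIGKILL (often the OOM killer); OOMKilled confirms it:
docker inspect --format '{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' container-name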

3. Database Performance Issues

# MySQL/MariaDB:
mysql -e "SHOW PROCESSLIST;"
mysql -e "SHOW ENGINE INNODB STATUS\G"
mysql -e "SHOW VARIABLES LIKE '%max_connections%';"
mysql -e "SHOW STATUS LIKE 'Threads_connected';"

# PostgreSQL:
psql -c "SELECT * FROM pg_stat_activity;"
psql -c "SELECT * FROM pg_stat_user_tables;"
psql -c "SELECT pid, query FROM pg_stat_activity WHERE state = 'active';"

# Check slow queries:
# MySQL:      SHOW VARIABLES LIKE 'slow_query_log';
# PostgreSQL: log_min_duration_statement = 1000

# Check locks:
mysql -e "SHOW OPEN TABLES WHERE In_use > 0;"
psql -c "SELECT * FROM pg_locks;"
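
Once a slow query is identified, examine its execution plan before adding indexes; the query below is a placeholder:

# Show the query plan (MySQL) or the plan plus actual timings (PostgreSQL):
mysql -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42;"
psql  -c "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;"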

Hands-on Practice Tasks

Task 1: Complete System Monitoring Script

#!/bin/bash
# system_monitor.sh - Comprehensive system monitoring

# Configuration
LOG_FILE="/var/log/system_monitor.log"
ALERT_EMAIL="admin@example.com"
ALERT_THRESHOLD_CPU=80
ALERT_THRESHOLD_MEMORY=90
ALERT_THRESHOLD_DISK=85

# Get system metrics
CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
MEMORY_USAGE=$(free | awk '/Mem/{printf("%.2f", $3/$2*100)}')
DISK_USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
LOAD_AVERAGE=$(uptime | awk -F'load average:' '{print $2}' | xargs)
UPTIME=$(uptime -p)
TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")

# Log metrics
echo "[$TIMESTAMP] CPU: ${CPU_USAGE}% | Memory: ${MEMORY_USAGE}% | Disk: ${DISK_USAGE}% | Load: ${LOAD_AVERAGE} | Uptime: ${UPTIME}" >> "$LOG_FILE"

# Check thresholds and send alerts
send_alert() {
    local subject=$1
    local message=$2
    echo "[$TIMESTAMP] ALERT: $subject - $message" >> "$LOG_FILE"
    echo "$message" | mail -s "$subject" "$ALERT_EMAIL"
}

# CPU alert
if (( $(echo "$CPU_USAGE > $ALERT_THRESHOLD_CPU" | bc -l) )); then
    send_alert "High CPU Usage" "CPU usage is ${CPU_USAGE}% (threshold: ${ALERT_THRESHOLD_CPU}%)"
fi

# Memory alert
if (( $(echo "$MEMORY_USAGE > $ALERT_THRESHOLD_MEMORY" | bc -l) )); then
    send_alert "High Memory Usage" "Memory usage is ${MEMORY_USAGE}% (threshold: ${ALERT_THRESHOLD_MEMORY}%)"
fi

# Disk alert
if [ "$DISK_USAGE" -gt "$ALERT_THRESHOLD_DISK" ]; then
    send_alert "High Disk Usage" "Disk usage is ${DISK_USAGE}% (threshold: ${ALERT_THRESHOLD_DISK}%)"
fi

# Check running processes
echo -e "\nTop 5 CPU processes:" >> "$LOG_FILE"
ps aux --sort=-%cpu | head -6 >> "$LOG_FILE"
echo -e "\nTop 5 Memory processes:" >> "$LOG_FILE"
ps aux --sort=-%mem | head -6 >> "$LOG_FILE"

# Check disk space by partition
echo -e "\nDisk usage by partition:" >> "$LOG_FILE"
df -h >> "$LOG_FILE"

# Check network connections
echo -e "\nNetwork connections (ESTABLISHED):" >> "$LOG_FILE"
netstat -an | grep ESTABLISHED | wc -l >> "$LOG_FILE"

# Rotate log if too large (10 MB)
LOG_SIZE=$(wc -c < "$LOG_FILE")
if [ "$LOG_SIZE" -gt 10485760 ]; then
    mv "$LOG_FILE" "$LOG_FILE.old"
    touch "$LOG_FILE"
fi
# Add to crontab to run every 5 minutes:
crontab -e
# Add this line:
*/5 * * * * /path/to/system_monitor.sh
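
As an alternative to the manual rotation at the end of the script, logrotate can manage the file; a minimal sketch (the config path follows the standard drop-in convention):

cat > /etc/logrotate.d/system_monitor <<'EOF'
/var/log/system_monitor.log {
    size 10M
    rotate 4
    compress
    missingok
}
EOF
logrotate -d /etc/logrotate.d/system_monitor   # Dry run to verify the config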

Task 2: Dockerize a Multi-Service Application

# Directory structure:
mkdir -p myapp/{app,nginx,mysql}
cd myapp

# 1. Create the Python app (app/app.py):
from datetime import datetime
import os

from flask import Flask, jsonify
import mysql.connector
import redis

app = Flask(__name__)

# Database configuration
db_config = {
    'host': os.getenv('DB_HOST', 'db'),
    'user': os.getenv('DB_USER', 'root'),
    'password': os.getenv('DB_PASSWORD', 'password'),
    'database': os.getenv('DB_NAME', 'mydb')
}

# Redis configuration
redis_client = redis.Redis(
    host=os.getenv('REDIS_HOST', 'redis'),
    port=int(os.getenv('REDIS_PORT', 6379)),
    decode_responses=True
)

@app.route('/')
def home():
    return jsonify({
        'status': 'ok',
        'service': 'flask-app',
        'timestamp': datetime.now().isoformat()
    })

@app.route('/health')
def health():
    try:
        # Test database connection
        conn = mysql.connector.connect(**db_config)
        conn.close()
        # Test Redis connection
        redis_client.ping()
        return jsonify({'status': 'healthy'}), 200
    except Exception as e:
        return jsonify({'status': 'unhealthy', 'error': str(e)}), 500

@app.route('/cache/<key>/<value>')
def cache(key, value):
    redis_client.set(key, value)
    return jsonify({'key': key, 'value': value})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)

# 2. Create requirements.txt:
Flask==2.3.2
mysql-connector-python==8.0.33
redis==4.5.5

# 3. Create Dockerfile (app/Dockerfile):
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]

# 4. Create nginx configuration (nginx/nginx.conf):
events {
    worker_connections 1024;
}
http {
    upstream flask_app {
        server app:5000;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://flask_app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
        location /health {
            proxy_pass http://flask_app/health;
        }
    }
}

# 5. Create docker-compose.yml:
version: '3.8'
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: mydb
    volumes:
      - mysql_data:/var/lib/mysql
    ports:
      - "3306:3306"
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  app:
    build: ./app
    environment:
      DB_HOST: db
      DB_USER: root
      DB_PASSWORD: password
      DB_NAME: mydb
      REDIS_HOST: redis
    depends_on:
      - db
      - redis
    ports:
      - "5000:5000"
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
    depends_on:
      - app
volumes:
  mysql_data:

# 6. Build and run:
docker-compose up --build
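
Once the stack is up, smoke-test it (endpoints match the app and compose file above; the /cache route accepts any key and value):

curl -s http://localhost/health                  # Through nginx
curl -s http://localhost:5000/health             # Directly against Flask
curl -s http://localhost:5000/cache/color/blue   # Store a value in Redis
docker-compose logs -f app                       # Watch the app while testing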

Task 3: Kubernetes Deployment with Auto-scaling

# 1. Create namespace:
kubectl create namespace myapp

# 2. Create deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: myapp
  labels:
    app: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
  namespace: myapp
spec:
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
  namespace: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

# 3. Create configmap.yaml (for configuration):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: myapp
data:
  APP_ENV: "production"
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"

# 4. Create secret.yaml (for sensitive data):
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: myapp
type: Opaque
data:
  database-password: cGFzc3dvcmQxMjM=   # base64 encoded
  api-key: YXBpLWtleS1zZWNyZXQ=

# 5. Create ingress.yaml (for routing):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 80

# 6. Apply all configurations:
kubectl apply -f deployment.yaml
kubectl apply -f configmap.yaml
kubectl apply -f secret.yaml
kubectl apply -f ingress.yaml

# 7. Monitor the deployment:
kubectl get all -n myapp
kubectl describe hpa webapp-hpa -n myapp
kubectl logs deployment/webapp -n myapp
kubectl top pods -n myapp
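
To watch the HPA actually scale, generate sustained load against the service; a minimal sketch (the busybox load generator is a common convention, not a requirement):

# Run a simple request loop inside the cluster:
kubectl run load-gen -n myapp --image=busybox --restart=Never -- \
  /bin/sh -c 'while true; do wget -q -O- http://webapp-service; done'

# Watch replica counts react (Ctrl+C to stop):
kubectl get hpa webapp-hpa -n myapp --watch

# Clean up the load generator:
kubectl delete pod load-gen -n myapp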

Essential Linux Commands Cheatsheet

# System Information
$ uname -a                 # Kernel version
$ hostname                 # System hostname
$ uptime                   # System uptime
$ cat /etc/os-release      # OS version
$ lscpu                    # CPU information
$ lsblk                    # Block devices
$ lspci                    # PCI devices
$ lsusb                    # USB devices

# Process Management
$ ps aux                   # All processes
$ top                      # Interactive process viewer
$ htop                     # Better top (install first)
$ kill -9 PID              # Force kill process
$ killall process_name     # Kill all processes by name
$ pkill pattern            # Kill by pattern
$ nice -n 10 command       # Run with low priority
$ renice 15 PID            # Change priority

# Networking
$ ip addr show             # Network interfaces
$ ip route                 # Routing table
$ ss -tulpn                # Open ports (modern)
$ netstat -tulpn           # Open ports (traditional)
$ traceroute host          # Network path
$ mtr host                 # Better traceroute
$ dig domain.com           # DNS lookup
$ nslookup domain.com      # DNS lookup
$ curl -I url              # HTTP headers
$ wget url                 # Download file
$ scp file user@host:/path # Secure copy
$ rsync -avz source dest   # Synchronize files

# Disk Operations
$ df -h                    # Disk space
$ du -sh *                 # Directory sizes
$ fdisk -l                 # Partition table
$ mount                    # Mounted filesystems
$ umount /path             # Unmount
$ fsck /dev/sda1           # Filesystem check
$ badblocks /dev/sda       # Check for bad blocks
$ smartctl -a /dev/sda     # SMART data

# File Operations
$ find / -name "*.log"     # Find files
$ grep -r "text" /dir      # Search text
$ awk '{print $1}' file    # Process columns
$ sed 's/old/new/g' file   # Replace text
$ sort file                # Sort lines
$ uniq file                # Remove duplicates
$ cut -d: -f1 file         # Extract columns
$ tar -czf archive.tar.gz dir   # Create tar
$ tar -xzf archive.tar.gz       # Extract tar
$ zip -r archive.zip dir        # Create zip
$ unzip archive.zip             # Extract zip

# User Management
$ whoami                   # Current user
$ who                      # Logged-in users
$ w                        # Who is logged in and what they're doing
$ last                     # Last logins
$ passwd username          # Change password
$ useradd username         # Add user
$ usermod -aG group user   # Add user to group
$ userdel username         # Delete user
$ groupadd groupname       # Add group
$ groups username          # User's groups

# Package Management
$ apt update               # Ubuntu/Debian update
$ apt upgrade              # Ubuntu/Debian upgrade
$ apt install package      # Ubuntu/Debian install
$ yum update               # RHEL/CentOS update
$ yum install package      # RHEL/CentOS install
$ dnf update               # Fedora update
$ dnf install package      # Fedora install
$ snap install package     # Snap packages
$ pip install package      # Python packages
$ npm install package      # Node.js packages

# Service Management (systemd)
$ systemctl start service
$ systemctl stop service
$ systemctl restart service
$ systemctl reload service
$ systemctl status service
$ systemctl enable service
$ systemctl disable service
$ systemctl daemon-reload
$ journalctl -u service    # Service logs
$ journalctl -f            # Follow logs
$ journalctl --since "1 hour ago"
$ journalctl --boot        # Current boot logs
Practice Exercise: Set up a Linux VM (VirtualBox) or use Docker containers to practice these commands and scenarios. Break things intentionally and learn how to fix them!
