Saturday, January 31, 2026

Git Basics Commands: Essential Commands

Git Basics Commands: Essential Commands Every Developer Must Know

🎯 Learning Strategy: This guide covers the 20% of Git commands you'll use 80% of the time. Master these fundamentals before moving to advanced topics.

Repository Management Commands

Repository Basics

A Git repository (or repo) is a collection of files and their complete history. These commands help you create, clone, and manage repositories.

git init

Initializes a new Git repository in the current directory.

📌 Use when: Starting a new project from scratch

What happens: Creates a hidden .git directory containing all repository metadata.

Basic Usage
git init
git init -b main
Initialize with specific branch name (sets default branch to main)
git init --bare
Initialize in bare mode (for central repositories/servers)
git init --quiet
Initialize without verbose output
git clone

Creates a local copy of an existing remote repository.

📌 Use when: Working with existing projects from GitHub, GitLab, etc.

What happens: Downloads entire repository including all history and branches.

Basic Usage
git clone https://github.com/user/repository.git
git clone repo-url project-name
Clone repository into a specific directory name
git clone -b develop repo-url
Check out the develop branch after cloning instead of the default branch (add --single-branch to fetch only that branch)
git clone --depth 1 repo-url
Shallow clone (only recent history, saves disk space)
git clone --recursive repo-url
Clone repository with all submodules
🎮 Practice Exercise 1:
1. Create a new directory: mkdir practice-git
2. Navigate into it: cd practice-git
3. Initialize Git: git init
4. Verify with: ls -la (should see .git folder)
💡 Pro Tip: Use git init when starting new projects from scratch. Use git clone when you want to contribute to or use existing projects. Most beginners start by cloning repositories to learn from existing code.

git status: Your Git Dashboard

Understanding Repository State

git status is your most frequently used Git command. It shows the current state of your working directory and staging area.

Basic Usage
git status

Common git status Output Scenarios

Clean Working Directory
On branch main
Your branch is up to date with 'origin/main'.

nothing to commit, working tree clean
Untracked Files Present
On branch main
Untracked files:
  (use "git add <file>..." to include in what will be committed)
        newfile.txt
        config.yml

nothing added to commit but untracked files present
Modified Files Not Staged
On branch main
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   README.md
        modified:   src/index.js

no changes added to commit
Files Ready to Commit
On branch main
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        new file:   script.js
        modified:   index.html
        deleted:    oldfile.txt
git status -s
Short status format (compact output with status codes)
git status -sb
Short status with branch information
git status --ignored
Show ignored files in output
git status -uno
Ignore untracked files in output (cleaner view)
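The short formats above use two status columns: the left column is the staging area, the right column is the working directory. A rough sketch of what the output might look like in a repository with staged, modified, and untracked files (file names are made up):
git status -s
# M  src/index.js    -> modified and staged
#  M README.md       -> modified, not yet staged
# A  script.js       -> new file, staged
# ?? notes.txt       -> untracked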
✅ Best Practice: Run git status before and after every Git command. It helps you understand what's happening and prevents mistakes. Make it a habit - it's the Git equivalent of looking both ways before crossing.

Core Workflow Commands: Add & Commit

The Git Workflow Cycle

Files move through three states: Working Directory → Staging Area → Repository. git add moves files from working directory to staging area. git commit moves files from staging area to repository.

git add: Staging Changes

Moves changes from working directory to staging area.

Basic Usage
git add filename.txt
git add .
Stage all changes in current directory and subdirectories
git add -u
Stage all tracked files (modified/deleted only)
git add -A
Stage all files (new + modified + deleted)
git add *.js
Stage all JavaScript files using wildcard
git add -p
Interactive staging (review each change)
git add --no-ignore-removal
Stage removed files as well
git commit: Saving Snapshots

Creates permanent snapshot of staged changes.

Basic Usage
git commit -m "Add login functionality"
git commit -am "Quick update"
Stage all tracked files and commit (skip git add)
git commit --amend
Modify the most recent commit (change message or add files)
git commit --no-verify
Skip pre-commit and commit-msg hooks
git commit --allow-empty
Create empty commit (useful for CI/CD triggers)
git commit -v
Show diff in commit message editor
git commit --dry-run
Show what would be committed without actually committing
⚠️ Important Note about git commit -am:
git commit -am only stages and commits tracked files. It will NOT stage new (untracked) files. Use git add . first for new files. Also, the -am flag only works when you already have tracked files with changes.
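For example, a short session illustrating the difference (file names are hypothetical; app.js is already tracked, newfile.txt is not):
echo "console.log('hi')" > newfile.txt   # brand new, untracked file
echo "// fix" >> app.js                  # change to an already tracked file
git commit -am "Update app"              # commits app.js only - newfile.txt is left out
git add newfile.txt                      # untracked files must be staged explicitly
git commit -m "Add newfile.txt"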

Commit Message Guidelines

📝 Effective Commit Message Format:
1. Subject line (50 chars max): Imperative mood
2. Blank line: Separates subject from body
3. Body (72 chars per line): Explain WHY, not what
4. Footer: Reference issues, breaking changes
# Example of good commit message structure
Add user profile picture upload

- Implement file validation for jpg, png formats
- Add progress bar for upload feedback
- Create responsive image cropper component
- Handle file size limit (max 5MB)

Fixes #123
Related to #456
🎮 Practice Exercise 2:
1. Create a file: echo "# My Project" > README.md
2. Check status: git status
3. Stage file: git add README.md
4. Check status again: git status
5. Commit: git commit -m "Add project README"
6. View history: git log --oneline

Viewing History & Differences

Inspecting Changes

git log shows commit history, while git diff and git show help you examine specific changes in detail.

git log: Viewing History

Displays commit history with various formatting options.

Basic Usage
git log
git log --oneline
One line per commit (compact view)
git log --graph --all --oneline
Graph visualization with all branches
git log -10
Show last 10 commits only
git log -p
Show patch (diff) for each commit
git log --stat
Show statistics (files changed, insertions/deletions)
git log --since="2024-01-01"
Show commits since specific date
git log --grep="bug"
Search commits containing "bug" in message
git log --author="John"
Show commits by specific author
git diff: Comparing Changes

Shows differences between files, commits, or branches.

Basic Usage
git diff
git diff --staged
Compare staged changes with last commit
git diff HEAD~3
Compare with 3 commits ago
git diff main..feature
Compare two branches
git diff --name-only
Show only file names, not content
git diff --word-diff
Word-level differences (easier to read)
git diff --cached
Same as --staged (--cached is the older name for the same option)
git diff HEAD
Compare working directory with last commit
git diff abc123 def456 -- README.md
Compare specific file between two commits
git show: Examining Commits

Displays detailed information about specific commits.

Basic Usage
git show abc123def
git show HEAD
Show latest commit details
git show --stat
Show statistics only (files changed count)
git show --name-only
Show only file names in commit
git show abc123:README.md
Show specific file version from commit
git show --pretty=fuller
Show detailed commit information
git show HEAD~2:src/
Show directory contents from 2 commits ago
🔍 Debugging Tip: Use git diff to see what you're about to commit. Use git log -p to find when a bug was introduced. Use git show to examine specific suspicious commits.
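As a sketch, a typical investigation of a suspect file might chain these commands together (the file name and hash are placeholders):
git log --oneline -- src/login.js   # commits that touched the suspect file
git log -p -- src/login.js          # the same history with the full diff of each commit
git show abc123                     # inspect the one commit that looks wrong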

.gitignore & .gitkeep Files

Controlling What Git Tracks

.gitignore tells Git which files to ignore (not track). .gitkeep is a convention to track empty directories.

Creating .gitignore

Create and manage files to exclude from version control.

Basic Usage
touch .gitignore
echo "*.log" >> .gitignore
Add pattern to ignore all log files
git check-ignore -v filename
Check why a file is being ignored
git ls-files --others --ignored --exclude-standard
List ignored untracked files
git rm --cached filename
Remove file from Git but keep locally
.gitkeep Usage

Tracking empty directories in Git (convention, not command).

Creating Empty Directories
mkdir -p logs images uploads
touch logs/.gitkeep images/.gitkeep uploads/.gitkeep
git add logs/.gitkeep
Stage .gitkeep file to track directory
echo "*" > dir/.gitignore
Alternative: create .gitignore to keep directory
echo "!.gitignore" >> dir/.gitignore
Add exception to keep .gitignore file itself
find . -name ".gitkeep" -type f
Find all .gitkeep files in project

Common .gitignore Patterns

# ==========================================
# COMMON .gitignore PATTERNS
# ==========================================

# Operating system files
.DS_Store
Thumbs.db
desktop.ini
*.swp
*.swo
*~

# IDE and editor files
.vscode/
.idea/
*.swp
*.swo
*.sublime-*

# Dependency directories
node_modules/
vendor/
__pycache__/
*.pyc
*.pyo
.pytest_cache/

# Build outputs
dist/
build/
*.exe
*.dll
*.so
*.dylib

# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
.secrets

# Logs and databases
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
*.sqlite
*.db

# Temporary files
*.tmp
*.temp
temp/
tmp/

# Coverage reports
coverage/
*.lcov
htmlcov/

# Package manager files
package-lock.json
yarn.lock
pnpm-lock.yaml
⚠️ Important: .gitignore only affects untracked files. If you've already committed a file, adding it to .gitignore won't remove it from Git. Use git rm --cached filename to stop tracking previously committed files.
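A minimal sequence for a file that was committed before it was added to .gitignore (config.log is a placeholder name):
echo "config.log" >> .gitignore          # ignore it from now on
git rm --cached config.log               # stop tracking it, but keep the local copy
git commit -m "Stop tracking config.log"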

.gitignore Pattern Syntax

Pattern | Meaning | Example
*.log | All files ending with .log | app.log, error.log
temp/ | Entire directory and its contents | temp/file.txt, temp/sub/file.txt
!important.log | Exception (don't ignore this file) | important.log (kept, others ignored)
/debug.log | Only at root level, not subdirectories | debug.log (root), not src/debug.log
debug/*.log | All .log files in debug directory | debug/app.log, debug/error.log
debug/**/*.log | All .log files in debug and subdirectories | debug/file.log, debug/sub/file.log
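To see these rules in action, you can write a small .gitignore in a scratch repository and ask Git why a path is ignored with git check-ignore (the paths below are illustrative, and the printf overwrites any existing .gitignore):
printf "/debug.log\ntemp/\n" > .gitignore
git check-ignore -v debug.log        # matched: /debug.log applies at the repository root
git check-ignore -v temp/cache.txt   # matched by the temp/ directory pattern
git check-ignore -v src/debug.log    # prints nothing: /debug.log does not match in subdirectories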
🎮 Practice Exercise 3:
1. Create .gitignore: touch .gitignore
2. Add patterns: printf "*.log\nnode_modules/\n.env\n" > .gitignore
3. Create empty directory: mkdir data
4. Add .gitkeep: touch data/.gitkeep
5. Stage and commit: git add . && git commit -m "Add .gitignore and directories"

Git Basics Cheat Sheet

Quick Reference Guide

Copy and keep this cheat sheet handy while you're learning Git.

Repository Setup
git init
Initialize new repository
git clone [url]
Clone existing repository
git remote add origin [url]
Add remote repository
Status & Info
git status
Check repository status
git status -s
Short status format
git log --oneline
Compact commit history
Basic Workflow
git add [file]
Stage specific file
git add .
Stage all changes
git commit -m "[msg]"
Commit staged changes
git commit -am "[msg]"
Stage tracked files and commit
Viewing Changes
git diff
Show unstaged changes
git diff --staged
Show staged changes
git show
Show latest commit
git log -p
History with changes

Common Workflow Patterns

Daily Development Workflow
git status                    # Check current state
git add .                     # Stage all changes
git commit -m "Description"   # Commit changes
git push                      # Push to remote
Careful Review Workflow
git status                    # Check what changed
git diff                      # Review unstaged changes
git add -p                    # Interactively stage changes
git diff --staged             # Review staged changes
git commit                    # Commit with editor
Debugging & Investigation
git log --oneline -20         # Recent commits
git show [commit-hash]        # Examine specific commit
git diff [hash1] [hash2]      # Compare two commits
git blame filename            # Who changed what line
🚀 Next Steps: Once you've mastered these basic commands, learn: 1. Branching (git branch, git checkout, git merge)
2. Remote operations (git push, git pull, git fetch)
3. Undoing changes (git reset, git revert, git restore)
4. Stashing (git stash) for temporary saves
💪 Keep Practicing: The key to Git mastery is regular practice. Use these commands daily. Make mistakes in a practice repository. Try breaking things and fixing them. Every expert was once a beginner!

Monday, January 26, 2026

Git Fundamentals & Version Control Master Guide

Git Fundamentals: Complete Beginner's Guide to Version Control

📚 Learning Tip: Create a practice folder on your computer and follow along with the commands in this guide. Hands-on practice is the best way to learn Git!

What is Git? Understanding Version Control

Definition

Git is a distributed version control system that helps developers track changes in their code over time. It was created by Linus Torvalds in 2005 to manage the Linux kernel development.

Key Characteristics of Git

Git is different from a plain file system because it remembers every change you make to your code. Instead of saving copies with names like "project_final_v2.doc", Git keeps a single, complete history of the project that you can browse and restore at any point.

Feature | Description | Benefit
Distributed | Every developer has a complete repository copy | Work offline, no single point of failure
Fast | Most operations are performed locally | No network delays for daily work
Branching | Easy creation and merging of branches | Work on features independently
Integrity | Uses SHA-1 hashes for data verification | Data cannot be corrupted unnoticed
# Check if Git is installed on your system
git --version

This command displays the installed Git version on your system. It confirms that Git is properly installed and shows which version you're running. The output will look like "git version 2.34.1" or similar.

💡 Did You Know? Git takes its name from British English slang for an unpleasant person. Linus Torvalds joked that he is "an egotistical bastard" who names all his projects after himself: first Linux, now Git.

Why Version Control is Essential for Developers

The Problem Without Version Control

Before version control systems, developers faced several challenges: Files were saved with confusing names, collaboration was difficult, and recovering previous versions was nearly impossible.

❌ Common Problems Without Version Control:
1. Files named: final.doc, final_v2.doc, final_really_final.doc
2. No record of who changed what and when
3. Team members overwrite each other's work
4. Can't revert to working version after breaking changes
5. No backup if computer crashes

Benefits of Using Git

Git solves all these problems by providing a systematic way to track changes. It's like having a time machine for your code – you can go back to any point in your project's history.

Benefit | How Git Helps | Real-World Example
History Tracking | Every change recorded with details | Find when a bug was introduced
Collaboration | Multiple people can work simultaneously | Team projects, open source contributions
Experimentation | Branches for trying new features | Test new ideas without breaking main code
Backup | Every clone is a complete backup | Recover project if laptop is lost
# View complete commit history
git log --oneline

This command shows a condensed version of your project's history. Each line represents one commit with its unique ID and message. It helps you understand what changes were made and when.

✅ Pro Tip: Even if you work alone, use Git! It serves as both version control and backup system for your projects. You'll thank yourself when you accidentally delete important code.

Centralized vs Distributed Version Control Systems

What is Centralized VCS?

Centralized Version Control Systems (CVCS) like SVN or CVS have one central server that stores all versions of files. Developers check out files from this server, make changes, and check them back in.

Centralized VCS Workflow:
1. Connect to central server
2. Check out latest files
3. Make changes locally
4. Check changes back in to the server
5. Other developers update from server

What is Distributed VCS?

Distributed Version Control Systems (DVCS) like Git or Mercurial give every developer a complete copy of the repository. Each copy has full history, and changes can be shared between repositories.

Distributed VCS Workflow:
1. Clone complete repository locally
2. Work offline, make commits locally
3. Share changes with others when ready
4. Pull changes from others when needed

Comparison Table

Aspect | Centralized VCS (SVN) | Distributed VCS (Git)
Repository | Single central server | Every developer has full copy
Network | Required for most operations | Only needed for sharing changes
Speed | Slower (depends on network) | Faster (local operations)
Backup | Single point of failure | Every clone is a backup
Branching | Complex and slow | Simple and fast
🤔 Why Choose Git?
Git's distributed nature makes it ideal for modern development: remote work, open source collaboration, and working with unreliable internet connections. Most companies and open source projects now use Git.

Understanding Git Architecture: The Three States

The Three Main Areas in Git

Git has a unique architecture with three main areas where files can reside. Understanding these areas is crucial to using Git effectively.

🎯 The Three States:
1. Working Directory: Your actual project files
2. Staging Area (Index): Prepared changes ready to commit
3. Git Repository: Committed changes stored permanently

1. Working Directory

The working directory is your project folder where you edit files. These are the files you see in your file explorer or IDE. Changes here are not yet tracked by Git.

# Check status of working directory
git status

This command shows which files are modified, which are staged, and which are not tracked by Git. It's your main tool for understanding what's happening in your working directory.

2. Staging Area (Index)

The staging area is like a preparation area for commits. You add changes from your working directory to the staging area when you're ready to save them as a commit.

# Add file to staging area
git add filename.txt

This command takes changes from your working directory and adds them to the staging area. Files in the staging area are ready to be committed.

3. Git Repository

The Git repository (in the .git directory) stores all committed changes. Once you commit changes from the staging area, they become permanent parts of your project's history.

# Commit staged changes to repository
git commit -m "Add new feature"

This command takes all changes in the staging area and creates a permanent snapshot in the repository. Each commit has a unique ID, author information, and timestamp.

.git Directory Structure

Every Git repository has a hidden .git directory that contains all the metadata and object database. Understanding its structure helps you understand how Git works internally.

.git/
├── HEAD
├── config
├── index
├── objects/
│   ├── 00/
│   ├── 01/
│   └── ...
├── refs/
│   ├── heads/
│   └── tags/
└── hooks/
File/Folder Purpose
HEAD Points to current branch reference
config Repository-specific configuration
index Staging area binary file
objects/ All Git objects (commits, trees, blobs)
refs/heads/ Branch pointers
refs/tags/ Tag pointers
hooks/ Scripts that run on Git events
🔍 Understanding Git Internally:
Git stores data as objects: blobs (file contents), trees (directory structures), and commits (snapshots). Each object has a SHA-1 hash that uniquely identifies it. This design makes Git extremely efficient at storing project history.
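You can look at these objects yourself with Git's plumbing commands. A rough sketch in any repository that already has at least one commit (hashes will differ on your machine):
git cat-file -t HEAD            # prints "commit" - the type of the object HEAD points to
git cat-file -p HEAD            # prints the commit: tree hash, parent, author, message
git cat-file -p HEAD^{tree}     # prints the tree: one line per blob (file) or sub-tree (directory)
git rev-parse HEAD              # prints the full SHA-1 hash of the current commit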

Installing and Configuring Git

Installation on Different Operating Systems

Git is available for all major operating systems. The installation process varies slightly depending on your OS.

For Windows Users

# Download Git for Windows from official website
# https://git-scm.com/download/win
# Run the installer with default settings

The Git for Windows installer includes Git Bash, which provides a Unix-like command line environment. It also integrates with Windows Explorer for easy access.

For macOS Users

# Install using Homebrew (recommended)
brew install git

# Or download from official website
# https://git-scm.com/download/mac

Homebrew is a package manager for macOS that makes installing and updating Git easy. If you don't have Homebrew, you can download the installer from the official website.

For Linux Users

# Ubuntu/Debian based systems
sudo apt update
sudo apt install git

# RHEL/CentOS based systems
sudo yum install git

# Fedora
sudo dnf install git

Linux distributions include Git in their package repositories. Use your distribution's package manager to install it. The commands above work for the most common Linux distributions.

Essential Git Configuration

After installing Git, you need to configure it with your identity. This information is included with every commit you make.

# Set your name (appears in commits)
git config --global user.name "Your Name"

# Set your email (appears in commits)
git config --global user.email "your.email@example.com"

These commands set your name and email globally (for all repositories on your computer). Git uses this information to identify who made each commit. This is required before you can make any commits.

# Set default text editor
git config --global core.editor "code --wait"

# Enable color output
git config --global color.ui auto

# Set default branch name to main
git config --global init.defaultBranch main

These additional configurations improve your Git experience. Setting the editor lets you use VS Code (or your preferred editor) for commit messages. Color output makes Git commands easier to read.

# Create useful aliases
git config --global alias.co checkout
git config --global alias.br branch
git config --global alias.ci commit
git config --global alias.st status

Aliases create shortcuts for common Git commands. After setting these aliases, you can type "git st" instead of "git status", saving you time and keystrokes.
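Aliases can also wrap longer commands in quotes; for example (the alias name lg is just a suggestion):
git config --global alias.lg "log --oneline --graph --all"
git lg   # now runs the full log command defined above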

# View all configurations
git config --list

This command displays all your Git configuration settings. It's useful to verify that your settings are correct, especially after initial setup.

⚙️ Configuration Levels:
Git has three configuration levels:
1. System: /etc/gitconfig (affects all users)
2. Global: ~/.gitconfig (affects all your repositories)
3. Local: .git/config (affects only current repository)
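To see where a value comes from, you can ask Git to print the source file of every setting; a quick sketch (the email address is illustrative):
git config --list --show-origin                   # every setting with the file it was read from
git config --local user.email "work@example.com"  # override your global email for this repo only
git config user.email                             # prints the value that wins (local beats global)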

Git Workflow: From Beginner to Pro

The Basic Git Workflow Cycle

Git follows a simple but powerful workflow that you'll use repeatedly. Understanding this cycle is key to using Git effectively.

🔄 Standard Git Workflow:
1. Modify files in working directory
2. Stage changes (git add)
3. Commit changes (git commit)
4. Repeat as needed
5. Share with others (git push)

Step-by-Step Workflow Example

Let's walk through a complete example of using Git for a simple project. Follow along in your own practice folder.

Step 1: Initialize a Repository

# Create a new directory for your project
mkdir my-first-project
cd my-first-project

# Initialize Git repository
git init

The "git init" command creates a new Git repository in the current directory. It creates the .git folder that will store all version control information. You only need to run this once per project.

Step 2: Create and Track Files

# Create a new file
echo "# My First Git Project" > README.md

# Check repository status
git status

After creating a file, use "git status" to see its current state. The file will appear as "untracked" because Git doesn't know about it yet. Untracked files are not included in version control.

Step 3: Stage Changes

# Add file to staging area
git add README.md

# Check status again
git status

The "git add" command moves files from the working directory to the staging area. Files in the staging area are ready to be committed. You can add multiple files before committing.

Step 4: Commit Changes

# Commit staged changes with a message
git commit -m "Add README file"

The "git commit" command creates a permanent snapshot of all staged changes. The -m flag lets you add a commit message directly. Good commit messages explain WHY the change was made.

Step 5: View History

# View commit history
git log --oneline

This command shows your commit history in a compact format. Each commit has a unique hash (like a1b2c3d) and your commit message. You can see who made each commit and when.

Common Workflow Patterns

Different teams use different Git workflows depending on their needs. Here are the most common patterns used in industry.

Workflow | Description | Best For
Centralized | Everyone works on main branch | Small teams, simple projects
Feature Branch | Each feature in separate branch | Most teams, medium projects
Gitflow | Strict branching with release cycles | Large teams, enterprise projects
Forking | Contributors fork main repository | Open source projects
🚀 Recommended for Beginners:
Start with the Feature Branch workflow. Create a new branch for each feature or bug fix. This keeps your main branch stable while allowing experimentation. It's simple but powerful enough for most projects.
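A typical feature-branch cycle might look like this (branch and remote names are illustrative):
git checkout -b feature/login    # start a branch for the new feature
# ... edit files ...
git add .
git commit -m "Add login form"
git push origin feature/login    # publish the branch for review
git checkout main                # back to main once the feature is merged
git pull origin main             # pick up the merged changes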

Essential Git Commands Every Developer Should Know

Getting Started Commands

# Initialize new repository
git init

Creates a new Git repository in the current directory. This is the first command you run when starting a new project. It creates the .git folder that stores all version control data.

# Clone existing repository
git clone https://github.com/user/repository.git

Downloads an existing repository from a remote server (like GitHub). This creates a local copy with full history. Use this when you want to work on an existing project.

# Check repository status
git status

Shows the current state of your working directory and staging area. It tells you which files are modified, staged, or untracked. Run this frequently to understand what's happening.

Basic Workflow Commands

# Stage specific file
git add filename.txt

Adds a specific file to the staging area. Only staged files will be included in the next commit. You can stage multiple files before committing.

# Stage all changes
git add .

Stages all modified and new files in the current directory. The dot means "current directory". Use with caution to avoid staging unwanted files.

# Commit staged changes
git commit -m "Descriptive message"

Creates a commit with all staged changes. The -m flag lets you add a commit message. Write clear messages that explain WHY you made changes.

Viewing History

# View commit history
git log

Shows detailed commit history with author, date, and commit message. Press 'q' to exit the log view. Use this to understand project history.

# Compact history view
git log --oneline

Shows commit history in one line per commit. Includes short commit hash and message. Useful for quick overview of project history.

# Visual branch history
git log --graph --all --oneline

Shows commit history with branch visualization. The --graph flag adds ASCII art showing branches. --all shows all branches, not just current one.

Branching Commands

# List all branches
git branch

Shows all local branches in your repository. The current branch is marked with an asterisk (*). Use this to see available branches.

# Create new branch
git branch feature-branch

Creates a new branch with the specified name. This doesn't switch to the new branch. The new branch starts from current commit.

# Switch to branch
git checkout branch-name

Switches to the specified branch. Your working directory updates to show files from that branch. Make sure to commit or stash changes before switching.

# Create and switch to new branch
git checkout -b new-feature

Creates a new branch and switches to it immediately. This is the most common way to start working on a new feature. The -b flag means "create branch".

Remote Operations

# Add remote repository
git remote add origin https://github.com/user/repo.git

Connects your local repository to a remote repository (like GitHub). "origin" is the conventional name for the main remote. You only need to do this once per repository.

# Push commits to remote
git push origin main

Uploads your local commits to the remote repository. "origin" is the remote name, "main" is the branch name. This shares your work with others.

# Pull updates from remote
git pull origin main

Downloads changes from remote and merges them into your local branch. This updates your local repository with others' work. Run this regularly to stay up-to-date.

Undoing Changes

# Unstage file (keep changes)
git reset HEAD filename.txt

Removes a file from the staging area but keeps changes in working directory. Useful if you accidentally staged wrong file. Changes remain in your working directory.

# Discard working directory changes
git checkout -- filename.txt

Discards all changes to a file in working directory. This reverts file to last committed state. Use with caution - changes cannot be recovered.

# Create undo commit
git revert commit-hash

Creates a new commit that undoes changes from a specific commit. This is the safest way to undo changes. It doesn't rewrite history, just adds new commit.

📝 Practice Exercise:
1. Create a practice folder
2. Initialize Git repository
3. Create a README.md file
4. Stage and commit it
5. Make changes, stage, commit again
6. View history with git log
7. Create and switch to a new branch
8. Practice makes perfect!

Next Steps in Your Git Journey

Congratulations! You've learned the fundamentals of Git. You now understand what Git is, why it's important, how it works internally, and the basic commands.

What to Learn Next

🚀 Recommended Learning Path:
1. .gitignore files: Tell Git which files to ignore
2. Merge conflicts: How to resolve when changes conflict
3. GitHub/GitLab: Hosting repositories online
4. Pull Requests: Code review workflow
5. Git hooks: Automate tasks with scripts
6. Advanced merging: Rebase, cherry-pick, etc.

Best Practices to Remember

  • Commit often with clear messages
  • Write commit messages in imperative mood ("Add feature" not "Added feature")
  • Keep commits focused on one logical change
  • Pull before you push to avoid conflicts
  • Use branches for new features
  • Never force push to shared branches
💪 Keep Practicing:
The best way to learn Git is to use it daily. Start using Git for all your projects, even small ones. Make mistakes in practice repositories where it doesn't matter. Soon, Git will become second nature!

Remember: Every expert was once a beginner. Don't get discouraged if Git seems confusing at first. Keep practicing, and soon you'll wonder how you ever worked without it.

Monday, January 19, 2026

Linux Interview & DevOps Scenarios

Complete Linux Interview & DevOps Practice Guide

Pro Tip: Practice these commands in a Linux VM or Docker container. Set up a lab environment to experiment safely.

Linux Interview Questions & Answers

1. What is the difference between hard links and symbolic links? Easy

# Create hard link (shares same inode number):
ln original.txt hardlink.txt

# Create symbolic link (different inode, points to path):
ln -s original.txt symlink.txt

# Verify with ls -li:
ls -li

Key Differences:

Hard Link | Symbolic Link
Same inode number | Different inode number
Can't link directories | Can link directories
Works only within same filesystem | Can cross filesystems
If original deleted, link still works | If original deleted, link breaks
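A rough sketch of what ls -li might print after creating both links (inode numbers and dates are made up):
ls -li
# 1234567 -rw-r--r-- 2 user user 12 Jan 19 10:00 original.txt    <- link count is now 2
# 1234567 -rw-r--r-- 2 user user 12 Jan 19 10:00 hardlink.txt    <- same inode as original.txt
# 7654321 lrwxrwxrwx 1 user user 12 Jan 19 10:01 symlink.txt -> original.txt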

2. Explain Linux boot process Medium

  1. BIOS/UEFI: Hardware initialization, runs POST
  2. Bootloader (GRUB): Loads kernel and initramfs
  3. Kernel: Initializes hardware, mounts root filesystem
  4. Init Process: systemd (PID 1) starts services
  5. Runlevel/Target: Multi-user.target (normal boot)
  6. Login: Display manager or terminal login
# Check boot time:
systemd-analyze
systemd-analyze blame   # See service startup times

3. How to find which process is using a specific port? Easy

# Method 1: Using netstat
netstat -tulpn | grep :80

# Method 2: Using ss (modern replacement)
ss -tulpn | grep :80

# Method 3: Using lsof
lsof -i :80

# Method 4: Using fuser
fuser 80/tcp

4. What is swap space and when is it used? Medium

Answer: Swap is disk space used as virtual memory when RAM is full. It prevents OOM killer from terminating processes.

# Check swap usage:
free -h
swapon --show

# Create swap file:
fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Make permanent in /etc/fstab:
echo '/swapfile none swap sw 0 0' >> /etc/fstab

5. Explain process states in Linux Medium

State | Code | Description
Running | R | Currently executing
Sleeping | S | Waiting for event
Uninterruptible Sleep | D | Waiting for I/O (can't be killed)
Stopped | T | Stopped by signal (Ctrl+Z)
Zombie | Z | Terminated but parent hasn't reaped
# View process states:
ps aux                                # Look at STAT column
ps -eo pid,stat,comm | grep -E 'D|Z'  # Find problematic processes

6. What are runlevels in Linux? Medium

Runlevel | Systemd Target | Purpose
0 | poweroff.target | Shutdown
1 | rescue.target | Single user mode
3 | multi-user.target | Multi-user, no GUI
5 | graphical.target | Multi-user with GUI
6 | reboot.target | Reboot
# Check current runlevel:
runlevel
systemctl get-default

# Change runlevel:
init 3                               # Switch to runlevel 3
systemctl isolate multi-user.target

7. Explain Linux file permissions in detail Easy

# r=read(4), w=write(2), x=execute(1)
# Example: chmod 755 = rwxr-xr-x
# Owner: rwx (7), Group: r-x (5), Others: r-x (5)

ls -la file.txt
# Output: -rwxr-xr-x 1 user group 1024 Jan 1 10:00 file.txt

# Change permissions:
chmod 644 file.txt              # rw-r--r--
chmod +x script.sh              # Add execute permission
chmod u=rwx,g=rx,o=r file.txt

# Change ownership:
chown user:group file.txt
chown -R user:group /dir        # Recursive

8. How does SSH key authentication work? Medium

Answer: Public-private key pair. Public key on server, private key on client.

# Generate SSH key pair:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

# Copy public key to server:
ssh-copy-id user@server_ip

# Test connection:
ssh user@server_ip

# SSH config file (~/.ssh/config):
Host myserver
    HostName server_ip
    User username
    IdentityFile ~/.ssh/id_rsa
    Port 22

Practical Scenarios & Solutions

Scenario 1: Server is slow - performance troubleshooting

Symptoms: High load average, slow response, timeouts.

# Step 1: Check load average (1, 5, 15 minutes):
uptime
# If load > CPU cores, system is overloaded

# Step 2: Check CPU usage:
top
htop                 # Better alternative
mpstat -P ALL 1 5    # Per-core statistics

# Step 3: Check memory:
free -h
vmstat 1 5
ps aux --sort=-%mem | head -10

# Step 4: Check disk I/O:
iostat -x 1 5
iotop -o             # Top I/O processes

# Step 5: Check network:
iftop -n
nethogs              # Per-process network

# Step 6: Check for too many processes:
ps aux | wc -l
pstree               # Process tree

Scenario 2: Disk full - emergency cleanup

Symptoms: "No space left on device" errors.

# Step 1: Find which partition is full:
df -h
df -i                # Check inode usage

# Step 2: Find large files/directories:
du -ahx / 2>/dev/null | sort -rh | head -20
ncdu /               # Interactive disk usage analyzer

# Step 3: Check for deleted files still open:
lsof | grep deleted
# Restart process holding deleted files

# Step 4: Clear package cache:
apt clean            # Debian/Ubuntu
yum clean all        # RHEL/CentOS
dnf clean all        # Fedora

# Step 5: Clear old logs:
journalctl --vacuum-size=200M
find /var/log -name "*.log" -mtime +30 -delete

# Step 6: Clear Docker resources:
docker system prune -a
docker volume prune

# Step 7: Clear temporary files:
rm -rf /tmp/*
rm -rf /var/tmp/*

Scenario 3: Service won't start - debugging steps

# Step 1: Check service status:
systemctl status nginx
systemctl --failed                  # All failed services

# Step 2: Check logs:
journalctl -u nginx --no-pager -n 100
journalctl -u nginx --since "1 hour ago"
tail -f /var/log/nginx/error.log

# Step 3: Test configuration:
nginx -t                            # Nginx
apachectl configtest                # Apache
sshd -t                             # SSH

# Step 4: Check dependencies:
systemctl list-dependencies nginx

# Step 5: Check ports in use:
ss -tulpn | grep :80
lsof -i :80

# Step 6: Check SELinux/AppArmor:
getenforce                          # SELinux status
sestatus
aa-status                           # AppArmor status

# Step 7: Check file permissions:
ls -la /var/www/
ls -Z /var/www/                     # SELinux context

Scenario 4: Network connectivity issues

# Step 1: Check basic connectivity:
ping 8.8.8.8              # Test internet
ping gateway_ip           # Test local network
ping google.com           # Test DNS resolution

# Step 2: Check DNS:
nslookup google.com
dig google.com
cat /etc/resolv.conf

# Step 3: Check routing:
ip route
route -n
traceroute google.com
mtr google.com            # Continuous traceroute

# Step 4: Check network configuration:
ip addr show
ifconfig -a
cat /etc/netplan/*.yaml   # Ubuntu 18.04+

# Step 5: Check firewall:
iptables -L -n -v
ufw status verbose        # Ubuntu
firewall-cmd --list-all   # firewalld

# Step 6: Check services:
systemctl status NetworkManager
systemctl status networking
systemctl status systemd-networkd

DevOps Real-World Use Cases

Use Case 1: Complete CI/CD Pipeline

# Jenkinsfile pipeline example:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build') {
            steps { sh 'docker build -t myapp:$BUILD_NUMBER .' }
        }
        stage('Test') {
            steps {
                sh 'docker run myapp:$BUILD_NUMBER npm test'
                sh 'docker run myapp:$BUILD_NUMBER npm run lint'
            }
        }
        stage('Security Scan') {
            steps { sh 'trivy image myapp:$BUILD_NUMBER' }
        }
        stage('Deploy to Staging') {
            steps {
                sh 'docker tag myapp:$BUILD_NUMBER registry/staging:latest'
                sh 'docker push registry/staging:latest'
                sh 'kubectl set image deployment/myapp-staging myapp=registry/staging:latest'
            }
        }
        stage('Deploy to Production') {
            when { branch 'main' }
            steps {
                input message: 'Deploy to production?'
                sh 'docker tag myapp:$BUILD_NUMBER registry/production:latest'
                sh 'docker push registry/production:latest'
                sh 'kubectl set image deployment/myapp-prod myapp=registry/production:latest'
            }
        }
    }
    post {
        success {
            emailext(
                subject: "Build Successful: ${env.JOB_NAME}",
                body: "Build ${env.BUILD_NUMBER} was successful",
                to: 'team@example.com'
            )
        }
        failure {
            emailext(
                subject: "Build Failed: ${env.JOB_NAME}",
                body: "Build ${env.BUILD_NUMBER} failed",
                to: 'team@example.com'
            )
        }
    }
}

Use Case 2: Infrastructure as Code with Terraform

# main.tf - Complete AWS infrastructure provider "aws" { region = "us-east-1" } # VPC resource "aws_vpc" "main" { cidr_block = "10.0.0.0/16" enable_dns_hostnames = true tags = { Name = "main-vpc" } } # Subnets resource "aws_subnet" "public" { count = 2 vpc_id = aws_vpc.main.id cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index) availability_zone = element(["us-east-1a", "us-east-1b"], count.index) tags = { Name = "public-subnet-${count.index}" } } # Security Group resource "aws_security_group" "web" { name = "web-sg" vpc_id = aws_vpc.main.id ingress { from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { from_port = 443 to_port = 443 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } # EC2 Instance resource "aws_instance" "web" { count = 2 ami = "ami-0c55b159cbfafe1f0" instance_type = "t3.micro" subnet_id = aws_subnet.public[count.index].id vpc_security_group_ids = [aws_security_group.web.id] user_data = <<-EOF #!/bin/bash apt-get update apt-get install -y nginx systemctl start nginx systemctl enable nginx EOF tags = { Name = "web-server-${count.index}" } } # Load Balancer resource "aws_lb" "web" { name = "web-lb" internal = false load_balancer_type = "application" security_groups = [aws_security_group.web.id] subnets = aws_subnet.public[*].id } # Outputs output "load_balancer_dns" { value = aws_lb.web.dns_name } output "instance_ips" { value = aws_instance.web[*].public_ip }

Use Case 3: Docker Compose for Development

# docker-compose.yml - Full stack application version: '3.8' services: # Database db: image: postgres:13 environment: POSTGRES_DB: myapp POSTGRES_USER: admin POSTGRES_PASSWORD: secret volumes: - postgres_data:/var/lib/postgresql/data ports: - "5432:5432" networks: - app-network healthcheck: test: ["CMD-SHELL", "pg_isready -U admin"] interval: 10s timeout: 5s retries: 5 # Redis Cache redis: image: redis:6-alpine command: redis-server --requirepass secret volumes: - redis_data:/data ports: - "6379:6379" networks: - app-network # Application app: build: . environment: DATABASE_URL: postgres://admin:secret@db:5432/myapp REDIS_URL: redis://:secret@redis:6379 volumes: - ./app:/app - /app/node_modules ports: - "3000:3000" depends_on: db: condition: service_healthy redis: condition: service_started networks: - app-network restart: unless-stopped # Nginx Reverse Proxy nginx: image: nginx:alpine volumes: - ./nginx.conf:/etc/nginx/nginx.conf - ./ssl:/etc/nginx/ssl ports: - "80:80" - "443:443" depends_on: - app networks: - app-network # Monitoring prometheus: image: prom/prometheus volumes: - ./prometheus.yml:/etc/prometheus/prometheus.yml - prometheus_data:/prometheus ports: - "9090:9090" networks: - app-network grafana: image: grafana/grafana environment: GF_SECURITY_ADMIN_PASSWORD: admin volumes: - grafana_data:/var/lib/grafana ports: - "3001:3000" networks: - app-network networks: app-network: driver: bridge volumes: postgres_data: redis_data: prometheus_data: grafana_data:

Troubleshooting Scenarios

1. Kubernetes Pod Issues

# Check pod status:
kubectl get pods
kubectl describe pod pod-name

# Check logs:
kubectl logs pod-name
kubectl logs pod-name -c container-name   # Multi-container pod
kubectl logs --previous pod-name          # Previous instance

# Debug pod:
kubectl exec -it pod-name -- sh
kubectl exec -it pod-name -c container-name -- sh

# Common issues:
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get pods -o wide                  # See node allocation
kubectl get svc                           # Check services

# Check resource limits:
kubectl describe pod pod-name | grep -A 10 Limits

# Check Persistent Volumes:
kubectl get pv
kubectl get pvc

2. Docker Container Issues

# Check running containers:
docker ps
docker ps -a                              # All containers including stopped

# Check logs:
docker logs container-name
docker logs --tail 100 -f container-name

# Inspect container:
docker inspect container-name
docker stats container-name               # Resource usage

# Debug container:
docker exec -it container-name sh
docker exec -it container-name bash

# Check Docker daemon:
systemctl status docker
journalctl -u docker --no-pager -n 50

# Clean up resources:
docker system df                          # Disk usage
docker system prune -a                    # Remove unused

3. Database Performance Issues

# MySQL/MariaDB:
mysql -e "SHOW PROCESSLIST;"
mysql -e "SHOW ENGINE INNODB STATUS\G"
mysql -e "SHOW VARIABLES LIKE '%max_connections%';"
mysql -e "SHOW STATUS LIKE 'Threads_connected';"

# PostgreSQL:
psql -c "SELECT * FROM pg_stat_activity;"
psql -c "SELECT * FROM pg_stat_user_tables;"
psql -c "SELECT pid, query FROM pg_stat_activity WHERE state = 'active';"

# Check slow queries:
# MySQL:      SHOW VARIABLES LIKE 'slow_query_log';
# PostgreSQL: log_min_duration_statement = 1000

# Check locks:
mysql -e "SHOW OPEN TABLES WHERE In_use > 0;"
psql -c "SELECT * FROM pg_locks;"

Hands-on Practice Tasks

Task 1: Complete System Monitoring Script

#!/bin/bash # system_monitor.sh - Comprehensive system monitoring # Configuration LOG_FILE="/var/log/system_monitor.log" ALERT_EMAIL="admin@example.com" ALERT_THRESHOLD_CPU=80 ALERT_THRESHOLD_MEMORY=90 ALERT_THRESHOLD_DISK=85 # Get system metrics CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1) MEMORY_USAGE=$(free | awk '/Mem/{printf("%.2f"), $3/$2*100}') DISK_USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//') LOAD_AVERAGE=$(uptime | awk -F'load average:' '{print $2}' | xargs) UPTIME=$(uptime -p) TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S") # Log metrics echo "[$TIMESTAMP] CPU: ${CPU_USAGE}% | Memory: ${MEMORY_USAGE}% | Disk: ${DISK_USAGE}% | Load: ${LOAD_AVERAGE} | Uptime: ${UPTIME}" >> $LOG_FILE # Check thresholds and send alerts send_alert() { local subject=$1 local message=$2 echo "[$TIMESTAMP] ALERT: $subject - $message" >> $LOG_FILE echo "$message" | mail -s "$subject" $ALERT_EMAIL } # CPU alert if (( $(echo "$CPU_USAGE > $ALERT_THRESHOLD_CPU" | bc -l) )); then send_alert "High CPU Usage" "CPU usage is ${CPU_USAGE}% (threshold: ${ALERT_THRESHOLD_CPU}%)" fi # Memory alert if (( $(echo "$MEMORY_USAGE > $ALERT_THRESHOLD_MEMORY" | bc -l) )); then send_alert "High Memory Usage" "Memory usage is ${MEMORY_USAGE}% (threshold: ${ALERT_THRESHOLD_MEMORY}%)" fi # Disk alert if [ $DISK_USAGE -gt $ALERT_THRESHOLD_DISK ]; then send_alert "High Disk Usage" "Disk usage is ${DISK_USAGE}% (threshold: ${ALERT_THRESHOLD_DISK}%)" fi # Check running processes echo -e "\nTop 5 CPU processes:" >> $LOG_FILE ps aux --sort=-%cpu | head -6 >> $LOG_FILE echo -e "\nTop 5 Memory processes:" >> $LOG_FILE ps aux --sort=-%mem | head -6 >> $LOG_FILE # Check disk space by partition echo -e "\nDisk usage by partition:" >> $LOG_FILE df -h >> $LOG_FILE # Check network connections echo -e "\nNetwork connections (ESTABLISHED):" >> $LOG_FILE netstat -an | grep ESTABLISHED | wc -l >> $LOG_FILE # Rotate log if too large LOG_SIZE=$(wc -c < $LOG_FILE) if [ $LOG_SIZE -gt 10485760 ]; then # 10MB mv $LOG_FILE $LOG_FILE.old touch $LOG_FILE fi
# Add to crontab to run every 5 minutes:
crontab -e
# Add: */5 * * * * /path/to/system_monitor.sh

Task 2: Dockerize a Multi-Service Application

# Directory structure: mkdir -p myapp/{app,nginx,mysql} cd myapp # 1. Create Python app (app/app.py): from flask import Flask, jsonify import mysql.connector import redis import os app = Flask(__name__) # Database configuration db_config = { 'host': os.getenv('DB_HOST', 'db'), 'user': os.getenv('DB_USER', 'root'), 'password': os.getenv('DB_PASSWORD', 'password'), 'database': os.getenv('DB_NAME', 'mydb') } # Redis configuration redis_client = redis.Redis( host=os.getenv('REDIS_HOST', 'redis'), port=int(os.getenv('REDIS_PORT', 6379)), decode_responses=True ) @app.route('/') def home(): return jsonify({ 'status': 'ok', 'service': 'flask-app', 'timestamp': datetime.now().isoformat() }) @app.route('/health') def health(): try: # Test database connection conn = mysql.connector.connect(**db_config) conn.close() # Test Redis connection redis_client.ping() return jsonify({'status': 'healthy'}), 200 except Exception as e: return jsonify({'status': 'unhealthy', 'error': str(e)}), 500 @app.route('/cache//') def cache(key, value): redis_client.set(key, value) return jsonify({'key': key, 'value': value}) if __name__ == '__main__': app.run(host='0.0.0.0', port=5000, debug=True) # 2. Create requirements.txt: Flask==2.3.2 mysql-connector-python==8.0.33 redis==4.5.5 # 3. Create Dockerfile (app/Dockerfile): FROM python:3.9-slim WORKDIR /app COPY requirements.txt . RUN pip install --no-cache-dir -r requirements.txt COPY . . CMD ["python", "app.py"] # 4. Create nginx configuration (nginx/nginx.conf): events { worker_connections 1024; } http { upstream flask_app { server app:5000; } server { listen 80; location / { proxy_pass http://flask_app; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } location /health { proxy_pass http://flask_app/health; } } } # 5. Create docker-compose.yml: version: '3.8' services: db: image: mysql:8.0 environment: MYSQL_ROOT_PASSWORD: password MYSQL_DATABASE: mydb volumes: - mysql_data:/var/lib/mysql ports: - "3306:3306" redis: image: redis:7-alpine ports: - "6379:6379" app: build: ./app environment: DB_HOST: db DB_USER: root DB_PASSWORD: password DB_NAME: mydb REDIS_HOST: redis depends_on: - db - redis ports: - "5000:5000" nginx: image: nginx:alpine volumes: - ./nginx/nginx.conf:/etc/nginx/nginx.conf ports: - "80:80" depends_on: - app volumes: mysql_data: # 6. Build and run: docker-compose up --build

Task 3: Kubernetes Deployment with Auto-scaling

# 1. Create namespace: kubectl create namespace myapp # 2. Create deployment.yaml: apiVersion: apps/v1 kind: Deployment metadata: name: webapp namespace: myapp labels: app: webapp spec: replicas: 3 selector: matchLabels: app: webapp template: metadata: labels: app: webapp spec: containers: - name: webapp image: nginx:latest ports: - containerPort: 80 resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 30 periodSeconds: 10 readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 5 periodSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: webapp-service namespace: myapp spec: selector: app: webapp ports: - port: 80 targetPort: 80 type: LoadBalancer --- apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: webapp-hpa namespace: myapp spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: webapp minReplicas: 2 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 70 - type: Resource resource: name: memory target: type: Utilization averageUtilization: 80 # 3. Create configmap.yaml (for configuration): apiVersion: v1 kind: ConfigMap metadata: name: app-config namespace: myapp data: APP_ENV: "production" LOG_LEVEL: "info" MAX_CONNECTIONS: "100" # 4. Create secret.yaml (for sensitive data): apiVersion: v1 kind: Secret metadata: name: app-secrets namespace: myapp type: Opaque data: database-password: cGFzc3dvcmQxMjM= # base64 encoded api-key: YXBpLWtleS1zZWNyZXQ= # 5. Create ingress.yaml (for routing): apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: webapp-ingress namespace: myapp annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: myapp.example.com http: paths: - path: / pathType: Prefix backend: service: name: webapp-service port: number: 80 # 6. Apply all configurations: kubectl apply -f deployment.yaml kubectl apply -f configmap.yaml kubectl apply -f secret.yaml kubectl apply -f ingress.yaml # 7. Monitor deployment: kubectl get all -n myapp kubectl describe hpa webapp-hpa -n myapp kubectl logs deployment/webapp -n myapp kubectl top pods -n myapp

Essential Linux Commands Cheatsheet

# System Information
$ uname -a                      # Kernel version
$ hostname                      # System hostname
$ uptime                        # System uptime
$ cat /etc/os-release           # OS version
$ lscpu                         # CPU information
$ lsblk                         # Block devices
$ lspci                         # PCI devices
$ lsusb                         # USB devices

# Process Management
$ ps aux                        # All processes
$ top                           # Interactive process viewer
$ htop                          # Better top (install first)
$ kill -9 PID                   # Force kill process
$ killall process_name          # Kill all processes by name
$ pkill pattern                 # Kill by pattern
$ nice -n 10 command            # Run with low priority
$ renice 15 PID                 # Change priority

# Networking
$ ip addr show                  # Network interfaces
$ ip route                      # Routing table
$ ss -tulpn                     # Open ports (modern)
$ netstat -tulpn                # Open ports (traditional)
$ traceroute host               # Network path
$ mtr host                      # Better traceroute
$ dig domain.com                # DNS lookup
$ nslookup domain.com           # DNS lookup
$ curl -I url                   # HTTP headers
$ wget url                      # Download file
$ scp file user@host:/path      # Secure copy
$ rsync -avz source dest        # Synchronize files

# Disk Operations
$ df -h                         # Disk space
$ du -sh *                      # Directory sizes
$ fdisk -l                      # Partition table
$ mount                         # Mounted filesystems
$ umount /path                  # Unmount
$ fsck /dev/sda1                # Filesystem check
$ badblocks /dev/sda            # Check for bad blocks
$ smartctl -a /dev/sda          # SMART data

# File Operations
$ find / -name "*.log"          # Find files
$ grep -r "text" /dir           # Search text
$ awk '{print $1}' file         # Process columns
$ sed 's/old/new/g' file        # Replace text
$ sort file                     # Sort lines
$ uniq file                     # Remove duplicates
$ cut -d: -f1 file              # Extract columns
$ tar -czf archive.tar.gz dir   # Create tar
$ tar -xzf archive.tar.gz       # Extract tar
$ zip -r archive.zip dir        # Create zip
$ unzip archive.zip             # Extract zip

# User Management
$ whoami                        # Current user
$ who                           # Logged in users
$ w                             # Who and what they're doing
$ last                          # Last logins
$ passwd username               # Change password
$ useradd username              # Add user
$ usermod -aG group user        # Add user to group
$ userdel username              # Delete user
$ groupadd groupname            # Add group
$ groups username               # User groups

# Package Management
$ apt update                    # Ubuntu/Debian update
$ apt upgrade                   # Ubuntu/Debian upgrade
$ apt install package           # Ubuntu/Debian install
$ yum update                    # RHEL/CentOS update
$ yum install package           # RHEL/CentOS install
$ dnf update                    # Fedora update
$ dnf install package           # Fedora install
$ snap install package          # Snap packages
$ pip install package           # Python packages
$ npm install package           # Node.js packages

# Service Management (systemd)
$ systemctl start service
$ systemctl stop service
$ systemctl restart service
$ systemctl reload service
$ systemctl status service
$ systemctl enable service
$ systemctl disable service
$ systemctl daemon-reload
$ journalctl -u service         # Service logs
$ journalctl -f                 # Follow logs
$ journalctl --since "1 hour ago"
$ journalctl --boot             # Current boot logs
Practice Exercise: Set up a Linux VM (VirtualBox) or use Docker containers to practice these commands and scenarios. Break things intentionally and learn how to fix them!
