Real-World Terraform Project Structure and Patterns

Published on: November 24, 2023 | Author: DevOps Engineering Team

Discover enterprise-grade Terraform project structures, scalable patterns, and battle-tested practices for managing complex infrastructure as code.

Project Structure Patterns

Choose the right structure based on your organization's size and complexity.


🏗️ Monolithic Structure

Best for: Small projects, single environment

📁 my-project/
📄 main.tf
📄 variables.tf
📄 outputs.tf
📄 terraform.tfvars
📄 providers.tf

🌳 Environment per Folder

Best for: Multiple environments, shared modules

📁 infrastructure/
📁 modules/
📁 environments/
📁 dev/
📁 staging/
📁 production/
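
Each environment directory acts as its own root module and simply wires in the shared modules. A minimal sketch of what environments/dev/main.tf could look like (module name, source path, and input values are illustrative):

# environments/dev/main.tf (illustrative)
module "network" {
  source = "../../modules/network"  # shared module reused by every environment

  environment = "dev"
  cidr_block  = "10.0.0.0/16"
}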

Module Design Principles

Create reusable, maintainable Terraform modules.

Standard Module Structure

📁 modules/vpc/
📄 main.tf # Primary resources
📄 variables.tf # Input variables
📄 outputs.tf # Output values
📄 versions.tf # Version constraints
📄 README.md # Documentation
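
The composite example below reads module.vpc.vpc_id and module.vpc.private_subnet_ids, so the VPC module's outputs.tf must expose them. A hedged sketch (the underlying resource names are assumptions):

# modules/vpc/outputs.tf (sketch)
output "vpc_id" {
  description = "ID of the VPC created by this module"
  value       = aws_vpc.this.id             # assumes the VPC resource is named "this"
}

output "private_subnet_ids" {
  description = "IDs of the private subnets"
  value       = aws_subnet.private[*].id    # assumes a counted subnet resource named "private"
}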

Composite Module Example

# modules/eks-cluster/main.tf
module "vpc" {
  source = "../vpc"
  
  name               = var.cluster_name
  cidr_block         = var.vpc_cidr
  availability_zones = var.availability_zones
}

module "eks" {
  source = "../eks"
  
  cluster_name    = var.cluster_name
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnet_ids
  node_group_name = var.node_group_name
}
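
The composite module must also declare the inputs it forwards. A minimal variables.tf sketch matching the references above (types and descriptions are assumptions):

# modules/eks-cluster/variables.tf (sketch)
variable "cluster_name" {
  type        = string
  description = "Name used for both the VPC and the EKS cluster"
}

variable "vpc_cidr" {
  type        = string
  description = "CIDR block for the cluster VPC"
}

variable "availability_zones" {
  type        = list(string)
  description = "Availability zones for the VPC subnets"
}

variable "node_group_name" {
  type        = string
  description = "Name of the EKS managed node group"
}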

Module Design Best Practices

  • Keep modules focused and single-purpose
  • Use semantic versioning for modules (see the version-pinning sketch after this list)
  • Provide comprehensive documentation
  • Include examples in each module
  • Validate inputs with validation blocks
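
Pinning a module to a semantic version range keeps every consumer on a known-good release. A minimal sketch using a public registry module as an example (the source and version shown are illustrative):

# Consume a shared module at a pinned semantic version
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"  # registry module used purely as an example
  version = "~> 5.0"                         # accept any 5.x release, never 6.0

  name = "example"
  cidr = "10.0.0.0/16"
}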

Environment Strategies

Manage multiple environments effectively with Terraform.

✅ Workspace Benefits

  • Single codebase for all environments
  • Easy to create new environments
  • Consistent configuration
  • Simplified CI/CD pipelines

❌ Workspace Challenges

  • State isolation complexity
  • Limited environment-specific customization
  • Risk of cross-environment contamination
  • Harder to manage different configurations
Approach              | Workspaces                      | Folder per Environment | Branch per Environment
State Management      | Single backend, multiple states | Separate backends      | Separate backends
Code Reuse            | High                            | Medium (via modules)   | Low
Environment Isolation | Medium                          | High                   | High
Complexity            | Low                             | Medium                 | High
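
With workspaces, a single configuration serves every environment and the active workspace name drives the differences. A short sketch of the usual pattern (the variable values are illustrative):

# terraform workspace new dev            # create and switch to the "dev" workspace
# terraform workspace select production  # switch to an existing workspace

locals {
  instance_types = {
    dev        = "t3.micro"
    staging    = "t3.small"
    production = "m5.large"
  }

  # terraform.workspace holds the name of the active workspace
  instance_type = local.instance_types[terraform.workspace]
}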

State Management

Secure and scalable Terraform state management strategies.

🔐 Remote State Configuration

# terraform.tf
terraform {
  required_version = ">= 1.5.0"
  
  backend "s3" {
    bucket         = "my-company-tf-state"
    key            = "production/network/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
  
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# providers.tf
provider "aws" {
  region = var.aws_region
  
  default_tags {
    tags = {
      Environment = var.environment
      Project     = var.project_name
      ManagedBy   = "terraform"
    }
  }
}
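
Other root modules can consume this state's outputs through the terraform_remote_state data source. A sketch, assuming the network configuration exports a vpc_id output (the output name is an assumption):

# Read outputs published by the network state
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-company-tf-state"
    key    = "production/network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Reference it elsewhere, for example:
#   vpc_id = data.terraform_remote_state.network.outputs.vpc_id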

🏗️ State Isolation Pattern

📁 infrastructure/
📁 global/
📄 backend.hcl
📄 terraform.tf
📁 regional/
📁 us-east-1/
📁 eu-west-1/
📁 environments/
📁 dev/
📁 staging/
📁 production/
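
In this layout, backend.hcl typically carries the shared backend settings while each root module keeps an empty backend block and supplies its own state key at init time. A hedged sketch (values and paths are illustrative):

# global/backend.hcl (shared settings)
bucket         = "my-company-tf-state"
region         = "us-east-1"
encrypt        = true
dynamodb_table = "terraform-state-lock"

# In each root module, declare an empty backend:
#   terraform {
#     backend "s3" {}
#   }
#
# Then pass the shared settings and a per-module key at init time:
#   terraform init \
#     -backend-config=../../global/backend.hcl \
#     -backend-config="key=environments/dev/terraform.tfstate"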

Best Practices

Enterprise-grade practices for scalable Terraform codebases.

📝 Naming Conventions

# Resources: {project}_{component}_{name}
resource "aws_s3_bucket" "myapp_logs_primary" {}

# Variables: descriptive with type
variable "database_instance_count" {
  type    = number
  default = 2
}

# Outputs: {component}_{attribute}
output "vpc_main_id" {
  value = aws_vpc.main.id
}

🔧 Configuration Management

# Use tfvars for environment config
# dev.tfvars
environment = "dev"
instance_type = "t3.micro"
database_size = 10

# production.tfvars  
environment = "production"
instance_type = "m5.large"
database_size = 100

# Apply with:
terraform apply -var-file=dev.tfvars

📚 Code Organization

# Separate files by concern:
# - providers.tf    # Provider configuration
# - main.tf         # Primary resources
# - variables.tf    # Input variables  
# - outputs.tf      # Output values
# - data.tf         # Data sources
# - locals.tf       # Local values
# - versions.tf     # Version constraints

🛡️ Security & Compliance

# versions.tf - Pin versions
terraform {
  required_version = "~> 1.5"
  
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Use validation blocks
variable "instance_type" {
  type        = string
  description = "EC2 instance type"
  
  validation {
    condition     = contains(["t3.micro", "t3.small"], var.instance_type)
    error_message = "Invalid instance type."
  }
}

Avoid These Common Mistakes

  • ❌ Hardcoding values instead of using variables
  • ❌ Not pinning provider and Terraform versions
  • ❌ Mixing environments in single state
  • ❌ Not using remote state with locking
  • ❌ Creating overly complex modules
  • ❌ Ignoring security scanning tools

Migration Patterns

Strategies for evolving your Terraform project structure.

1. Monolithic to Modular

# Before: Everything in main.tf
resource "aws_vpc" "main" { ... }
resource "aws_subnet" "public" { ... }
resource "aws_instance" "web" { ... }

# After: Extract to modules
module "network" {
  source = "./modules/network"
  # ... inputs
}

module "compute" {
  source = "./modules/compute"
  # ... inputs
}
2. State Migration

# Move resources to new state
# 1. Pull current state
terraform state pull > current.tfstate

# 2. Move specific resource
terraform state mv \
  'aws_instance.web' \
  'module.compute.aws_instance.web'

# 3. For complex migrations, use:
# - terraform import
# - Multiple state moves
# - State file editing (carefully!)
3. Backend Migration

# Update backend configuration
terraform {
  backend "s3" {
    # New backend config
    bucket = "new-state-bucket"
    key    = "path/to/state"
    region = "us-west-2"
  }
}

# Then run:
terraform init -migrate-state

# Confirm migration when prompted
# Verify state is accessible

Enterprise Project Structure Example

📁 terraform-enterprise/
📁 modules/
📁 networking/
📁 compute/
📁 database/
📁 security/
📁 environments/
📁 dev/
📁 us-east-1/
📁 eu-west-1/
📁 staging/
📁 production/
📁 global/
📁 iam/
📁 dns/
📄 README.md
📄 versions.tf
📄 .terraform-version
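
The .terraform-version file pins the CLI release for version managers such as tfenv, so developers and CI runners use the same Terraform binary. A minimal example (the version number is illustrative):

# .terraform-version
1.5.7

# With tfenv installed:
#   tfenv install   # installs the release listed in .terraform-version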

This is Part 13 of The Ultimate Terraform Mastery Series.

Next: Advanced Terraform Patterns →

Integrating Terraform into CI/CD Pipelines

Published on: November 17, 2023 | Author: DevOps Engineering Team

Learn to automate Terraform deployments with robust CI/CD pipelines. Implement security, testing, and promotion workflows across multiple platforms.

Pipeline Design Patterns

Choose the right pipeline architecture for your Terraform workflows.

A typical pipeline flow: Code Commit → Validation → Plan → Security Scan → Apply (Manual) → Verify

🚀 Trunk-based Development

Ideal for: Small teams, rapid iteration

main branch → Auto-plan
PR merge → Auto-apply (staging)
Manual promotion to production

🏢 GitFlow Strategy

Ideal for: Enterprise, multiple environments

feature/* → Develop → Plan
develop → Staging → Apply
main → Production → Apply

Pipeline Design Principles

✅ Separate plan and apply stages
✅ Manual approval for production
✅ Comprehensive testing
✅ Security scanning
✅ Rollback capabilities

CI Platform Implementations

Terraform integration examples for popular CI/CD platforms.

🐙 GitHub Actions

# .github/workflows/terraform.yml
name: 'Terraform'
on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.0
      
      - name: Terraform Format
        run: terraform fmt -check
      
      - name: Terraform Init
        run: terraform init
      
      - name: Terraform Validate
        run: terraform validate
      
      - name: Terraform Plan
        run: terraform plan
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

🦊 GitLab CI

# .gitlab-ci.yml
stages:
  - validate
  - plan
  - apply

terraform:validate:
  stage: validate
  image: hashicorp/terraform:latest
  script:
    - terraform init
    - terraform validate
    - terraform fmt -check

terraform:plan:
  stage: plan
  image: hashicorp/terraform:latest
  script:
    - terraform init
    - terraform plan -out=planfile
  artifacts:
    paths:
      - planfile
  only:
    - merge_requests
    - main

terraform:apply:
  stage: apply
  image: hashicorp/terraform:latest
  script:
    - terraform init
    - terraform apply planfile  # a saved plan applies without interactive approval
  when: manual
  only:
    - main

🔷 Azure DevOps

# azure-pipelines.yml
trigger:
  branches:
    include:
    - main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: TerraformInstaller@1
  inputs:
    terraformVersion: '1.5.0'

- task: TerraformTaskV4@4
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)'

- task: TerraformTaskV4@4
  inputs:
    provider: 'azurerm'
    command: 'validate'
    workingDirectory: '$(System.DefaultWorkingDirectory)'

- task: TerraformTaskV4@4
  inputs:
    provider: 'azurerm'
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)'
    environmentServiceName: 'Azure-Service-Connection'

⚙️ Jenkins Pipeline

// Jenkinsfile
pipeline {
    agent any
    environment {
        AWS_ACCESS_KEY_ID = credentials('aws-access-key')
        AWS_SECRET_ACCESS_KEY = credentials('aws-secret-key')
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Terraform Init') {
            steps {
                sh 'terraform init'
            }
        }
        stage('Terraform Validate') {
            steps {
                sh 'terraform validate'
            }
        }
        stage('Terraform Plan') {
            steps {
                sh 'terraform plan -out=tfplan'
            }
        }
        stage('Terraform Apply') {
            when {
                branch 'main'
            }
            steps {
                input message: 'Apply Terraform?', ok: 'Apply'
                sh 'terraform apply tfplan'
            }
        }
    }
}

Security Best Practices

Secure your Terraform pipelines and infrastructure.

🔐 Secrets Management

# Never store secrets in code
# Use CI/CD secret stores:

# GitHub Actions:
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY }}

# GitLab CI:
variables:
  AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY}

# Jenkins:
environment {
  AWS_ACCESS_KEY = credentials('aws-key')
}

# Azure DevOps:
- task: AzureKeyVault@1
  inputs:
    azureSubscription: '$(azureSubscription)'
    KeyVaultName: '$(keyVaultName)'

🛡️ Infrastructure Security

# Integrate security scanning
- name: TFSec Security Scan
  run: |
    docker run --rm -v "$(pwd):/src" \
    aquasec/tfsec /src

- name: Checkov Scan
  run: |
    pip install checkov
    checkov -d .

- name: Terraform Compliance
  run: |
    docker run --rm -v "$(pwd):/target" \
    eerkunt/terraform-compliance

Security Considerations

🔒 Use minimal privilege IAM roles
🔒 Enable state encryption
🔒 Scan for secrets in code
🔒 Implement branch protection
🔒 Require code reviews

Automated Testing

Implement comprehensive testing in your Terraform pipelines.

1. Syntax and Validation Testing

# Pre-commit hooks or pipeline steps
terraform validate
terraform fmt -check -recursive
tflint --enable-rule=aws_instance_invalid_type
tfsec --exclude-downloaded-modules
2. Unit Testing with Terratest

// terraform_test.go
package test

import (
    "testing"

    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

func TestTerraform(t *testing.T) {
    terraformOptions := &terraform.Options{
        TerraformDir: "../",
    }
    defer terraform.Destroy(t, terraformOptions)
    terraform.InitAndApply(t, terraformOptions)
    
    // Add assertions here
    instanceID := terraform.Output(t, terraformOptions, "instance_id")
    assert.NotEmpty(t, instanceID)
}
3. Integration Testing

# kitchen-terraform example
# .kitchen.yml
---
driver:
  name: terraform

provisioner:
  name: terraform

platforms:
  - name: aws

suites:
  - name: default
    verifier:
      name: terraform
      systems:
        - name: basic
          controls:
            - state_file
            - output_values

Environment Promotion

Manage infrastructure changes across multiple environments.

Environment Promotion Workflow

Environment | Automation Level        | Approval Required  | Testing Requirements
Development | Full auto-apply         | None               | Basic validation
Staging     | Auto-plan, manual apply | Team Lead          | Integration tests
Production  | Manual plan and apply   | Multiple approvers | Full test suite + security scan

Troubleshooting Common Issues

Solve frequent problems in Terraform CI/CD pipelines.

State Locking Issues

# In CI/CD, handle state locks gracefully
- name: Terraform Apply with retry
  run: |
    max_retries=3
    count=0
    until terraform apply -auto-approve; do
        count=$((count + 1))
        if [ $count -eq $max_retries ]; then
            echo "Max retries reached"
            exit 1
        fi
        echo "Retrying in 10 seconds..."
        sleep 10
    done
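
If a crashed run leaves a stale lock behind, it can be cleared manually. Terraform's force-unlock command takes the lock ID printed in the locking error; use it only when you are certain no other run is still in progress:

# Clear a stale lock (LOCK_ID comes from the "Error acquiring the state lock" message)
terraform force-unlock <LOCK_ID>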

Timeout Handling

# Set appropriate timeouts
# GitHub Actions
timeout-minutes: 30

# GitLab CI
extends: .terraform_timeout

.terraform_timeout:
  timeout: 2h

# Jenkins
timeout(time: 30, unit: 'MINUTES') {
    sh 'terraform apply'
}

# Azure DevOps
timeoutInMinutes: 60

Pipeline Optimization Tips

  • Cache Terraform providers and plugins (see the sketch after this list)
  • Use parallel stages for independent modules
  • Implement pipeline templates for consistency
  • Set resource limits to prevent cost overruns
  • Use matrix builds for multiple Terraform versions
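
Provider caching is usually enabled through Terraform's CLI configuration file or an environment variable, so repeated init steps reuse already-downloaded plugins. A short sketch (paths are illustrative):

# ~/.terraformrc
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"

# Or per pipeline job (the directory must already exist):
#   mkdir -p "$CI_PROJECT_DIR/.terraform-plugin-cache"
#   export TF_PLUGIN_CACHE_DIR="$CI_PROJECT_DIR/.terraform-plugin-cache"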

This is Part 12 of The Ultimate Terraform Mastery Series.

Next: Advanced Terraform Patterns →

Terraform Cloud and Terraform Enterprise: A Complete Guide

Published on: November 10, 2023 | Author: DevOps Engineering Team

Master Terraform's collaboration platforms - from free-tier Terraform Cloud to enterprise-grade solutions. Learn remote operations, team workflows, and security features.

Terraform Cloud vs Terraform Enterprise

Understand the differences between HashiCorp's managed and self-managed Terraform platforms.

🚀 Terraform Cloud

Fully managed SaaS platform for teams and individuals.

  • Free tier available
  • Remote state storage
  • Remote operations
  • VCS integration
  • Team management
  • Private registry

🏢 Terraform Enterprise

Self-managed platform for enterprise requirements.

  • All TFC features
  • Private installation
  • Enhanced security
  • Audit logging
  • SAML/SSO integration
  • Premium support

Choosing the Right Platform

Terraform Cloud is ideal for most teams needing collaboration features. Terraform Enterprise is for organizations requiring private deployment, enhanced security, and compliance features.

Key Features Comparison

Detailed comparison of capabilities across both platforms.

Feature                 | Terraform Cloud         | Terraform Enterprise
Pricing Model           | Free + Paid Tiers       | Annual Subscription
Deployment              | SaaS (Managed)          | Self-Hosted
Remote Operations       | ✓                       | ✓
Private Module Registry | ✓                       | ✓
SAML/SSO                | Team & Governance Tiers | ✓
Audit Logging           | Governance Tier         | ✓
Sentinel Policies       | Team & Governance Tiers | ✓

Getting Started with Terraform Cloud

Set up your first workspace and configure remote operations.

1. Create Terraform Cloud Account

# Sign up at https://app.terraform.io
# Verify email and create organization
# Generate user API token for CLI
2. Configure CLI Backend

# terraform.tf
terraform {
  cloud {
    organization = "your-org-name"
    
    workspaces {
      name = "production-network"
    }
  }
}

# Then run:
terraform login
terraform init
3. Set Workspace Variables

# In TFC UI, set environment variables:
# - AWS_ACCESS_KEY_ID (sensitive)
# - AWS_SECRET_ACCESS_KEY (sensitive)
# - TF_VAR_environment = "production"

# Variables attach to the workspace selected by the cloud block:
terraform {
  cloud {
    organization = "your-org"
    
    workspaces {
      name = "production"
    }
  }
}


Remote Operations

Execute Terraform runs remotely with enhanced capabilities.

Remote Plan and Apply

# With TFC backend configured:
terraform plan   # Runs remotely
terraform apply  # Runs remotely

# Monitor in TFC UI
# View logs and outputs
# Cost estimation available

Run Triggers

# Automatic runs on VCS changes
# Manual runs via CLI/API
# Scheduled runs (cron)

# Example: Auto-apply on merge
# - VCS branch: main
# - Auto-apply: enabled
# - Trigger: pull request merge

Remote Operations Benefits

✅ Consistent execution environment
✅ Shared state locking
✅ Enhanced security
✅ Cost estimation
✅ Run history and auditing

Team Collaboration Features

Enable effective teamwork with Terraform Cloud's collaboration tools.

Team Workflow Architecture

Developer → VCS Pull Request → TFC Plan → Team Review → TFC Apply → Infrastructure
Collaboration Feature | Description                | Benefit
Team Permissions      | Role-based access control  | Granular security
Run Policies          | Manual apply vs auto-apply | Change control
Notifications         | Slack, email, webhooks     | Team awareness
State Versions        | State file history         | Audit and rollback
Private Registry      | Share modules/providers    | Code reuse

Security & Governance

Enterprise-grade security features for compliance and control.

Sentinel Policies

# require-s3-encryption.sentinel
import "tfplan"

main = rule {
  all tfplan.resources.aws_s3_bucket as _, buckets {
    all buckets as _, bucket {
      bucket.applied.server_side_encryption_configuration is not null
    }
  }
}

# Policy checks:
# - Require S3 encryption
# - Restrict instance types
# - Enforce tagging
# - Limit resource counts

Variable Security

# Secure variable handling:
# - Mark sensitive variables
# - Environment-specific values
# - HCL variable definitions

variable "database_password" {
  type      = string
  sensitive = true
}

# In TFC:
# - Set as sensitive variable
# - Encrypted at rest
# - Never shown in UI/logs

Enterprise Security Features

🔐 SAML/SSO integration
🔐 Audit logging
🔐 Private network connectivity
🔐 Custom CA certificates
🔐 Compliance certifications

Pricing Overview

Terraform Cloud

$0 - $70/user/month
  • Free: Individuals & small teams
  • Team: $20/user/month
  • Governance: $70/user/month
  • Pay-as-you-go available

Terraform Enterprise

Contact Sales
  • Annual subscription
  • Based on usage/nodes
  • Premium support included
  • Volume discounts available

Migration Tips

  • Start with Terraform Cloud Free tier for evaluation
  • Use remote state migration for existing infrastructure (see the sketch after this list)
  • Implement Sentinel policies gradually
  • Train teams on collaboration workflows
  • Establish naming conventions early
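
For the remote state migration tip, the usual path from an existing S3 backend into Terraform Cloud is to swap the backend block for a cloud block and re-run init, which offers to copy the current state. A hedged sketch reusing the organization and workspace names from the steps above:

# terraform.tf - replace the previous backend "s3" block with:
terraform {
  cloud {
    organization = "your-org-name"

    workspaces {
      name = "production-network"
    }
  }
}

# Then:
#   terraform login
#   terraform init   # prompts to migrate the existing state into the new workspace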

This is Part 11 of The Ultimate Terraform Mastery Series.

Next: Terraform Modules Deep Dive →
