
How to Create and Use Terraform Modules for Reusable Code


Your complete guide to building, publishing, and consuming Terraform modules—from local components to shared infrastructure libraries used across your entire organization.

📅 Published: Feb 2026
⏱️ Estimated Reading Time: 26 minutes
🏷️ Tags: Terraform Modules, Reusable Infrastructure, Module Composition, Module Registry, Infrastructure as Code


🧩 Introduction: Why Modules Are the Heart of Terraform

The Copy-Paste Trap

Every Terraform team eventually faces this moment. You've successfully built infrastructure for one application. Then a second team needs similar infrastructure. Then a third. Your instinct is to copy the working configuration and modify it slightly.

bash
cp -r team-a-infrastructure team-b-infrastructure
sed -i 's/app-a/app-b/g' team-b-infrastructure/main.tf

This feels efficient—for about 15 minutes. Then you discover:

❌ A security vulnerability in the original configuration. Now you must find and fix it in five, ten, or fifty copies.

❌ A new feature (like encryption) needs to be added to all applications. Each team implements it slightly differently, creating configuration drift.

❌ A team makes a "small tweak" that works for them but breaks a critical assumption other teams relied on.

❌ New team members struggle to understand why there are fifteen nearly-identical-but-slightly-different configurations.

This is the copy-paste trap, and it's killed more infrastructure initiatives than any technical failure.


What Modules Solve

Modules are Terraform's solution to this problem. A module is a container for multiple resources that are used together. It's like a function in programming—you define it once, give it a clear interface, and reuse it everywhere.

hcl
# Instead of copying 50 lines of configuration...
module "web_app_a" {
  source = "./modules/web-application"
  
  name        = "app-a"
  environment = "production"
  instance_count = 3
}

module "web_app_b" {
  source = "./modules/web-application"
  
  name        = "app-b"
  environment = "production"
  instance_count = 5
}

Modules transform infrastructure from copy-pasted scripts into composable, reusable components. They are the difference between "infrastructure as code" and "infrastructure as copy-paste."


What You'll Learn

✅ Module fundamentals — What modules are and when to create them
✅ Module structure — The standard layout every module should follow
✅ Module composition — How to call modules and pass data between them
✅ Module versioning — Publishing, versioning, and consuming modules safely
✅ Module testing — Ensuring modules work correctly before release
✅ Module registry — Sharing modules with your team and the world
✅ Module design patterns — Industry best practices for module interfaces
✅ Refactoring — Converting monolithic configurations into modules


📦 What Is a Module? (The Mental Model)

Modules Are Functions for Infrastructure

If Terraform resources are like statements in a programming language, modules are like functions. They:

  1. Accept input — Through input variables

  2. Perform operations — Create any number of resources

  3. Return output — Through output values

  4. Encapsulate complexity — Hide implementation details

  5. Are reusable — Call the same module many times with different parameters

hcl
# Function analogy in pseudocode
function create_vpc(name, cidr, subnets):
  vpc = aws_vpc.create(name, cidr)
  for each subnet in subnets:
    aws_subnet.create(vpc.id, subnet)
  return vpc.id, subnet_ids

Every Terraform configuration is itself a module. The "root module" is simply the module you're currently working in. Child modules are called from within your configuration.


The Module Contract

Every module has an implicit contract with its callers:

hcl
module "vpc" {
  source = "./modules/aws-vpc"
  
  # INPUTS: Variables the module expects
  name               = "main"
  cidr_block         = "10.0.0.0/16"
  public_subnets     = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets    = ["10.0.10.0/24", "10.0.20.0/24"]
}

# OUTPUTS: Values the module returns
locals {
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.subnet_ids
}

A well-designed module:

  • Has a clear, minimal interface — only the variables it actually needs

  • Has sensible defaults — optional variables work out of the box

  • Hides complexity — callers don't need to understand how it works internally

  • Is composable — outputs can be passed as inputs to other modules

  • Is versioned — changes are tracked and communicated


📁 Module Structure: The Standard Layout

Every Module Needs These Files

A well-structured module follows a consistent, predictable pattern. This isn't enforced by Terraform, but it's enforced by the most important system of all: the human brain.

text
modules/
└── your-module-name/
    ├── main.tf           # Primary resource definitions
    ├── variables.tf      # Input variable declarations
    ├── outputs.tf        # Output value declarations
    ├── versions.tf       # Terraform and provider constraints
    ├── README.md         # Documentation (non-negotiable!)
    ├── examples/         # (Optional) Usage examples
    │   └── basic-usage/
    │       ├── main.tf
    │       └── variables.tf
    └── tests/            # (Optional) Integration tests
        └── main.tf
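This layout can be enforced with a small shell check in CI. The sketch below is illustrative (the `/tmp/demo-module` path and `check_module_layout` name are not from any standard tool):

```shell
# Fail fast when a module directory lacks the standard files.
# The directory name below is purely illustrative.
check_module_layout() {
  local dir="$1" missing=0
  for f in main.tf variables.tf outputs.tf README.md; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  return "$missing"
}

# Demo: a module with only two of the four files present
mkdir -p /tmp/demo-module
touch /tmp/demo-module/main.tf /tmp/demo-module/variables.tf
check_module_layout /tmp/demo-module || echo "layout incomplete"
```

Wired into a pipeline, a nonzero exit status blocks the merge until the missing files are added.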

The Three Essential Files

1. variables.tf — The Module's Interface

This file declares everything a caller must (or may) provide:

hcl
# variables.tf
variable "name" {
  description = "Name prefix for all resources"
  type        = string
}

variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
  
  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "enable_nat_gateway" {
  description = "Enable NAT gateway for private subnets"
  type        = bool
  default     = false
}

Key principles:

  • Every variable has a description

  • Every variable has a type (no type = any without reason)

  • Required variables have no default

  • Optional variables have sensible defaults
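Validation blocks can also guard value formats, not just enumerations. For example, a CIDR check built from Terraform's `can` and `cidrhost` functions:

```hcl
variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"

  validation {
    condition     = can(cidrhost(var.vpc_cidr, 0))
    error_message = "vpc_cidr must be a valid IPv4 CIDR block."
  }
}
```

If the expression inside `can()` would error on a malformed CIDR, validation fails at plan time with your message instead of a cryptic provider error at apply time.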


2. main.tf — The Module's Implementation

This file contains the actual resource definitions:

hcl
# main.tf
resource "aws_vpc" "this" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true
  
  tags = {
    Name        = "${var.name}-vpc"
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

resource "aws_subnet" "public" {
  count = length(var.public_subnet_cidrs)
  
  vpc_id                  = aws_vpc.this.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true
  
  tags = {
    Name        = "${var.name}-public-${count.index + 1}"
    Environment = var.environment
    Type        = "public"
  }
}

resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id
  
  tags = {
    Name        = "${var.name}-igw"
    Environment = var.environment
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.this.id
  
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this.id
  }
  
  tags = {
    Name        = "${var.name}-public-rt"
    Environment = var.environment
  }
}

resource "aws_route_table_association" "public" {
  count = length(aws_subnet.public)
  
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

Key principles:

  • All resource names are locally meaningful ("this", "main", or descriptive names)

  • Resources use input variables for configuration

  • Tags include environment and module name for traceability

  • Count and for_each enable flexible resource creation
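The count-based subnets above can also be written with for_each, which keys each instance by a stable name so removing one subnet doesn't renumber the rest. A sketch, assuming a map variable that is not part of the module shown above:

```hcl
# Assumed variable: map of availability zone => subnet CIDR
variable "public_subnets" {
  description = "Map of availability zone => subnet CIDR"
  type        = map(string)
}

resource "aws_subnet" "public_by_az" {
  for_each = var.public_subnets

  vpc_id                  = aws_vpc.this.id
  cidr_block              = each.value
  availability_zone       = each.key
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.name}-public-${each.key}"
  }
}
```

With count, deleting the first subnet shifts every later index and forces replacements; with for_each, each subnet's state address is tied to its AZ key.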


3. outputs.tf — The Module's Return Values

This file declares what information the module exposes to callers:

hcl
# outputs.tf
output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.this.id
}

output "vpc_cidr_block" {
  description = "CIDR block of the created VPC"
  value       = aws_vpc.this.cidr_block
}

output "public_subnet_ids" {
  description = "IDs of public subnets"
  value       = aws_subnet.public[*].id
}

output "public_subnet_cidrs" {
  description = "CIDR blocks of public subnets"
  value       = aws_subnet.public[*].cidr_block
}

output "availability_zones" {
  description = "Availability zones used"
  value       = var.availability_zones
}

output "nat_gateway_ips" {
  description = "Elastic IPs of NAT gateways"
  value       = var.enable_nat_gateway ? aws_eip.nat[*].public_ip : []
}

Key principles:

  • Every output has a description

  • Outputs are minimal — expose only what callers need

  • Outputs are typed — lists remain lists, objects remain objects

  • Conditional outputs use sensible empty values (empty list, null, etc.)


🎯 When to Create a Module (And When Not To)

The Module Decision Matrix

| Scenario | Module? | Why |
|----------|---------|-----|
| One-time infrastructure | ❌ No | Just write the resources directly |
| Infrastructure used by one team | 🤔 Maybe | If it's small and stable, maybe not |
| Infrastructure used by multiple teams | ✅ Yes | Consistency and maintenance |
| Infrastructure with complex internal logic | ✅ Yes | Encapsulate complexity |
| Infrastructure that's still evolving rapidly | ⏸️ Wait | Premature abstraction is harmful |
| Infrastructure that's well-understood and stable | ✅ Yes | Perfect module candidate |

Signs You Need a Module

1. You find yourself copying and pasting more than three times

bash
grep -r 'resource "aws_vpc"' --include="*.tf" . | wc -l
# Returns 47 — you've defined VPCs in 47 places!

2. You need to make the same change in multiple places

bash
# You just found a security vulnerability in your VPC configuration
# Now you need to update it in 15 different directories

3. You're building infrastructure that other teams will consume

hcl
# Platform team provides a "standard VPC" module
# Application teams call it without reinventing networking

4. Your configuration exceeds 200-300 lines of code

hcl
# main.tf has grown to 800 lines and is hard to navigate
# Time to split into logical modules

5. You need to enforce organizational standards

hcl
# Every VPC must have these tags, these security settings, this naming convention
# A module enforces this automatically

Signs You're Module-Crazy (Anti-Patterns)

❌ Creating a module for every single resource

hcl
# PLEASE don't do this
module "s3_bucket" {
  source = "./modules/aws-s3-bucket"  # 5-line module that just wraps the resource!
}

❌ Modules so specific they're never reused

hcl
# This module is so tightly coupled to one application that no one else can use it
module "team_a_special_snowflake" {
  # ...
}

❌ Modules with no clear abstraction boundary

hcl
# Callers need to know about VPC internals, subnet math, and routing
# The module isn't abstracting anything—it's just moving code

❌ Over-parameterization

hcl
variable "aws_region" {}  # Already set by provider
variable "vpc_id" {}      # If the module creates the VPC, why accept it as input?
variable "subnet_cidrs" {}  # Defaults would cover 90% of use cases

🏗️ Module Composition: Calling Modules

The Root Module

Every Terraform configuration is a module. The configuration in your current working directory is the root module.

hcl
# root/main.tf
terraform {
  required_version = ">= 1.5"
  
  backend "s3" {
    # ...
  }
}

provider "aws" {
  region = var.region
}

# Call child modules
module "vpc" {
  source = "./modules/aws-vpc"
  
  name        = var.project_name
  environment = var.environment
  vpc_cidr    = var.vpc_cidr
}

module "eks" {
  source = "./modules/aws-eks"
  
  cluster_name    = "${var.project_name}-${var.environment}"
  subnet_ids      = module.vpc.private_subnet_ids
  node_group_size = var.node_group_size
}

Module Sources: Where Modules Live

Local paths — For modules in the same repository:

hcl
module "vpc" {
  source = "./modules/aws-vpc"  # Relative path
  # or
  source = "../shared-modules/aws-vpc"  # Parent directory
}

Git repositories — For versioned, shared modules:

hcl
module "vpc" {
  source = "git::https://github.com/your-org/terraform-aws-vpc.git?ref=v1.2.0"
}

module "eks" {
  source = "git::ssh://git@github.com/your-org/terraform-aws-eks.git?ref=v2.1.0"
}

Terraform Registry — For public modules:

hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"
}

HTTP URLs — For direct access:

hcl
module "consul" {
  source = "https://example.com/terraform-consul-module.zip"
}

Passing Data Between Modules

This is where modules become truly powerful—composing them like building blocks.

hcl
# 1. VPC module creates networking infrastructure
module "vpc" {
  source = "./modules/aws-vpc"
  
  name        = var.project_name
  environment = var.environment
  vpc_cidr    = var.vpc_cidr
}

# 2. Security group module uses VPC ID from VPC module
module "security_groups" {
  source = "./modules/aws-security-groups"
  
  vpc_id = module.vpc.vpc_id  # ← Output from VPC module
  environment = var.environment
}

# 3. EKS module uses subnet IDs from VPC module
module "eks" {
  source = "./modules/aws-eks"
  
  cluster_name    = "${var.project_name}-${var.environment}"
  subnet_ids      = module.vpc.private_subnet_ids  # ← Output from VPC module
  vpc_id          = module.vpc.vpc_id              # ← Output from VPC module
  
  node_security_group_id = module.security_groups.node_security_group_id  # ← Output from SG module
}

# 4. RDS module uses subnet group from VPC module
module "rds" {
  source = "./modules/aws-rds"
  
  database_name   = "${var.project_name}-db"
  subnet_ids      = module.vpc.private_subnet_ids  # ← Output from VPC module
  security_groups = [module.security_groups.database_security_group_id]  # ← Output from SG module
}

This is infrastructure as code at its best. Each module has a single responsibility. Modules are composed through their outputs. The root configuration reads like a blueprint of your entire infrastructure.


🔖 Module Versioning: The Critical Practice

Why Versioning Matters

You update a module to add a new feature. Suddenly, 27 teams' infrastructure behaves differently. Some teams get the new feature automatically (maybe they want it, maybe they don't). Others get breaking changes without warning.

Versioning solves this. It's the contract between module maintainers and module consumers.


Semantic Versioning for Modules

Follow Semantic Versioning (SemVer): MAJOR.MINOR.PATCH

| Version | When to increment | Example |
|---------|-------------------|---------|
| MAJOR | Breaking changes | v2.0.0 |
| MINOR | New features, backward compatible | v1.3.0 |
| PATCH | Bug fixes, backward compatible | v1.2.1 |

Breaking changes include:

  • Removing or renaming a variable

  • Removing or renaming an output

  • Changing a variable type

  • Changing resource behavior in a non-backward-compatible way


Versioning Strategies

Strategy 1: Git Tags (Simple)

bash
# Tag your module repository
git tag -a v1.2.0 -m "Release v1.2.0"
git push origin v1.2.0

# Consume with version pin
module "vpc" {
  source = "git::https://github.com/your-org/terraform-aws-vpc.git?ref=v1.2.0"
}

Strategy 2: Terraform Registry (Professional)

hcl
module "vpc" {
  source  = "your-org/vpc/aws"
  version = "~> 1.2"  # Allow patch updates
}

Strategy 3: Vendor in Modules (Air-gapped environments)

bash
# Download module and commit to your repository
git submodule add https://github.com/your-org/terraform-aws-vpc.git modules/vpc

Version Constraints

hcl
# Exact version (most conservative)
version = "1.2.0"

# Patch releases only (recommended for production)
version = "~> 1.2.0"  # 1.2.x, not 1.3.0

# Minor releases only
version = "~> 1.2"    # 1.x, not 2.0.0

# Greater than or equal to
version = ">= 1.2.0"

# Range
version = ">= 1.2.0, < 2.0.0"

# Multiple constraints
version = "~> 1.2, != 1.2.5"  # Avoid known bad version

Best practice: Use ~> 1.2.0 to allow patch updates only, or ~> 1.2 to allow minor updates after testing. Never leave the version unpinned.
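The `~>` ("pessimistic") operator trips people up, so here is its rule modeled in a few lines of shell. This is a simplified sketch, not Terraform's actual resolver (which also handles pre-release versions and multi-constraint expressions); the function name is made up:

```shell
# Simplified model of "~>": the rightmost specified component may
# increase, everything to its left is pinned.
pessimistic_match() {
  local constraint="$1" version="$2"
  local c_major c_minor c_patch v_major v_minor v_patch
  IFS=. read -r c_major c_minor c_patch <<< "$constraint"
  IFS=. read -r v_major v_minor v_patch <<< "$version"
  if [ -n "$c_patch" ]; then
    # ~> X.Y.Z: major and minor pinned, patch may grow
    [ "$v_major" = "$c_major" ] && [ "$v_minor" = "$c_minor" ] &&
      [ "$v_patch" -ge "$c_patch" ]
  else
    # ~> X.Y: major pinned, minor may grow
    [ "$v_major" = "$c_major" ] && [ "$v_minor" -ge "$c_minor" ]
  fi
}

pessimistic_match "1.2.0" "1.2.5" && echo "~> 1.2.0 accepts 1.2.5"
pessimistic_match "1.2.0" "1.3.0" || echo "~> 1.2.0 rejects 1.3.0"
pessimistic_match "1.2" "1.9.0" && echo "~> 1.2 accepts 1.9.0"
```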


🧪 Module Testing: Trust But Verify

The Testing Pyramid for Terraform Modules

text
      /\          Integration Tests (few)
     /  \         - Create real infrastructure
    /    \        - Verify functionality
   /      \       - Destroy cleanly
  /--------\
 /  Static  \     Static Analysis (many)
/  Analysis  \    - terraform fmt, validate
--------------    - tflint, tfsec, checkov

Level 1: Static Analysis

bash
#!/bin/bash
# test/module/static.sh

echo "=== Running Static Analysis ==="

cd modules/aws-vpc

# Format check
terraform fmt -check -recursive
if [ $? -ne 0 ]; then
  echo "❌ Terraform files not formatted"
  exit 1
fi
echo "✅ Format check passed"

# Validation
terraform init -backend=false
terraform validate
if [ $? -ne 0 ]; then
  echo "❌ Terraform validation failed"
  exit 1
fi
echo "✅ Validation passed"

# Linting (tflint)
tflint --init
tflint
if [ $? -ne 0 ]; then
  echo "❌ TFLint found issues"
  exit 1
fi
echo "✅ Linting passed"

# Security scanning (tfsec)
tfsec .
if [ $? -ne 0 ]; then
  echo "❌ TFSec found security issues"
  exit 1
fi
echo "✅ Security scan passed"

echo "=== All static checks passed ==="

Level 2: Integration Testing

This is the only way to know if your module actually works.

hcl
# test/integration/main.tf
# This is a complete, independent Terraform configuration
# that tests your module in a real cloud environment

terraform {
  required_version = ">= 1.5"
  
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

# Test the module with minimal configuration
module "vpc_test" {
  source = "../../modules/aws-vpc"
  
  name        = "test-${random_string.suffix.result}"
  environment = "test"
  
  # Use default values for everything else
}

resource "random_string" "suffix" {
  length  = 6
  special = false
  upper   = false
}

# Verify outputs
output "vpc_id" {
  value = module.vpc_test.vpc_id
}

output "subnet_ids" {
  value = module.vpc_test.public_subnet_ids
}

bash
#!/bin/bash
# test/integration/run.sh

echo "=== Running Integration Tests ==="

cd test/integration

# Initialize
terraform init

# Create infrastructure
terraform apply -auto-approve

# Verify resources exist (you'd have more thorough checks here)
VPC_ID=$(terraform output -raw vpc_id)
aws ec2 describe-vpcs --vpc-ids $VPC_ID > /dev/null
if [ $? -eq 0 ]; then
  echo "✅ VPC created successfully"
else
  echo "❌ VPC not found"
  exit 1
fi

# Destroy everything
terraform destroy -auto-approve

echo "=== All integration tests passed ==="

Level 3: Contract Testing

Ensure your module's interface doesn't change unexpectedly:

hcl
# test/contract/verify-interface.tf

# This test ensures all expected variables and outputs exist
# with the correct types and descriptions

locals {
  # Expected variables with their types
  expected_variables = {
    name               = "string"
    environment        = "string"
    vpc_cidr          = "string"
    enable_nat_gateway = "bool"
  }
  
  # Expected outputs with their types
  expected_outputs = {
    vpc_id               = "string"
    vpc_cidr_block       = "string"
    public_subnet_ids    = "list"
    private_subnet_ids   = "list"
    nat_gateway_ips      = "list"
  }
}

# This would be implemented with Terratest or similar
# The pattern: load module, verify interface, report mismatches
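With Terraform 1.6 and later, contract-style checks can also be written in the native test framework as a .tftest.hcl file and run with terraform test. A sketch, using the variable names from the VPC module above:

```hcl
# tests/contract.tftest.hcl — run with: terraform test
variables {
  name        = "contract-check"
  environment = "dev"
}

run "plan_with_defaults" {
  command = plan

  # The module must plan cleanly with only required variables set,
  # and the default CIDR must be well-formed.
  assert {
    condition     = can(cidrhost(var.vpc_cidr, 0))
    error_message = "Default vpc_cidr is not a valid CIDR block."
  }
}
```

Because `command = plan` never creates infrastructure, this level of test is cheap enough to run on every pull request.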

📚 Module Design Patterns

Pattern 1: The Wrapper Module

Purpose: Provide a simplified interface to a complex upstream module.

hcl
# modules/standard-vpc/main.tf

locals {
  # Enforce organizational standards
  vpc_cidr = var.vpc_cidr != null ? var.vpc_cidr : (
    var.environment == "prod" ? "10.0.0.0/16" : "10.1.0.0/16"
  )
  
  subnet_counts = {
    dev     = 2
    staging = 3
    prod    = 3
  }
  
  az_count = local.subnet_counts[var.environment]
}

# Availability zones consumed by slice() below
data "aws_availability_zones" "available" {
  state = "available"
}

# Call the community module with our standards enforced
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"
  
  name = var.name
  cidr = local.vpc_cidr
  
  azs             = slice(data.aws_availability_zones.available.names, 0, local.az_count)
  private_subnets = [for i in range(local.az_count) : cidrsubnet(local.vpc_cidr, 8, i + 10)]
  public_subnets  = [for i in range(local.az_count) : cidrsubnet(local.vpc_cidr, 8, i)]
  
  enable_nat_gateway = var.environment == "prod"
  enable_vpn_gateway = false
  
  tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    Standard    = "company-vpc"
  }
}

When to use: Your organization needs to standardize on a community module but enforce specific configurations.


Pattern 2: The Factory Module

Purpose: Create multiple instances of a resource from declarative configuration.

hcl
# modules/bucket-factory/main.tf

variable "buckets" {
  description = "Map of bucket configurations"
  type = map(object({
    versioning = optional(bool, false)
    encryption = optional(bool, true)
    lifecycle_days = optional(number, 30)
  }))
}

resource "aws_s3_bucket" "this" {
  for_each = var.buckets
  
  bucket = each.key
  tags = {
    Name        = each.key
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

resource "aws_s3_bucket_versioning" "this" {
  for_each = {
    for key, config in var.buckets : key => config
    if config.versioning
  }
  
  bucket = aws_s3_bucket.this[each.key].id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  for_each = {
    for key, config in var.buckets : key => config
    if config.encryption
  }
  
  bucket = aws_s3_bucket.this[each.key].id
  
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "this" {
  for_each = var.buckets
  
  bucket = aws_s3_bucket.this[each.key].id
  
  rule {
    id     = "default-lifecycle"
    status = "Enabled"
    
    expiration {
      days = each.value.lifecycle_days
    }
  }
}

output "bucket_arns" {
  value = {
    for key, bucket in aws_s3_bucket.this : key => bucket.arn
  }
}

Usage:

hcl
module "buckets" {
  source = "./modules/bucket-factory"
  
  environment = var.environment
  buckets = {
    "app-logs-${var.environment}" = {
      versioning = true
      lifecycle_days = 90
    }
    "app-backups-${var.environment}" = {
      encryption = true
      lifecycle_days = 365
    }
    "app-temp-${var.environment}" = {
      versioning = false
      encryption = false
      lifecycle_days = 7
    }
  }
}

When to use: You need to create many similar resources with slightly different configurations.


Pattern 3: The Service Module

Purpose: Deploy a complete, self-contained service.

hcl
# modules/web-application/main.tf

module "vpc" {
  source = "../aws-vpc"
  
  name        = var.name
  environment = var.environment
}

module "database" {
  source = "../aws-rds"
  
  name           = var.name
  environment    = var.environment
  vpc_id         = module.vpc.vpc_id
  subnet_ids     = module.vpc.private_subnet_ids
  instance_class = var.database_instance_class
}

module "compute" {
  source = "../aws-ecs"
  
  name           = var.name
  environment    = var.environment
  vpc_id         = module.vpc.vpc_id
  subnet_ids     = module.vpc.public_subnet_ids
  instance_count = var.instance_count
  instance_type  = var.instance_type
  
  database_host = module.database.endpoint
}

module "cdn" {
  source = "../aws-cloudfront"
  
  name        = var.name
  environment = var.environment
  origin_url  = module.compute.load_balancer_dns
}

output "application_url" {
  value = module.cdn.domain_name
}

output "database_endpoint" {
  value = module.database.endpoint
}

When to use: Your module represents a complete application stack, composed of multiple infrastructure components.


📤 Publishing Modules: Sharing with Your Team

Step 1: Create a Dedicated Repository

text
terraform-aws-vpc/
├── README.md
├── LICENSE
├── main.tf
├── variables.tf
├── outputs.tf
├── versions.tf
├── examples/
│   ├── basic-vpc/
│   │   └── main.tf
│   └── vpc-with-nat/
│       └── main.tf
└── tests/
    └── integration/
        └── main.tf

Naming convention: terraform-<PROVIDER>-<NAME> (e.g., terraform-aws-vpc)


Step 2: Write Excellent Documentation

README.md should include:

markdown
# Terraform AWS VPC Module

A reusable Terraform module for creating VPCs with public and private subnets.

## Features

- Creates VPC with customizable CIDR block
- Creates public and private subnets across availability zones
- Optional NAT gateway for private subnet internet access
- Consistent tagging across all resources

## Usage

```hcl
module "vpc" {
  source = "git::https://github.com/your-org/terraform-aws-vpc.git?ref=v1.0.0"

  name        = "production"
  environment = "prod"
  vpc_cidr    = "10.0.0.0/16"
  public_subnet_cidrs  = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  private_subnet_cidrs = ["10.0.10.0/24", "10.0.20.0/24", "10.0.30.0/24"]
  enable_nat_gateway = true
}
```

## Requirements

| Name | Version |
|-----------|---------|
| terraform | >= 1.5 |
| aws | ~> 5.0 |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| name | Name prefix for all resources | string | n/a | yes |
| environment | Deployment environment | string | n/a | yes |
| vpc_cidr | CIDR block for VPC | string | "10.0.0.0/16" | no |
| enable_nat_gateway | Enable NAT gateway | bool | false | no |

## Outputs

| Name | Description |
|------|-------------|
| vpc_id | ID of the created VPC |
| vpc_cidr_block | CIDR block of the VPC |
| public_subnet_ids | IDs of public subnets |
| private_subnet_ids | IDs of private subnets |


Step 3: Version Your Module

bash
git tag -a v1.0.0 -m "Initial release"
git push origin v1.0.0

git tag -a v1.1.0 -m "Add support for NAT gateway"
git push origin v1.1.0

Step 4: Create a Private Registry (For Large Organizations)

Using Terraform Cloud / Terraform Enterprise:

  1. Create a module repository named terraform-<PROVIDER>-<NAME>

  2. Push a SemVer release tag (for example, v1.0.0)

  3. Terraform Cloud auto-discovers and indexes the module

Using self-hosted registry:

hcl
# Configure a module registry in your Terraform configuration
module "vpc" {
  source = "your-company.com/network/vpc/aws"
  version = "1.2.0"
}

Using GitHub as a registry:

hcl
module "vpc" {
  source = "github.com/your-org/terraform-aws-vpc?ref=v1.2.0"
}

🛠️ Refactoring: Turning Monoliths into Modules

The Migration Strategy

Step 1: Identify module boundaries

text
Current monolithic config (all in one directory):
- VPC + subnets + route tables
- Security groups
- EC2 instances + ASG
- RDS database
- S3 buckets

Step 2: Create module directories

text
modules/
├── vpc/
├── security-groups/
├── compute/
├── database/
└── storage/

Step 3: Extract code (one module at a time)

bash
# 1. Create module structure
mkdir -p modules/vpc
cp variables.tf modules/vpc/
# ... copy VPC-related resources

# 2. Define module interface
# modules/vpc/variables.tf - only what callers need
# modules/vpc/outputs.tf - only what callers need

# 3. Replace with module call in root
module "vpc" {
  source = "./modules/vpc"
  
  name        = var.name
  environment = var.environment
  vpc_cidr    = var.vpc_cidr
}

Step 4: Use terraform state mv to preserve existing resources

bash
# IMPORTANT: Don't destroy and recreate!
# Move resources from root state to module state

terraform state mv aws_vpc.main module.vpc.aws_vpc.this
terraform state mv aws_subnet.public[0] module.vpc.aws_subnet.public[0]
terraform state mv aws_subnet.public[1] module.vpc.aws_subnet.public[1]
# ... continue for all resources

# Verify
terraform state list | grep module.vpc
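On Terraform 1.1 and later, the same renames can instead be recorded declaratively with moved blocks, so every collaborator (and CI) applies them automatically rather than running state mv by hand:

```hcl
# Equivalent of the state mv commands above, checked into the config
moved {
  from = aws_vpc.main
  to   = module.vpc.aws_vpc.this
}

moved {
  from = aws_subnet.public
  to   = module.vpc.aws_subnet.public
}
```

A moved block on a counted resource migrates all of its instances at once, and the blocks can be deleted after every state that references the old addresses has been upgraded.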

Step 5: Test thoroughly

bash
terraform plan
# Should show no changes if migration was successful

Step 6: Repeat for each module


✅ Module Best Practices Checklist

Interface Design

  • Module has a clear, single responsibility

  • Required variables have no defaults

  • Optional variables have sensible defaults

  • Every variable has a description and type

  • Validation blocks enforce business rules

  • Outputs expose minimum necessary information

  • Every output has a description

Implementation

  • Resources use consistent naming (based on input variables)

  • Resources are tagged appropriately

  • Count and for_each used for flexibility

  • Dynamic blocks used for optional nested configurations

  • Local values used for complex expressions

  • No hardcoded provider configurations (unless intentional)

Documentation

  • README.md exists and is comprehensive

  • Examples directory contains working usage examples

  • Version is documented and tagged

  • CHANGELOG.md tracks changes between versions

Testing

  • Static analysis passes (fmt, validate, tflint, tfsec)

  • Integration tests create and destroy real infrastructure

  • Contract tests verify module interface

  • CI pipeline runs tests on pull requests

Versioning

  • Module follows Semantic Versioning

  • Breaking changes trigger major version bump

  • Deprecation policy is documented

  • Consumers pin to specific versions or ~> constraints


🎓 Practice Exercises

Exercise 1: Create a Security Group Module

Task: Create a reusable module for AWS security groups with the following features:

  1. Accepts VPC ID and list of ingress/egress rules

  2. Provides default rules for common scenarios (web, SSH, internal)

  3. Outputs the security group ID

  4. Includes validation and documentation

Solution:

hcl
# modules/aws-security-group/variables.tf
variable "name" {
  description = "Name of the security group"
  type        = string
}

variable "vpc_id" {
  description = "VPC ID to create security group in"
  type        = string
}

variable "description" {
  description = "Description of the security group"
  type        = string
  default     = "Managed by Terraform"
}

variable "ingress_rules" {
  description = "List of ingress rules"
  type = list(object({
    description     = string
    from_port       = number
    to_port         = number
    protocol        = string
    cidr_blocks     = optional(list(string))
    security_groups = optional(list(string))
  }))
  default = []
}

variable "egress_rules" {
  description = "List of egress rules"
  type = list(object({
    description     = string
    from_port       = number
    to_port         = number
    protocol        = string
    cidr_blocks     = optional(list(string))
    security_groups = optional(list(string))
  }))
  default = []
}

variable "tags" {
  description = "Tags to apply to the security group"
  type        = map(string)
  default     = {}
}

# modules/aws-security-group/main.tf
resource "aws_security_group" "this" {
  name        = var.name
  description = var.description
  vpc_id      = var.vpc_id
  
  tags = merge({
    Name = var.name
  }, var.tags)
}

resource "aws_security_group_rule" "ingress" {
  for_each = { for idx, rule in var.ingress_rules : idx => rule }
  
  type              = "ingress"
  security_group_id = aws_security_group.this.id
  
  description = each.value.description
  from_port   = each.value.from_port
  to_port     = each.value.to_port
  protocol    = each.value.protocol
  
  cidr_blocks              = each.value.cidr_blocks
  # try() returns null when security_groups is null or empty
  source_security_group_id = try(each.value.security_groups[0], null)
}

resource "aws_security_group_rule" "egress" {
  for_each = { for idx, rule in var.egress_rules : idx => rule }
  
  type              = "egress"
  security_group_id = aws_security_group.this.id
  
  description = each.value.description
  from_port   = each.value.from_port
  to_port     = each.value.to_port
  protocol    = each.value.protocol
  
  cidr_blocks              = each.value.cidr_blocks
  # try() returns null when security_groups is null or empty
  source_security_group_id = try(each.value.security_groups[0], null)
}

# Common rule sets as locals
locals {
  web_ingress = [
    {
      description = "HTTP"
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      description = "HTTPS"
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
  
  ssh_ingress = [
    {
      description = "SSH"
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = []  # Should be restricted in production!
    }
  ]
  
  all_egress = [
    {
      description = "All outbound"
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
}

# modules/aws-security-group/outputs.tf
output "security_group_id" {
  description = "ID of the created security group"
  value       = aws_security_group.this.id
}

output "security_group_arn" {
  description = "ARN of the created security group"
  value       = aws_security_group.this.arn
}
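
A consumer wires the module and the common rule sets together like this. This is a sketch: it assumes the `web_ingress` and `all_egress` rule sets are defined as locals in the calling configuration (not inside the module, where a consumer could not reference them), and the `web_sg` name and VPC module output are illustrative:

hcl
module "web_sg" {
  source = "./modules/aws-security-group"

  name        = "web"
  description = "Web tier security group"
  vpc_id      = module.vpc.vpc_id

  # Reuse the shared rule sets instead of restating ports everywhere
  ingress_rules = local.web_ingress
  egress_rules  = local.all_egress

  tags = {
    Environment = "production"
  }
}

Because the rule sets are plain data, teams get identical HTTP/HTTPS rules by construction rather than by convention.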

Exercise 2: Refactor a Monolithic Configuration

Task: Given this monolithic configuration, extract the VPC and subnets into a reusable module.

hcl
# Original monolith (simplified)
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "main-vpc"
  }
}

resource "aws_subnet" "public" {
  count = 3
  vpc_id = aws_vpc.main.id
  cidr_block = "10.0.${count.index}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]
  tags = {
    Name = "public-${count.index}"
  }
}

resource "aws_subnet" "private" {
  count = 3
  vpc_id = aws_vpc.main.id
  cidr_block = "10.0.${count.index + 10}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]
  tags = {
    Name = "private-${count.index}"
  }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

resource "aws_route_table_association" "public" {
  count = 3
  subnet_id = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

Solution Steps:

  1. Create module directory structure

  2. Define module variables (name, environment, cidr_block, subnet_counts)

  3. Move VPC, subnet, IGW, route table resources to module

  4. Define outputs (vpc_id, public_subnet_ids, private_subnet_ids)

  5. Replace the original resources with a module call in the root configuration

  6. Use terraform state mv to move each existing resource to its new module address, then confirm terraform plan shows no changes


🚀 Advanced Module Patterns

Pattern: Module Composition with depends_on

hcl
# Sometimes you need explicit dependencies
module "dns" {
  source = "./modules/route53"
  
  domain = var.domain
  records = [
    {
      name = "app"
      type = "A"
      ttl  = 300
      records = [module.load_balancer.dns_name]
    }
  ]
  
  # The reference to module.load_balancer.dns_name already gives Terraform an
  # implicit dependency. Use depends_on only when a module has ordering
  # requirements that no attribute reference captures.
  depends_on = [module.load_balancer]
}

Pattern: Module Iteration with for_each

hcl
# Define multiple similar modules
locals {
  applications = {
    "api" = {
      instance_count = 3
      instance_type  = "t3.medium"
      port          = 8080
    }
    "worker" = {
      instance_count = 2
      instance_type  = "t3.large"
      port          = 5000
    }
    "frontend" = {
      instance_count = 2
      instance_type  = "t3.small"
      port          = 3000
    }
  }
}

module "applications" {
  source = "./modules/web-application"
  
  for_each = local.applications
  
  name            = each.key
  instance_count  = each.value.instance_count
  instance_type   = each.value.instance_type
  app_port        = each.value.port
  environment     = var.environment
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.public_subnet_ids
}

output "application_urls" {
  value = {
    # Avoid naming the iterator "module" — it shadows the module namespace
    for name, app in module.applications : name => app.load_balancer_dns
  }
}
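
Individual instances of a for_each module are addressed by their map key, which is handy when only one application's output is needed elsewhere (the "api" key matches the locals above):

hcl
# Reference a single instance of a for_each module by its map key
output "api_url" {
  value = module.applications["api"].load_balancer_dns
}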

📋 Summary: Modules Are the Professional's Choice

Copy-pasting configurations is what beginners do. Using modules is what professionals do.

                 Without Modules                   With Modules
Reuse            Copy files, search and replace    Single module block
Updates          Find and fix 50 copies            Update module, bump version
Consistency      Drift guaranteed                  Standardized across teams
Complexity       Exposed everywhere                Encapsulated
Testing          Manual, inconsistent              Automated integration tests
Onboarding       "Learn our 15 patterns"           "Learn our 3 modules"

The investment in modules pays for itself the first time you need to make a security update across 20 applications. It pays for itself again when a new team member can be productive in hours instead of weeks.

Start small. Extract the next piece of infrastructure you copy-paste. Version it. Document it. Share it.


🔗 Master Terraform Modules with Hands-on Labs

You now understand the theory of modules. Now practice building, publishing, and consuming them in real scenarios.

👉 Practice module creation, composition, and versioning in our interactive labs at:
https://devops.trainwithsky.com/

Our platform provides:

  • Module design workshops

  • Refactoring real monoliths into modules

  • Integration testing environments

  • Private module registry simulation

  • Team collaboration scenarios


Frequently Asked Questions

Q: When should I create a new module vs. using an existing community module?

A: Use community modules for well-understood infrastructure (VPC, EC2, RDS). Create your own modules when you need to enforce organizational standards, combine multiple resources into a service, or when no good community module exists.

Q: How do I handle breaking changes in modules?

A: Follow semantic versioning: MAJOR version for breaking changes. Provide migration documentation and, when possible, maintain backward compatibility for one major version cycle. Use version constraints to protect consumers.
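
For example, a consumer can pin to a compatible range so a breaking major release is never pulled in automatically (the registry path below is a placeholder, not a real module):

hcl
module "vpc" {
  source  = "app.terraform.io/example-org/vpc/aws"  # placeholder registry path
  version = "~> 2.1"  # accepts 2.1.x and later 2.x releases, never 3.0
}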

Q: Can I have modules within modules?

A: Yes! This is module composition. It's a powerful pattern for building complex infrastructure from simple, reusable components.

Q: How do I test modules that depend on other modules?

A: Integration tests should test the complete composition. Create test configurations that call your module composition exactly as a consumer would.

Q: Should I commit my .terraform directory to version control?

A: No. Never. Add it to .gitignore. Commit your code, not your dependencies.
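
A typical Terraform .gitignore looks like this (lock-file handling varies by team — HashiCorp recommends committing .terraform.lock.hcl, so it is deliberately absent here):

gitignore
.terraform/
*.tfstate
*.tfstate.backup
crash.log
# tfvars files often contain secrets; commit them only if yours do not
*.tfvars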

Q: How do I share modules across many teams?

A: Use a private module registry (Terraform Cloud/Enterprise) or a dedicated Git repository with semantic versioning. Document your modules thoroughly and provide working examples.

Q: What's the difference between a module and a root module?

A: Every Terraform configuration is a module. The configuration you're currently working in is the root module. Modules you call from within it are child modules. The distinction is just perspective.


Ready to start building your first module? Stuck on module design? Share your module challenges in the comments below—our community of Terraform practitioners is here to help! 💬
