How to Create and Use Terraform Modules for Reusable Code
Your complete guide to building, publishing, and consuming Terraform modules—from local components to shared infrastructure libraries used across your entire organization.
📅 Published: Feb 2026
⏱️ Estimated Reading Time: 26 minutes
🏷️ Tags: Terraform Modules, Reusable Infrastructure, Module Composition, Module Registry, Infrastructure as Code
🧩 Introduction: Why Modules Are the Heart of Terraform
The Copy-Paste Trap
Every Terraform team eventually faces this moment. You've successfully built infrastructure for one application. Then a second team needs similar infrastructure. Then a third. Your instinct is to copy the working configuration and modify it slightly.
```bash
cp -r team-a-infrastructure team-b-infrastructure
sed -i 's/app-a/app-b/g' team-b-infrastructure/main.tf
```
This feels efficient—for about 15 minutes. Then you discover:
❌ A security vulnerability in the original configuration. Now you must find and fix it in five, ten, or fifty copies.
❌ A new feature (like encryption) needs to be added to all applications. Each team implements it slightly differently, creating configuration drift.
❌ A team makes a "small tweak" that works for them but breaks a critical assumption other teams relied on.
❌ New team members struggle to understand why there are fifteen nearly-identical-but-slightly-different configurations.
This is the copy-paste trap, and it's killed more infrastructure initiatives than any technical failure.
What Modules Solve
Modules are Terraform's solution to this problem. A module is a container for multiple resources that are used together. It's like a function in programming—you define it once, give it a clear interface, and reuse it everywhere.
```hcl
# Instead of copying 50 lines of configuration...
module "web_app_a" {
  source         = "./modules/web-application"
  name           = "app-a"
  environment    = "production"
  instance_count = 3
}

module "web_app_b" {
  source         = "./modules/web-application"
  name           = "app-b"
  environment    = "production"
  instance_count = 5
}
```
Modules transform infrastructure from copy-pasted scripts into composable, reusable components. They are the difference between "infrastructure as code" and "infrastructure as copy-paste."
What You'll Learn
✅ Module fundamentals — What modules are and when to create them
✅ Module structure — The standard layout every module should follow
✅ Module composition — How to call modules and pass data between them
✅ Module versioning — Publishing, versioning, and consuming modules safely
✅ Module testing — Ensuring modules work correctly before release
✅ Module registry — Sharing modules with your team and the world
✅ Module design patterns — Industry best practices for module interfaces
✅ Refactoring — Converting monolithic configurations into modules
📦 What Is a Module? (The Mental Model)
Modules Are Functions for Infrastructure
If Terraform resources are like statements in a programming language, modules are like functions. They:
Accept input — Through input variables
Perform operations — Create any number of resources
Return output — Through output values
Encapsulate complexity — Hide implementation details
Are reusable — Call the same module many times with different parameters
```
# Function analogy in pseudocode
function create_vpc(name, cidr, subnets):
    vpc = aws_vpc.create(name, cidr)
    for each subnet in subnets:
        aws_subnet.create(vpc.id, subnet)
    return vpc.id, subnet_ids
```
Every Terraform configuration is itself a module. The "root module" is simply the module you're currently working in. Child modules are called from within your configuration.
The Module Contract
Every module has an implicit contract with its callers:
```hcl
module "vpc" {
  source = "./modules/aws-vpc"

  # INPUTS: Variables the module expects
  name            = "main"
  cidr_block      = "10.0.0.0/16"
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.10.0/24", "10.0.20.0/24"]
}

# OUTPUTS: Values the module returns
# vpc_id     = module.vpc.vpc_id
# subnet_ids = module.vpc.subnet_ids
```
A well-designed module:
Has a clear, minimal interface — only the variables it actually needs
Has sensible defaults — optional variables work out of the box
Hides complexity — callers don't need to understand how it works internally
Is composable — outputs can be passed as inputs to other modules
Is versioned — changes are tracked and communicated
📁 Module Structure: The Standard Layout
Every Module Needs These Files
A well-structured module follows a consistent, predictable pattern. This isn't enforced by Terraform, but it's enforced by the most important system of all: the human brain.
modules/
└── your-module-name/
├── main.tf # Primary resource definitions
├── variables.tf # Input variable declarations
├── outputs.tf # Output value declarations
├── versions.tf # Terraform and provider constraints
├── README.md # Documentation (non-negotiable!)
├── examples/ # (Optional) Usage examples
│ └── basic-usage/
│ ├── main.tf
│ └── variables.tf
└── tests/ # (Optional) Integration tests
    └── main.tf

The Three Essential Files
1. variables.tf — The Module's Interface
This file declares everything a caller must (or may) provide:
```hcl
# variables.tf
variable "name" {
  description = "Name prefix for all resources"
  type        = string
}

variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "enable_nat_gateway" {
  description = "Enable NAT gateway for private subnets"
  type        = bool
  default     = false
}
```
Key principles:
Every variable has a description
Every variable has a type (no type = any without reason)
Required variables have no default
Optional variables have sensible defaults
2. main.tf — The Module's Implementation
This file contains the actual resource definitions:
```hcl
# main.tf
resource "aws_vpc" "this" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.name}-vpc"
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

resource "aws_subnet" "public" {
  count = length(var.public_subnet_cidrs)

  vpc_id                  = aws_vpc.this.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name        = "${var.name}-public-${count.index + 1}"
    Environment = var.environment
    Type        = "public"
  }
}

resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id

  tags = {
    Name        = "${var.name}-igw"
    Environment = var.environment
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.this.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this.id
  }

  tags = {
    Name        = "${var.name}-public-rt"
    Environment = var.environment
  }
}

resource "aws_route_table_association" "public" {
  count = length(aws_subnet.public)

  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}
```
Key principles:
Resource names are locally meaningful (this, main, or descriptive names)
Resources use input variables for configuration
Tags include environment and module name for traceability
count and for_each enable flexible resource creation
3. outputs.tf — The Module's Return Values
This file declares what information the module exposes to callers:
```hcl
# outputs.tf
output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.this.id
}

output "vpc_cidr_block" {
  description = "CIDR block of the created VPC"
  value       = aws_vpc.this.cidr_block
}

output "public_subnet_ids" {
  description = "IDs of public subnets"
  value       = aws_subnet.public[*].id
}

output "public_subnet_cidrs" {
  description = "CIDR blocks of public subnets"
  value       = aws_subnet.public[*].cidr_block
}

output "availability_zones" {
  description = "Availability zones used"
  value       = var.availability_zones
}

output "nat_gateway_ips" {
  description = "Elastic IPs of NAT gateways"
  value       = var.enable_nat_gateway ? aws_eip.nat[*].public_ip : []
}
```
Key principles:
Every output has a description
Outputs are minimal — expose only what callers need
Outputs are typed — lists remain lists, objects remain objects
Conditional outputs use sensible empty values (empty list, null, etc.)
🎯 When to Create a Module (And When Not To)
The Module Decision Matrix
| Scenario | Module? | Why |
|---|---|---|
| One-time infrastructure | ❌ No | Just write the resources directly |
| Infrastructure used by one team | 🤔 Maybe | If it's small and stable, maybe not |
| Infrastructure used by multiple teams | ✅ Yes | Consistency and maintenance |
| Infrastructure with complex internal logic | ✅ Yes | Encapsulate complexity |
| Infrastructure that's still evolving rapidly | ⏸️ Wait | Premature abstraction is harmful |
| Infrastructure that's well-understood and stable | ✅ Yes | Perfect module candidate |
Signs You Need a Module
1. You find yourself copying and pasting more than three times
```bash
grep -r "aws_vpc" --include="*.tf" | wc -l
# Returns 47 — you've defined VPCs in 47 places!
```
2. You need to make the same change in multiple places
```
# You just found a security vulnerability in your VPC configuration
# Now you need to update it in 15 different directories
```
3. You're building infrastructure that other teams will consume
```
# Platform team provides a "standard VPC" module
# Application teams call it without reinventing networking
```
4. Your configuration exceeds 200-300 lines of code
```
# main.tf has grown to 800 lines and is hard to navigate
# Time to split into logical modules
```
5. You need to enforce organizational standards
```
# Every VPC must have these tags, these security settings, this naming convention
# A module enforces this automatically
```
Signs You're Module-Crazy (Anti-Patterns)
❌ Creating a module for every single resource
```hcl
# PLEASE don't do this
module "s3_bucket" {
  source = "./modules/aws-s3-bucket"
  # 5-line module that just wraps the resource!
}
```
❌ Modules so specific they're never reused
```hcl
# This module is so tightly coupled to one application
# that no one else can use it
module "team_a_special_snowflake" {
  # ...
}
```
❌ Modules with no clear abstraction boundary
```
# Callers need to know about VPC internals, subnet math, and routing
# The module isn't abstracting anything—it's just moving code
```
❌ Over-parameterization
```hcl
variable "aws_region" {}   # Already set by provider
variable "vpc_id" {}       # If the module creates the VPC, why accept it as input?
variable "subnet_cidrs" {} # Defaults would cover 90% of use cases
```
🏗️ Module Composition: Calling Modules
The Root Module
Every Terraform configuration is a module. The configuration in your current working directory is the root module.
```hcl
# root/main.tf
terraform {
  required_version = ">= 1.5"

  backend "s3" {
    # ...
  }
}

provider "aws" {
  region = var.region
}

# Call child modules
module "vpc" {
  source = "./modules/aws-vpc"

  name        = var.project_name
  environment = var.environment
  vpc_cidr    = var.vpc_cidr
}

module "eks" {
  source = "./modules/aws-eks"

  cluster_name    = "${var.project_name}-${var.environment}"
  subnet_ids      = module.vpc.private_subnet_ids
  node_group_size = var.node_group_size
}
```
Module Sources: Where Modules Live
Local paths — For modules in the same repository:
```hcl
module "vpc" {
  source = "./modules/aws-vpc" # Relative path
  # or, from a parent directory:
  # source = "../shared-modules/aws-vpc"
}
```
Git repositories — For versioned, shared modules:
```hcl
module "vpc" {
  source = "git::https://github.com/your-org/terraform-aws-vpc.git?ref=v1.2.0"
}

module "eks" {
  source = "git::ssh://git@github.com/your-org/terraform-aws-eks.git?ref=v2.1.0"
}
```
Terraform Registry — For public modules:
```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"
}
```
HTTP URLs — For direct access:
```hcl
module "consul" {
  source = "https://example.com/terraform-consul-module.zip"
}
```
Passing Data Between Modules
This is where modules become truly powerful—composing them like building blocks.
```hcl
# 1. VPC module creates networking infrastructure
module "vpc" {
  source = "./modules/aws-vpc"

  name        = var.project_name
  environment = var.environment
  vpc_cidr    = var.vpc_cidr
}

# 2. Security group module uses VPC ID from VPC module
module "security_groups" {
  source = "./modules/aws-security-groups"

  vpc_id      = module.vpc.vpc_id # ← Output from VPC module
  environment = var.environment
}

# 3. EKS module uses subnet IDs from VPC module
module "eks" {
  source = "./modules/aws-eks"

  cluster_name           = "${var.project_name}-${var.environment}"
  subnet_ids             = module.vpc.private_subnet_ids                  # ← Output from VPC module
  vpc_id                 = module.vpc.vpc_id                              # ← Output from VPC module
  node_security_group_id = module.security_groups.node_security_group_id # ← Output from SG module
}

# 4. RDS module uses subnets and security groups from other modules
module "rds" {
  source = "./modules/aws-rds"

  database_name   = "${var.project_name}-db"
  subnet_ids      = module.vpc.private_subnet_ids                         # ← Output from VPC module
  security_groups = [module.security_groups.database_security_group_id]   # ← Output from SG module
}
```
This is infrastructure as code at its best. Each module has a single responsibility. Modules are composed through their outputs. The root configuration reads like a blueprint of your entire infrastructure.
🔖 Module Versioning: The Critical Practice
Why Versioning Matters
You update a module to add a new feature. Suddenly, 27 teams' infrastructure behaves differently. Some teams get the new feature automatically (maybe they want it, maybe they don't). Others get breaking changes without warning.
Versioning solves this. It's the contract between module maintainers and module consumers.
Semantic Versioning for Modules
Follow Semantic Versioning (SemVer): MAJOR.MINOR.PATCH
| Version | When to increment | Example |
|---|---|---|
| MAJOR | Breaking changes | v2.0.0 |
| MINOR | New features, backward compatible | v1.3.0 |
| PATCH | Bug fixes, backward compatible | v1.2.1 |
Breaking changes include:
Removing or renaming a variable
Removing or renaming an output
Changing a variable type
Changing resource behavior in a non-backward-compatible way
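One common way to avoid a MAJOR bump when renaming a variable is to keep the old name as a deprecated alias for a release cycle. A minimal sketch of this pattern (the variable names here are illustrative, not from the module above):

```hcl
# Hypothetical: renaming var.cidr to var.vpc_cidr without breaking callers.
variable "cidr" {
  description = "DEPRECATED: use vpc_cidr instead"
  type        = string
  default     = null
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = null
}

locals {
  # coalesce() returns the first non-null argument,
  # so new callers win, old callers keep working.
  effective_cidr = coalesce(var.vpc_cidr, var.cidr, "10.0.0.0/16")
}
```

Ship this in a MINOR release, document the deprecation, and remove `var.cidr` in the next MAJOR version.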
Versioning Strategies
Strategy 1: Git Tags (Simple)
```bash
# Tag your module repository
git tag -a v1.2.0 -m "Release v1.2.0"
git push origin v1.2.0
```

```hcl
# Consume with version pin
module "vpc" {
  source = "git::https://github.com/your-org/terraform-aws-vpc.git?ref=v1.2.0"
}
```
Strategy 2: Terraform Registry (Professional)
```hcl
module "vpc" {
  source  = "your-org/vpc/aws"
  version = "~> 1.2" # Allow minor and patch updates within 1.x
}
```
Strategy 3: Vendor in Modules (Air-gapped environments)
```bash
# Download the module and commit it to your repository
git submodule add https://github.com/your-org/terraform-aws-vpc.git modules/vpc
```
Version Constraints
```hcl
# Exact version (most conservative)
version = "1.2.0"

# Patch releases only (recommended for production)
version = "~> 1.2.0" # 1.2.x, not 1.3.0

# Minor releases only
version = "~> 1.2" # 1.x, not 2.0.0

# Greater than or equal to
version = ">= 1.2.0"

# Range
version = ">= 1.2.0, < 2.0.0"

# Multiple constraints
version = "~> 1.2, != 1.2.5" # Avoid a known bad version
```
Best practice: Use ~> 1.2.0 for patch updates only, or ~> 1.2 for minor updates after testing. Never consume a shared module without pinning a version.
🧪 Module Testing: Trust But Verify
The Testing Pyramid for Terraform Modules
```
        /\
       /  \      Integration Tests (few)
      /    \     - Create real infrastructure
     /      \    - Verify functionality
    /--------\   - Destroy cleanly
    !  Unit  !
    ! Tests  !   Static Analysis (many)
    !--------!   - terraform fmt, validate
                 - tflint, tfsec, checkov
```
Level 1: Static Analysis
```bash
#!/bin/bash
# test/module/static.sh

echo "=== Running Static Analysis ==="
cd modules/aws-vpc

# Format check
terraform fmt -check -recursive
if [ $? -ne 0 ]; then
  echo "❌ Terraform files not formatted"
  exit 1
fi
echo "✅ Format check passed"

# Validation
terraform init -backend=false
terraform validate
if [ $? -ne 0 ]; then
  echo "❌ Terraform validation failed"
  exit 1
fi
echo "✅ Validation passed"

# Linting (tflint)
tflint --init
tflint
if [ $? -ne 0 ]; then
  echo "❌ TFLint found issues"
  exit 1
fi
echo "✅ Linting passed"

# Security scanning (tfsec)
tfsec .
if [ $? -ne 0 ]; then
  echo "❌ TFSec found security issues"
  exit 1
fi
echo "✅ Security scan passed"

echo "=== All static checks passed ==="
```
Level 2: Integration Testing
This is the only way to know if your module actually works.
```hcl
# test/integration/main.tf
# A complete, independent Terraform configuration that
# tests the module in a real cloud environment

terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    random = {
      source = "hashicorp/random"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

resource "random_string" "suffix" {
  length  = 6
  special = false
  upper   = false
}

# Test the module with minimal configuration
module "vpc_test" {
  source = "../../modules/aws-vpc"

  name        = "test-${random_string.suffix.result}"
  environment = "test"
  # Use default values for everything else
}

# Verify outputs
output "vpc_id" {
  value = module.vpc_test.vpc_id
}

output "subnet_ids" {
  value = module.vpc_test.public_subnet_ids
}
```
```bash
#!/bin/bash
# test/integration/run.sh

echo "=== Running Integration Tests ==="
cd test/integration

# Initialize
terraform init

# Create infrastructure
terraform apply -auto-approve

# Verify resources exist (you'd have more thorough checks here)
VPC_ID=$(terraform output -raw vpc_id)
aws ec2 describe-vpcs --vpc-ids "$VPC_ID" > /dev/null
if [ $? -eq 0 ]; then
  echo "✅ VPC created successfully"
else
  echo "❌ VPC not found"
  exit 1
fi

# Destroy everything
terraform destroy -auto-approve

echo "=== All integration tests passed ==="
```
Level 3: Contract Testing
Ensure your module's interface doesn't change unexpectedly:
```hcl
# test/contract/verify-interface.tf
# This test ensures all expected variables and outputs exist
# with the correct types and descriptions

locals {
  # Expected variables with their types
  expected_variables = {
    name               = "string"
    environment        = "string"
    vpc_cidr           = "string"
    enable_nat_gateway = "bool"
  }

  # Expected outputs with their types
  expected_outputs = {
    vpc_id             = "string"
    vpc_cidr_block     = "string"
    public_subnet_ids  = "list"
    private_subnet_ids = "list"
    nat_gateway_ips    = "list"
  }
}

# This would be implemented with Terratest or similar.
# The pattern: load the module, verify its interface, report mismatches.
```
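Terraform 1.6 and later also ships a native test framework that can cover some of this ground without external tooling. A minimal sketch, assuming the VPC module from earlier in this guide (the file name and assertion values are illustrative):

```hcl
# tests/defaults.tftest.hcl  (requires Terraform >= 1.6)
variables {
  name        = "test"
  environment = "test"
}

run "uses_default_cidr" {
  # `plan` checks the configuration without creating real infrastructure;
  # use `command = apply` for a true integration test.
  command = plan

  assert {
    condition     = aws_vpc.this.cidr_block == "10.0.0.0/16"
    error_message = "VPC should default to 10.0.0.0/16"
  }
}
```

Run the suite from the module directory with `terraform test`.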
📚 Module Design Patterns
Pattern 1: The Wrapper Module
Purpose: Provide a simplified interface to a complex upstream module.
```hcl
# modules/standard-vpc/main.tf
data "aws_availability_zones" "available" {
  state = "available"
}

locals {
  # Enforce organizational standards
  vpc_cidr = var.vpc_cidr != null ? var.vpc_cidr : (
    var.environment == "prod" ? "10.0.0.0/16" : "10.1.0.0/16"
  )

  subnet_counts = {
    dev     = 2
    staging = 3
    prod    = 3
  }

  az_count = local.subnet_counts[var.environment]
}

# Call the community module with our standards enforced
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  name = var.name
  cidr = local.vpc_cidr

  azs             = slice(data.aws_availability_zones.available.names, 0, local.az_count)
  private_subnets = [for i in range(local.az_count) : cidrsubnet(local.vpc_cidr, 8, i + 10)]
  public_subnets  = [for i in range(local.az_count) : cidrsubnet(local.vpc_cidr, 8, i)]

  enable_nat_gateway = var.environment == "prod"
  enable_vpn_gateway = false

  tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    Standard    = "company-vpc"
  }
}
```
When to use: Your organization needs to standardize on a community module but enforce specific configurations.
Pattern 2: The Factory Module
Purpose: Create multiple instances of a resource from declarative configuration.
```hcl
# modules/bucket-factory/main.tf
variable "buckets" {
  description = "Map of bucket configurations"
  type = map(object({
    versioning     = optional(bool, false)
    encryption     = optional(bool, true)
    lifecycle_days = optional(number, 30)
  }))
}

resource "aws_s3_bucket" "this" {
  for_each = var.buckets

  bucket = each.key

  tags = {
    Name        = each.key
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

resource "aws_s3_bucket_versioning" "this" {
  for_each = { for key, config in var.buckets : key => config if config.versioning }

  bucket = aws_s3_bucket.this[each.key].id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  for_each = { for key, config in var.buckets : key => config if config.encryption }

  bucket = aws_s3_bucket.this[each.key].id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "this" {
  for_each = var.buckets

  bucket = aws_s3_bucket.this[each.key].id

  rule {
    id     = "default-lifecycle"
    status = "Enabled"

    expiration {
      days = each.value.lifecycle_days
    }
  }
}

output "bucket_arns" {
  value = { for key, bucket in aws_s3_bucket.this : key => bucket.arn }
}
```
Usage:
```hcl
module "buckets" {
  source = "./modules/bucket-factory"

  environment = var.environment

  buckets = {
    "app-logs-${var.environment}" = {
      versioning     = true
      lifecycle_days = 90
    }
    "app-backups-${var.environment}" = {
      encryption     = true
      lifecycle_days = 365
    }
    "app-temp-${var.environment}" = {
      versioning     = false
      encryption     = false
      lifecycle_days = 7
    }
  }
}
```
When to use: You need to create many similar resources with slightly different configurations.
Pattern 3: The Service Module
Purpose: Deploy a complete, self-contained service.
```hcl
# modules/web-application/main.tf
module "vpc" {
  source = "../aws-vpc"

  name        = var.name
  environment = var.environment
}

module "database" {
  source = "../aws-rds"

  name           = var.name
  environment    = var.environment
  vpc_id         = module.vpc.vpc_id
  subnet_ids     = module.vpc.private_subnet_ids
  instance_class = var.database_instance_class
}

module "compute" {
  source = "../aws-ecs"

  name           = var.name
  environment    = var.environment
  vpc_id         = module.vpc.vpc_id
  subnet_ids     = module.vpc.public_subnet_ids
  instance_count = var.instance_count
  instance_type  = var.instance_type
  database_host  = module.database.endpoint
}

module "cdn" {
  source = "../aws-cloudfront"

  name        = var.name
  environment = var.environment
  origin_url  = module.compute.load_balancer_dns
}

output "application_url" {
  value = module.cdn.domain_name
}

output "database_endpoint" {
  value = module.database.endpoint
}
```
When to use: Your module represents a complete application stack, composed of multiple infrastructure components.
📤 Publishing Modules: Sharing with Your Team
Step 1: Create a Dedicated Repository
terraform-aws-vpc/
├── README.md
├── LICENSE
├── main.tf
├── variables.tf
├── outputs.tf
├── versions.tf
├── examples/
│ ├── basic-vpc/
│ │ └── main.tf
│ └── vpc-with-nat/
│ └── main.tf
└── tests/
└── integration/
        └── main.tf

Naming convention: terraform-<PROVIDER>-<NAME> (e.g., terraform-aws-vpc)
Step 2: Write Excellent Documentation
README.md should include:
````markdown
# Terraform AWS VPC Module

A reusable Terraform module for creating VPCs with public and private subnets.

## Features

- Creates VPC with customizable CIDR block
- Creates public and private subnets across availability zones
- Optional NAT gateway for private subnet internet access
- Consistent tagging across all resources

## Usage

```hcl
module "vpc" {
  source = "git::https://github.com/your-org/terraform-aws-vpc.git?ref=v1.0.0"

  name        = "production"
  environment = "prod"
  vpc_cidr    = "10.0.0.0/16"

  public_subnet_cidrs  = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  private_subnet_cidrs = ["10.0.10.0/24", "10.0.20.0/24", "10.0.30.0/24"]

  enable_nat_gateway = true
}
```
````
Requirements
| Name | Version |
|---|---|
| terraform | >= 1.5 |
| aws | ~> 5.0 |
Inputs
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| name | Name prefix for all resources | string | n/a | yes |
| environment | Deployment environment | string | n/a | yes |
| vpc_cidr | CIDR block for VPC | string | "10.0.0.0/16" | no |
| enable_nat_gateway | Enable NAT gateway | bool | false | no |
Outputs
| Name | Description |
|---|---|
| vpc_id | ID of the created VPC |
| vpc_cidr_block | CIDR block of the VPC |
| public_subnet_ids | IDs of public subnets |
| private_subnet_ids | IDs of private subnets |
Step 3: Version Your Module

```bash
git tag -a v1.0.0 -m "Initial release"
git push origin v1.0.0

git tag -a v1.1.0 -m "Add support for NAT gateway"
git push origin v1.1.0
```
Step 4: Create a Private Registry (For Large Organizations)
Using Terraform Cloud / Terraform Enterprise:
1. Create a module repository with the name terraform-<PROVIDER>-<NAME>
2. Add a release tag to the terraform-<PROVIDER>-<NAME> repository
3. Terraform Cloud auto-discovers and indexes the module
Using self-hosted registry:
```hcl
# Reference a self-hosted module registry in your Terraform configuration
module "vpc" {
  source  = "your-company.com/network/vpc/aws"
  version = "1.2.0"
}
```
Using GitHub as a registry:
```hcl
module "vpc" {
  source = "github.com/your-org/terraform-aws-vpc?ref=v1.2.0"
}
```
🛠️ Refactoring: Turning Monoliths into Modules
The Migration Strategy
Step 1: Identify module boundaries
Current monolithic config (all in one directory):
- VPC + subnets + route tables
- Security groups
- EC2 instances + ASG
- RDS database
- S3 buckets
Step 2: Create module directories
```
modules/
├── vpc/
├── security-groups/
├── compute/
├── database/
└── storage/
```
Step 3: Extract code (one module at a time)
```bash
# 1. Create module structure
mkdir -p modules/vpc
cp variables.tf modules/vpc/
# ... copy VPC-related resources

# 2. Define module interface
# modules/vpc/variables.tf - only what callers need
# modules/vpc/outputs.tf   - only what callers need
```

```hcl
# 3. Replace with module call in root
module "vpc" {
  source = "./modules/vpc"

  name        = var.name
  environment = var.environment
  vpc_cidr    = var.vpc_cidr
}
```
Step 4: Use terraform state mv to preserve existing resources
```bash
# IMPORTANT: Don't destroy and recreate!
# Move resources from root state to module state.
# Quote indexed addresses so the shell doesn't expand the brackets.
terraform state mv aws_vpc.main module.vpc.aws_vpc.this
terraform state mv 'aws_subnet.public[0]' 'module.vpc.aws_subnet.public[0]'
terraform state mv 'aws_subnet.public[1]' 'module.vpc.aws_subnet.public[1]'
# ... continue for all resources

# Verify
terraform state list | grep module.vpc
```
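On Terraform 1.1 or later, the same migration can be declared in code with `moved` blocks instead of running state commands by hand; Terraform performs the moves on the next plan/apply. A sketch for the VPC example above:

```hcl
# moved.tf (Terraform >= 1.1)
# Each block maps an old resource address to its new address inside the module.
moved {
  from = aws_vpc.main
  to   = module.vpc.aws_vpc.this
}

# A single block covers all instances of a counted resource.
moved {
  from = aws_subnet.public
  to   = module.vpc.aws_subnet.public
}
```

`moved` blocks are version-controlled and reviewable, which makes them the safer choice for team refactors; delete them once every workspace has applied the change.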
Step 5: Test thoroughly
```bash
terraform plan
# Should show no changes if migration was successful
```

Step 6: Repeat for each module
✅ Module Best Practices Checklist
Interface Design
Module has a clear, single responsibility
Required variables have no defaults
Optional variables have sensible defaults
Every variable has a description and type
Validation blocks enforce business rules
Outputs expose minimum necessary information
Every output has a description
Implementation
Resources use consistent naming (based on input variables)
Resources are tagged appropriately
Count and for_each used for flexibility
Dynamic blocks used for optional nested configurations
Local values used for complex expressions
No hardcoded provider configurations (unless intentional)
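The checklist above mentions dynamic blocks for optional nested configurations, which this guide has not yet shown. A minimal sketch (the variable and resource names are illustrative):

```hcl
variable "ingress_rules" {
  description = "Optional ingress rules"
  type = list(object({
    from_port   = number
    to_port     = number
    cidr_blocks = list(string)
  }))
  default = []
}

resource "aws_security_group" "example" {
  name   = "example"
  vpc_id = var.vpc_id

  # One ingress block is generated per list element;
  # none at all if the caller leaves the default empty list.
  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = "tcp"
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}
```

This keeps the nested block out of the plan entirely when the caller doesn't need it, instead of forcing a placeholder value.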
Documentation
README.md exists and is comprehensive
Examples directory contains working usage examples
Version is documented and tagged
CHANGELOG.md tracks changes between versions
Testing
Static analysis passes (fmt, validate, tflint, tfsec)
Integration tests create and destroy real infrastructure
Contract tests verify module interface
CI pipeline runs tests on pull requests
Versioning
Module follows Semantic Versioning
Breaking changes trigger major version bump
Deprecation policy is documented
Consumers pin to specific versions or ~> constraints
🎓 Practice Exercises
Exercise 1: Create a Security Group Module
Task: Create a reusable module for AWS security groups with the following features:
Accepts VPC ID and list of ingress/egress rules
Provides default rules for common scenarios (web, SSH, internal)
Outputs the security group ID
Includes validation and documentation
Solution:
```hcl
# modules/aws-security-group/variables.tf
variable "name" {
  description = "Name of the security group"
  type        = string
}

variable "vpc_id" {
  description = "VPC ID to create the security group in"
  type        = string
}

variable "description" {
  description = "Description of the security group"
  type        = string
  default     = "Managed by Terraform"
}

variable "ingress_rules" {
  description = "List of ingress rules"
  type = list(object({
    description     = string
    from_port       = number
    to_port         = number
    protocol        = string
    cidr_blocks     = optional(list(string))
    security_groups = optional(list(string))
  }))
  default = []
}

variable "egress_rules" {
  description = "List of egress rules"
  type = list(object({
    description     = string
    from_port       = number
    to_port         = number
    protocol        = string
    cidr_blocks     = optional(list(string))
    security_groups = optional(list(string))
  }))
  default = []
}

variable "tags" {
  description = "Tags to apply to the security group"
  type        = map(string)
  default     = {}
}
```

```hcl
# modules/aws-security-group/main.tf
resource "aws_security_group" "this" {
  name        = var.name
  description = var.description
  vpc_id      = var.vpc_id

  tags = merge({ Name = var.name }, var.tags)
}

resource "aws_security_group_rule" "ingress" {
  for_each = { for idx, rule in var.ingress_rules : idx => rule }

  type              = "ingress"
  security_group_id = aws_security_group.this.id

  description = each.value.description
  from_port   = each.value.from_port
  to_port     = each.value.to_port
  protocol    = each.value.protocol
  cidr_blocks = each.value.cidr_blocks

  # Guard against a null list before calling length()
  source_security_group_id = each.value.security_groups != null && length(each.value.security_groups) > 0 ? each.value.security_groups[0] : null
}

resource "aws_security_group_rule" "egress" {
  for_each = { for idx, rule in var.egress_rules : idx => rule }

  type              = "egress"
  security_group_id = aws_security_group.this.id

  description = each.value.description
  from_port   = each.value.from_port
  to_port     = each.value.to_port
  protocol    = each.value.protocol
  cidr_blocks = each.value.cidr_blocks

  source_security_group_id = each.value.security_groups != null && length(each.value.security_groups) > 0 ? each.value.security_groups[0] : null
}

# Common rule sets as locals
locals {
  web_ingress = [
    {
      description = "HTTP"
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      description = "HTTPS"
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]

  ssh_ingress = [
    {
      description = "SSH"
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = [] # Should be restricted in production!
    }
  ]

  all_egress = [
    {
      description = "All outbound"
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
}
```

```hcl
# modules/aws-security-group/outputs.tf
output "security_group_id" {
  description = "ID of the created security group"
  value       = aws_security_group.this.id
}

output "security_group_arn" {
  description = "ARN of the created security group"
  value       = aws_security_group.this.arn
}
```
Exercise 2: Refactor a Monolithic Configuration
Task: Given this monolithic configuration, extract the VPC and subnets into a reusable module.
```hcl
# Original monolith (simplified)
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "main-vpc" }
}

resource "aws_subnet" "public" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.${count.index}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]
  tags              = { Name = "public-${count.index}" }
}

resource "aws_subnet" "private" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.${count.index + 10}.0/24"
  availability_zone = data.aws_availability_zones.available.names[count.index]
  tags              = { Name = "private-${count.index}" }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

resource "aws_route_table_association" "public" {
  count          = 3
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}
```
Solution Steps:
Create module directory structure
Define module variables (name, environment, cidr_block, subnet_counts)
Move VPC, subnet, IGW, route table resources to module
Define outputs (vpc_id, public_subnet_ids, private_subnet_ids)
Use terraform state mv to migrate existing resources
Replace the extracted resources with a module call in the root configuration
🚀 Advanced Module Patterns
Pattern: Module Composition with depends_on
```hcl
# Sometimes you need explicit dependencies
module "dns" {
  source = "./modules/route53"

  domain = var.domain
  records = [
    {
      name    = "app"
      type    = "A"
      ttl     = 300
      records = [module.load_balancer.dns_name]
    }
  ]

  # Load balancer must exist before we can create the DNS record
  depends_on = [module.load_balancer]
}
```
Pattern: Module Iteration with for_each
```hcl
# Define multiple similar modules
locals {
  applications = {
    "api" = {
      instance_count = 3
      instance_type  = "t3.medium"
      port           = 8080
    }
    "worker" = {
      instance_count = 2
      instance_type  = "t3.large"
      port           = 5000
    }
    "frontend" = {
      instance_count = 2
      instance_type  = "t3.small"
      port           = 3000
    }
  }
}

module "applications" {
  source   = "./modules/web-application"
  for_each = local.applications

  name           = each.key
  instance_count = each.value.instance_count
  instance_type  = each.value.instance_type
  app_port       = each.value.port

  environment = var.environment
  vpc_id      = module.vpc.vpc_id
  subnet_ids  = module.vpc.public_subnet_ids
}

output "application_urls" {
  value = { for name, app in module.applications : name => app.load_balancer_dns }
}
```
📋 Summary: Modules Are the Professional's Choice
Copy-pasting configurations is what beginners do. Using modules is what professionals do.
| | Without Modules | With Modules |
|---|---|---|
| Reuse | Copy files, search and replace | Single module block |
| Updates | Find and fix 50 copies | Update module, bump version |
| Consistency | Drift guaranteed | Standardized across teams |
| Complexity | Exposed everywhere | Encapsulated |
| Testing | Manual, inconsistent | Automated integration tests |
| Onboarding | "Learn our 15 patterns" | "Learn our 3 modules" |
The investment in modules pays for itself the first time you need to make a security update across 20 applications. It pays for itself again when a new team member can be productive in hours instead of weeks.
Start small. Extract the next piece of infrastructure you copy-paste. Version it. Document it. Share it.
🔗 Master Terraform Modules with Hands-on Labs
You now understand the theory of modules. Now practice building, publishing, and consuming them in real scenarios.
👉 Practice module creation, composition, and versioning in our interactive labs at:
https://devops.trainwithsky.com/
Our platform provides:
Module design workshops
Refactoring real monoliths into modules
Integration testing environments
Private module registry simulation
Team collaboration scenarios
Frequently Asked Questions
Q: When should I create a new module vs. using an existing community module?
A: Use community modules for well-understood infrastructure (VPC, EC2, RDS). Create your own modules when you need to enforce organizational standards, combine multiple resources into a service, or when no good community module exists.
Q: How do I handle breaking changes in modules?
A: Follow semantic versioning: MAJOR version for breaking changes. Provide migration documentation and, when possible, maintain backward compatibility for one major version cycle. Use version constraints to protect consumers.
Q: Can I have modules within modules?
A: Yes! This is module composition. It's a powerful pattern for building complex infrastructure from simple, reusable components.
Q: How do I test modules that depend on other modules?
A: Integration tests should test the complete composition. Create test configurations that call your module composition exactly as a consumer would.
Q: Should I commit my .terraform directory to version control?
A: No. Never. Add it to .gitignore. Commit your code, not your dependencies.
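A minimal .gitignore along these lines is common practice (the exact entries beyond .terraform/ are conventions, not requirements, so adjust to taste):

```
# .gitignore for a Terraform repository
.terraform/          # provider binaries and downloaded module cache
*.tfstate            # never commit state — it can contain secrets
*.tfstate.backup
*.tfvars             # often holds environment-specific values and secrets
crash.log
```

Note that .terraform.lock.hcl is the exception: commit it, since it pins provider versions for the whole team.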
Q: How do I share modules across many teams?
A: Use a private module registry (Terraform Cloud/Enterprise) or a dedicated Git repository with semantic versioning. Document your modules thoroughly and provide working examples.
Q: What's the difference between a module and a root module?
A: Every Terraform configuration is a module. The configuration you're currently working in is the root module. Modules you call from within it are child modules. The distinction is just perspective.
Ready to start building your first module? Stuck on module design? Share your module challenges in the comments below—our community of Terraform practitioners is here to help! 💬