A Guide to Terraform Variables, Outputs, and Best Practices
Your complete handbook for making Terraform configurations dynamic, reusable, and production-ready—from simple input variables to complex data structures.
📅 Published: Feb 2026
⏱️ Estimated Reading Time: 24 minutes
🏷️ Tags: Terraform Variables, Outputs, Data Types, Variable Validation, Terraform Best Practices
🎯 Introduction: From Hard-Coded to Dynamic Configurations
The Problem with Hard-Coded Values
Every beginner starts here:
```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # Hard-coded!
  instance_type = "t2.micro"              # Hard-coded!
  subnet_id     = "subnet-12345678"       # Hard-coded!
}
```
This works—once. Then you need to:
Deploy to a different region (different AMI ID)
Use a larger instance type for production
Share the configuration with a teammate
Create multiple environments (dev, staging, prod)
Suddenly, your "simple" configuration becomes a maintenance nightmare of copy-pasted files and manual edits.
Variables and outputs are the solution. They transform rigid, environment-specific configurations into flexible, reusable templates that work anywhere, for any environment, with any team.
What You'll Learn
By the end of this guide, you will understand:
✅ Input Variables — How to make your configurations parameterized and reusable
✅ Data Types — Strings, numbers, bools, lists, maps, objects, tuples
✅ Variable Validation — Ensuring users provide valid values
✅ Sensitive Variables — Handling secrets securely
✅ Output Values — Exposing information to users and other configurations
✅ Local Values — Creating intermediate calculations and clean abstractions
✅ Best Practices — Naming, organization, and team workflows
📥 Input Variables: The Parameters of Your Infrastructure
What is an Input Variable?
An input variable is a parameter to your Terraform configuration. It's how users pass information into your module without editing the source code.
```hcl
# Declaration (what the variable is)
variable "instance_type" {
  description = "EC2 instance type for web servers"
  type        = string
  default     = "t2.micro"
}

# Usage (how to use it)
resource "aws_instance" "web" {
  instance_type = var.instance_type # ← Reference with var.NAME
}
```
Think of variables like form fields: You define them once, and users fill them in each time they run Terraform.
Variable Declaration: The Anatomy
Every variable declaration has the same structure:
```hcl
variable "name" {      # ← Required: Variable name (identifier)
  description = "..."  # ← Optional: Explain what this is for
  type        = ...    # ← Optional: Restrict allowed values
  default     = ...    # ← Optional: Fallback if not provided
  validation { ... }   # ← Optional: Custom validation rules
  sensitive   = true   # ← Optional: Hide from output
  nullable    = false  # ← Optional: Disallow null values
}
```
The only truly required part is the variable name. Everything else is optional—but in production code, you should always include at least description and type.
Variable Naming Conventions
Good variable names are obvious and self-documenting:
```hcl
# ✅ Good - Clear purpose
variable "instance_type" {}
variable "vpc_cidr_block" {}
variable "enable_dns_hostnames" {}

# ❌ Bad - Vague or meaningless
variable "type" {} # Type of what?
variable "val" {}  # Which value?
variable "foo" {}  # Seriously?
```
Naming best practices:
- Use `snake_case` (lowercase with underscores)
- Be specific but concise
- Include units in the name when relevant (`timeout_seconds`, `size_gb`)
- Boolean variables should start with `enable_`, `create_`, or `use_`
🔢 Data Types: The Shape of Your Variables
The Seven Core Data Types
Type 1: String — Text values
```hcl
variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
  default     = "dev"
}

# Usage
resource "aws_s3_bucket" "data" {
  bucket = "myapp-${var.environment}-data"
}
```
String validation:
```hcl
variable "environment" {
  type = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}
```
Type 2: Number — Numeric values
```hcl
variable "instance_count" {
  description = "Number of EC2 instances to launch"
  type        = number
  default     = 1
}

variable "disk_size_gb" {
  description = "Root volume size in gigabytes"
  type        = number
  default     = 20

  validation {
    condition     = var.disk_size_gb >= 10 && var.disk_size_gb <= 1000
    error_message = "Disk size must be between 10 GB and 1000 GB."
  }
}

# Usage
resource "aws_instance" "web" {
  count = var.instance_count

  root_block_device {
    volume_size = var.disk_size_gb
  }
}
```
Type 3: Bool — True/false values
```hcl
variable "enable_versioning" {
  description = "Enable S3 bucket versioning"
  type        = bool
  default     = false
}

# Usage
resource "aws_s3_bucket_versioning" "this" {
  count  = var.enable_versioning ? 1 : 0
  bucket = aws_s3_bucket.data.id

  versioning_configuration {
    status = "Enabled"
  }
}
```
Boolean naming convention: Use prefixes like enable_, create_, use_, has_.
Type 4: List — Ordered sequence of values (same type)
```hcl
variable "availability_zones" {
  description = "List of availability zones to deploy into"
  type        = list(string)
  default     = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

variable "subnet_cidrs" {
  description = "CIDR blocks for private subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

# Usage
resource "aws_subnet" "private" {
  count             = length(var.subnet_cidrs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.subnet_cidrs[count.index]
  availability_zone = var.availability_zones[count.index]
}
```
List operations:
```hcl
# Length of list
length(var.availability_zones) # 3

# Access element (0-indexed)
var.subnet_cidrs[0] # "10.0.1.0/24"

# First and last
var.subnet_cidrs[0]                            # First
var.subnet_cidrs[length(var.subnet_cidrs) - 1] # Last

# Slice (subset)
slice(var.subnet_cidrs, 0, 2) # First two elements
```
Type 5: Map — Key-value pairs (all values same type)
```hcl
variable "instance_tags" {
  description = "Tags to apply to all EC2 instances"
  type        = map(string)
  default = {
    Environment = "dev"
    ManagedBy   = "Terraform"
    Project     = "WebApp"
  }
}

variable "instance_types_by_env" {
  description = "Instance type for each environment"
  type        = map(string)
  default = {
    dev     = "t2.micro"
    staging = "t3.small"
    prod    = "t3.large"
  }
}

# Usage
resource "aws_instance" "web" {
  instance_type = var.instance_types_by_env[var.environment]
  tags          = var.instance_tags
}
```
Map operations:
```hcl
# Access value by key
var.instance_types_by_env["prod"] # "t3.large"

# Keys and values
keys(var.instance_tags)   # ["Environment", "ManagedBy", "Project"]
values(var.instance_tags) # ["dev", "Terraform", "WebApp"]

# Lookup with default
lookup(var.instance_types_by_env, "dr", "t2.micro") # "t2.micro" (key not found)
```
Type 6: Object — Complex structures with named attributes (different types allowed)
```hcl
variable "database_config" {
  description = "RDS database configuration"
  type = object({
    engine                = string
    engine_version        = string
    instance_class        = string
    storage_gb            = number
    multi_az              = bool
    backup_retention_days = number
    subnet_ids            = list(string)
  })
  default = {
    engine                = "postgres"
    engine_version        = "14.7"
    instance_class        = "db.t3.micro"
    storage_gb            = 100
    multi_az              = false
    backup_retention_days = 7
    subnet_ids            = []
  }
}

# Usage
resource "aws_db_instance" "main" {
  engine                  = var.database_config.engine
  engine_version          = var.database_config.engine_version
  instance_class          = var.database_config.instance_class
  allocated_storage       = var.database_config.storage_gb
  multi_az                = var.database_config.multi_az
  backup_retention_period = var.database_config.backup_retention_days
  db_subnet_group_name    = aws_db_subnet_group.main.name
}
```
Objects vs. Maps:
- Maps have arbitrary keys, and all values share one type → `map(string)`
- Objects have predefined keys, and values can have different types → `object({...})`
Use objects when the structure is fixed and known in advance. Use maps when keys are dynamic or user-defined.
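To make the distinction concrete, here is a small sketch (the variable names are illustrative, not from this guide): a map for user-defined tags, where callers choose the keys, and an object for a fixed configuration with mixed attribute types.

```hcl
# Map: callers may pass any tag keys, but every value must be a string.
variable "extra_tags" {
  description = "Arbitrary extra tags (keys are user-defined)"
  type        = map(string)
  default     = {}
}

# Object: the shape is fixed in advance and attribute types differ.
variable "alerting" {
  description = "Alerting configuration (fixed structure)"
  type = object({
    enabled         = bool
    email           = string
    threshold_count = number
  })
  default = {
    enabled         = false
    email           = ""
    threshold_count = 5
  }
}
```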
Type 7: Tuple — Fixed-length list with specific types per position
```hcl
variable "cidr_configuration" {
  description = "CIDR blocks for VPC: [vpc_cidr, public_subnet_cidr, private_subnet_cidr]"
  type        = tuple([string, string, string])
  default     = ["10.0.0.0/16", "10.0.1.0/24", "10.0.2.0/24"]
}

# Usage
resource "aws_vpc" "main" {
  cidr_block = var.cidr_configuration[0]
}

resource "aws_subnet" "public" {
  cidr_block = var.cidr_configuration[1]
}

resource "aws_subnet" "private" {
  cidr_block = var.cidr_configuration[2]
}
```
Tuples are rare in practice. Objects are usually clearer because the attributes have names. Use tuples only when the position is semantically meaningful and well-documented.
Optional vs. Required Variables
Required variable (no default):
```hcl
variable "vpc_id" {
  description = "ID of existing VPC to deploy into"
  type        = string
  # No default → users MUST provide this value
}
```
Optional variable (with default):
```hcl
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro" # Optional - will use default if not provided
}
```
Conditionally optional (nullable = true):
```hcl
variable "kms_key_id" {
  description = "KMS key ID for encryption (null = use AWS managed key)"
  type        = string
  default     = null # Explicitly no value
  nullable    = true # Allow null (default is true)
}

# Usage
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = var.kms_key_id != null ? "aws:kms" : "AES256"
        kms_master_key_id = var.kms_key_id
      }
    }
  }
}
```
Design principle: Make variables required by default, optional only when there's a sensible default that works for all use cases.
✅ Variable Validation: Trust But Verify
Why Validate Variables?
Users make mistakes. They might:
- Typo an environment name (`prd` instead of `prod`)
- Enter an invalid CIDR block (`10.0.300.0/24`)
- Specify an unsupported instance type (`t9.nano`)
- Forget to include required tags
Validation catches these errors early—during plan, not after apply when resources are already created.
Basic Validation Rules
```hcl
variable "environment" {
  description = "Deployment environment"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be 'dev', 'staging', or 'prod'."
  }
}

variable "vpc_cidr_block" {
  description = "CIDR block for VPC"
  type        = string

  validation {
    condition     = can(cidrnetmask(var.vpc_cidr_block)) # Valid CIDR?
    error_message = "VPC CIDR block must be a valid IPv4 CIDR range."
  }
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string

  validation {
    condition     = can(regex("^t[23]\\.(micro|small|medium|large)$", var.instance_type))
    error_message = "Instance type must be a t2 or t3 family instance."
  }
}
```
Advanced Validation Patterns
Validate list contents:
```hcl
variable "allowed_ports" {
  description = "Ports to allow in security group"
  type        = list(number)

  validation {
    condition = alltrue([
      for port in var.allowed_ports : port > 0 && port < 65536
    ])
    error_message = "All ports must be between 1 and 65535."
  }

  validation {
    condition     = length(var.allowed_ports) == length(distinct(var.allowed_ports))
    error_message = "Ports must be unique (no duplicates)."
  }
}
```
Validate map contents:
```hcl
variable "instance_types_by_env" {
  description = "Instance type per environment"
  type        = map(string)

  validation {
    condition = alltrue([
      for env, type in var.instance_types_by_env :
      contains(["dev", "staging", "prod"], env)
    ])
    error_message = "Environment keys must be dev, staging, or prod."
  }

  validation {
    condition = alltrue([
      for env, type in var.instance_types_by_env :
      can(regex("^t[23]\\.", type))
    ])
    error_message = "All instance types must be from the t family."
  }
}
```
Validate object attributes:
```hcl
variable "database_config" {
  description = "Database configuration"
  type = object({
    engine         = string
    engine_version = string
    storage_gb     = number
    backup_days    = number
  })

  validation {
    condition     = contains(["postgres", "mysql", "aurora"], var.database_config.engine)
    error_message = "Engine must be postgres, mysql, or aurora."
  }

  validation {
    condition     = var.database_config.storage_gb >= 20 && var.database_config.storage_gb <= 65536
    error_message = "Storage must be between 20 GB and 65536 GB."
  }

  validation {
    condition     = var.database_config.backup_days >= 0 && var.database_config.backup_days <= 35
    error_message = "Backup retention must be between 0 and 35 days."
  }
}
```
Cross-field validation:
```hcl
# Note: referencing other variables (like var.availability_zones) inside a
# validation block requires Terraform 1.9 or later.
variable "vpc_config" {
  type = object({
    cidr_block      = string
    public_subnets  = list(string)
    private_subnets = list(string)
  })

  validation {
    # Every subnet must at least be a well-formed CIDR block. (A true
    # "contained within the VPC CIDR" check is harder to express in HCL;
    # this simplified check rejects malformed input early.)
    condition = alltrue([
      for subnet in concat(var.vpc_config.public_subnets, var.vpc_config.private_subnets) :
      can(cidrnetmask(subnet))
    ])
    error_message = "All subnets must be valid IPv4 CIDR blocks."
  }

  validation {
    # Number of public subnets must match number of AZs
    # (a cross-variable reference: Terraform 1.9+)
    condition     = length(var.vpc_config.public_subnets) == length(var.availability_zones)
    error_message = "Number of public subnets must match number of availability zones."
  }
}
```
Validation Best Practices
1. Validate early, validate often
Check values as soon as they're received
Fail fast with clear error messages
Don't rely on providers to validate (their errors surface later, often only during apply)
2. Write user-friendly error messages
Tell the user what went wrong
Tell the user what is allowed
Don't use jargon or internal variable names
✅ Good:
```
Error: Invalid environment value

Environment must be 'dev', 'staging', or 'prod'.
```
❌ Bad:
```
Error: Invalid value for var.environment

Condition failed: contains(["dev","staging","prod"], var.environment)
```
3. Use helper functions
- `can()` — Test if an expression would succeed
- `contains()` — Check if a value is in a list
- `length()` — Validate collection size
- `alltrue()` / `anytrue()` — Aggregate list conditions
- `regex()` — Pattern matching for strings
- `cidrnetmask()` — Validate CIDR blocks
4. Don't over-validate
Let providers validate things they're already good at
Focus on business logic and constraints
Avoid validation that duplicates provider validation
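As an illustrative sketch (the variable and prefix are hypothetical), compare a validation that merely duplicates what the provider will enforce anyway with one that encodes a rule only your organization knows about:

```hcl
variable "bucket_name" {
  type = string

  # ❌ Over-validation: the S3 API already rejects invalid bucket names,
  # and replicating the full naming rules here is error-prone.
  # validation {
  #   condition     = can(regex("^[a-z0-9.-]{3,63}$", var.bucket_name))
  #   error_message = "Invalid S3 bucket name."
  # }

  # ✅ Business rule: a required team prefix that no provider
  # will ever check for you.
  validation {
    condition     = can(regex("^platform-", var.bucket_name))
    error_message = "Bucket names must start with the 'platform-' team prefix."
  }
}
```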
🤫 Sensitive Variables: Handling Secrets
The Problem with Secrets in Terraform
Terraform state is plain text JSON. Any value you pass to Terraform—even through variables—can end up in the state file in plain text.
```hcl
# ❌ DANGEROUS - Secret in state file!
variable "database_password" {
  description = "RDS master password"
  type        = string
  sensitive   = true # Hides from CLI output, BUT STILL IN STATE!
}

resource "aws_db_instance" "main" {
  password = var.database_password
}
```
The sensitive = true flag only affects CLI output. It does NOT encrypt the value in state. Anyone with access to the state file can read the password in plain text.
The Solution: Never Store Secrets in State
Pattern 1: Use secrets manager and pass ARN, not value
```hcl
# ✅ GOOD - Pass secret ARN, not secret value
variable "db_password_secret_arn" {
  description = "ARN of secret in AWS Secrets Manager containing database password"
  type        = string
}

data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = var.db_password_secret_arn
}

resource "aws_db_instance" "main" {
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
```
Pattern 2: Use environment variables (not for state, but for applying)
```hcl
# variables.tf
variable "database_password" {
  description = "RDS master password"
  type        = string
  sensitive   = true
}
```
```shell
# Don't set it in terraform.tfvars.
# Set it as an environment variable instead:
export TF_VAR_database_password="my-secure-password"
```
Pattern 3: External secret management tools
- Vault — `data "vault_generic_secret" "db_password" {...}`
- AWS Secrets Manager — `data "aws_secretsmanager_secret_version" {...}`
- Azure Key Vault — `data "azurerm_key_vault_secret" {...}`
- Google Secret Manager — `data "google_secret_manager_secret_version" {...}`
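As a hedged sketch of the Vault pattern from the list above (the secret path and key are hypothetical), the data source reads the secret at plan/apply time, so the plaintext never lives in your variables files:

```hcl
# Read a secret from HashiCorp Vault at plan/apply time.
# The path "secret/webapp/db" and key "password" are illustrative.
data "vault_generic_secret" "db_password" {
  path = "secret/webapp/db"
}

resource "aws_db_instance" "main" {
  # Caveat: the resolved value is still written to Terraform state,
  # so the state backend itself must be encrypted and access-controlled.
  password = data.vault_generic_secret.db_password.data["password"]
}
```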
Sensitive Variable Best Practices
```hcl
# 1. ALWAYS mark secret variables as sensitive
variable "api_key" {
  description = "API key for external service"
  type        = string
  sensitive   = true
}

# 2. Never set defaults for secrets
# (shown separately - a variable name can only be declared once per module)
variable "api_key" {
  description = "API key for external service"
  type        = string
  # NO DEFAULT!
}

# 3. Never hardcode secrets in .tfvars files committed to Git
# ❌ terraform.tfvars:
# api_key = "sk_live_12345"

# ✅ terraform.tfvars.example (safe to commit):
# api_key = "sk_live_..."
```
Golden rule: If it's a secret, it should never appear in any file that is committed to version control—including state files. Use a secrets manager.
📤 Output Values: Exposing Information
What is an Output Value?
An output value is information that Terraform exposes to users after apply. It's how your configuration communicates important details back to the person running it—or to other configurations.
```hcl
output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "IDs of public subnets"
  value       = aws_subnet.public[*].id
}
```
After terraform apply, outputs are displayed:
```
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

vpc_id = "vpc-0a1b2c3d4e5f67890"
public_subnet_ids = [
  "subnet-12345678",
  "subnet-23456789",
  "subnet-34567890",
]
```
Output Declaration: The Anatomy
```hcl
output "name" {          # ← Required: Output name (identifier)
  description = "..."    # ← Optional: Explain what this is
  value       = ...      # ← Required: Expression to evaluate
  sensitive   = true     # ← Optional: Hide from CLI display
  depends_on  = []       # ← Optional: Explicit dependencies
}
```
Outputs are always computed after all resources are created. They can reference:
- Resource attributes (`aws_instance.web.id`)
- Data source attributes (`data.aws_ami.ubuntu.id`)
- Module outputs (`module.vpc.vpc_id`)
- Literal values (`"constant"`)
- Complex expressions (`[for i in aws_subnet.public : i.id]`)
Common Output Patterns
1. Expose resource identifiers
```hcl
output "bucket_arn" {
  description = "ARN of created S3 bucket"
  value       = aws_s3_bucket.data.arn
}

output "instance_public_ips" {
  description = "Public IP addresses of web instances"
  value       = aws_instance.web[*].public_ip
}
```
2. Expose connection information
```hcl
output "database_endpoint" {
  description = "Connection endpoint for RDS cluster"
  value = {
    address  = aws_rds_cluster.main.endpoint
    port     = aws_rds_cluster.main.port
    database = aws_rds_cluster.main.database_name
  }
}

output "database_connection_string" {
  description = "JDBC connection string"
  value       = "jdbc:postgresql://${aws_rds_cluster.main.endpoint}/${aws_rds_cluster.main.database_name}"
  sensitive   = true # Contains hostname and port
}
```
3. Expose URLs and endpoints
```hcl
output "website_url" {
  description = "URL of S3 website"
  value       = aws_s3_bucket_website_configuration.main.website_endpoint
}

output "load_balancer_dns" {
  description = "DNS name of application load balancer"
  value       = aws_lb.main.dns_name
}
```
4. Expose summary information
```hcl
output "vpc_summary" {
  description = "Summary of VPC configuration"
  value = {
    id                   = aws_vpc.main.id
    cidr_block           = aws_vpc.main.cidr_block
    public_subnet_count  = length(aws_subnet.public)
    private_subnet_count = length(aws_subnet.private)
  }
}
```
Sensitive Outputs
```hcl
output "database_password" {
  description = "Master password for RDS instance"
  value       = random_password.db.result
  sensitive   = true # Hidden from CLI output
}
```
When sensitive = true:
- The output is hidden in `terraform apply` and `terraform output`
- The output is still stored in state in plain text
- Other configurations can still read it via `terraform_remote_state`
- You can still force-display it with `terraform output -json`
This is a UI feature, not a security feature. It prevents secrets from being displayed in CI/CD logs, but does NOT encrypt them in state.
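The points above are easy to verify from the CLI (the output name here is the `database_password` output declared just above):

```shell
# Hidden in the normal listing:
terraform output
# database_password = <sensitive>

# ...but revealed by either of these:
terraform output -json
terraform output -raw database_password
```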
Outputs for Module Communication
This is the primary purpose of outputs—sharing data between modules.
```hcl
# networking/outputs.tf
output "vpc_id" {
  description = "ID of the VPC"
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "IDs of public subnets"
  value       = aws_subnet.public[*].id
}

output "private_subnet_ids" {
  description = "IDs of private subnets"
  value       = aws_subnet.private[*].id
}
```
```hcl
# compute/main.tf
module "networking" {
  source = "../networking"
}

resource "aws_instance" "web" {
  subnet_id = module.networking.public_subnet_ids[0]
  # ...
}

resource "aws_db_instance" "main" {
  db_subnet_group_name = aws_db_subnet_group.private.name
  # ...
}

resource "aws_db_subnet_group" "private" {
  subnet_ids = module.networking.private_subnet_ids
}
```
This pattern—modules exposing outputs for other modules to consume—is the foundation of composable infrastructure.
📊 Local Values: Intermediate Calculations
What is a Local Value?
A local value is like a variable, but internal to your module. It's not exposed to users; it's just a convenient way to name complex expressions and avoid repetition.
```hcl
locals {
  # Compute once, use many times
  common_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    Project     = var.project_name
    CreatedAt   = timestamp()
  }

  # Transform user input
  resource_name_prefix = "${var.project_name}-${var.environment}"

  # Conditional values
  instance_type = var.environment == "prod" ? "t3.large" : "t3.micro"
}

# Usage
resource "aws_instance" "web" {
  instance_type = local.instance_type
  tags = merge(local.common_tags, {
    Name = "${local.resource_name_prefix}-web"
  })
}
```
Local values are evaluated during plan and apply, as their dependencies become known. They can reference variables, resource attributes, functions, and other local values.
Local Values vs. Variables
| | Input Variables | Local Values |
|---|---|---|
| Purpose | Accept user input | Internal calculations |
| Exposed to users | Yes | No |
| Can have defaults | Yes | Always computed |
| Can reference resources | No | Yes |
| Can use functions | Limited | Full |
| Validation | Yes | No |
Rule of thumb: If a value is derived from other values and never directly set by the user, use a local.
Common Local Value Patterns
1. Derived names
```hcl
locals {
  name_prefix   = "${var.project}-${var.environment}"
  bucket_name   = "${local.name_prefix}-${random_string.suffix.result}"
  cluster_name  = "${local.name_prefix}-eks"
  database_name = "${local.name_prefix}-db"
}
```
2. Conditional configuration
```hcl
locals {
  is_production = var.environment == "prod"

  # Resource sizing
  instance_type     = local.is_production ? "t3.large" : "t3.micro"
  db_instance_class = local.is_production ? "db.t3.large" : "db.t3.small"
  min_size          = local.is_production ? 3 : 1
  max_size          = local.is_production ? 10 : 3

  # Feature flags
  enable_monitoring = local.is_production || var.environment == "staging"
  enable_backups    = local.is_production
  multi_az          = local.is_production
}
```
3. Complex transformations
```hcl
locals {
  # Convert list of subnet CIDRs to list of subnet objects
  public_subnets = [
    for idx, cidr in var.public_subnet_cidrs : {
      cidr_block = cidr
      az         = var.availability_zones[idx % length(var.availability_zones)]
      tags = {
        Name = "${var.name}-public-${idx + 1}"
        Type = "public"
      }
    }
  ]

  # Combine lists of subnet objects
  all_subnets = concat(local.public_subnets, local.private_subnets)

  # Group subnets by AZ
  subnets_by_az = {
    for subnet in local.all_subnets : subnet.az => subnet...
  }
}
```
4. Reusable templates
```hcl
locals {
  # IAM policy templates
  s3_read_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:ListBucket"
        ]
        Resource = [
          var.bucket_arn,
          "${var.bucket_arn}/*"
        ]
      }
    ]
  })

  # User data scripts
  user_data = <<-EOF
    #!/bin/bash
    echo "${var.environment}" > /etc/environment
    systemctl enable webapp
    systemctl start webapp
  EOF
}
```
🎯 Variable Assignment: Six Ways to Set Values
Method 1: Default Values (Least Specific)
```hcl
variable "instance_type" {
  type    = string
  default = "t2.micro"
}
```
Use for: Sensible defaults that work in most cases.
Method 2: Command-line Flag
```shell
terraform apply -var="instance_type=t3.large"
terraform apply -var='instance_types=["t3.micro","t3.small"]'
terraform apply -var='tags={Environment="dev",Project="webapp"}'
```
Use for: Ad-hoc overrides, testing.
Method 3: Variable Definition Files (.tfvars)
terraform.tfvars:
```hcl
instance_type = "t3.large"
environment   = "production"
```
terraform.tfvars.json:
```json
{
  "instance_type": "t3.large",
  "environment": "production"
}
```
Use for: Environment-specific configurations (dev.tfvars, prod.tfvars).
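Named files like these are not auto-loaded; pass them explicitly with `-var-file` (the filenames follow the convention in the parenthetical above):

```shell
terraform plan  -var-file="dev.tfvars"
terraform apply -var-file="prod.tfvars"
```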
Method 4: Environment Variables
```shell
export TF_VAR_instance_type="t3.large"
export TF_VAR_instance_types='["t3.micro","t3.small"]'
terraform plan
```
Use for: CI/CD pipelines, avoiding secrets in files.
Method 5: Auto-loading Files
Terraform automatically loads:
- `terraform.tfvars` or `terraform.tfvars.json`
- Any files ending in `.auto.tfvars` or `.auto.tfvars.json`
Use for: Default configurations, environment-specific overrides.
Method 6: Variable Precedence (Highest to Lowest)
1. `-var` command-line flag
2. `-var-file` command-line flag
3. `*.auto.tfvars` files (alphabetical order)
4. `terraform.tfvars` file
5. Environment variables (`TF_VAR_*`)
6. Default value in the variable declaration
Understanding precedence is critical for team workflows. Production settings should be specified in a way that developers cannot accidentally override them.
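One way to apply this in practice (a hypothetical CI invocation, not a prescribed setup): because `-var-file` outranks every file-based source, having the pipeline pass the production file explicitly means a stray `*.auto.tfvars` committed by a developer cannot silently override production values.

```shell
# CI pipeline: prod values are pinned via -var-file, which takes
# precedence over any auto-loaded *.auto.tfvars files.
terraform apply -input=false -var-file="prod.tfvars"
```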
🏗️ Organizational Patterns
Pattern 1: Root Module Variables
For simple configurations, keep variables in the root directory:
```
project/
├── main.tf
├── variables.tf              # All variable declarations
├── outputs.tf                # All output declarations
├── terraform.tfvars          # Environment-specific values (gitignored)
└── terraform.tfvars.example  # Template (committed)
```
```hcl
variable "environment" {
  description = "Deployment environment"
  type        = string
}

variable "project_name" {
  description = "Name of the project"
  type        = string
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

# ... more variables
```
Pattern 2: Module Interface Variables
For reusable modules, be explicit about the interface:
```
modules/
└── eks-cluster/
    ├── main.tf
    ├── variables.tf  # Only what the module needs
    ├── outputs.tf    # Only what callers need
    └── README.md     # Required for team adoption
```

```hcl
# REQUIRED VARIABLES (no defaults)
variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
}

variable "subnet_ids" {
  description = "Subnet IDs for EKS cluster"
  type        = list(string)
}

# OPTIONAL VARIABLES (with sensible defaults)
variable "kubernetes_version" {
  description = "Kubernetes version"
  type        = string
  default     = "1.28"
}

variable "node_instance_types" {
  description = "EC2 instance types for node group"
  type        = list(string)
  default     = ["t3.medium"]
}

variable "node_group_min_size" {
  description = "Minimum size of node group"
  type        = number
  default     = 1
}

variable "node_group_max_size" {
  description = "Maximum size of node group"
  type        = number
  default     = 5
}

variable "enable_cluster_logging" {
  description = "EKS control plane log types to enable"
  type        = list(string)
  default     = ["api", "audit", "authenticator"]
}
```
Pattern 3: Environment Configuration Directories
For multi-environment deployments:
```
environments/
├── dev/
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   ├── terraform.tfvars   # dev-specific values
│   └── backend.tf         # dev state location
├── staging/
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   ├── terraform.tfvars   # staging-specific values
│   └── backend.tf         # staging state location
└── prod/
    ├── main.tf
    ├── variables.tf
    ├── outputs.tf
    ├── terraform.tfvars   # prod-specific values
    └── backend.tf         # prod state location
```

dev/terraform.tfvars:
```hcl
environment       = "dev"
instance_type     = "t2.micro"
min_size          = 1
max_size          = 2
enable_monitoring = false
```
prod/terraform.tfvars:
```hcl
environment       = "prod"
instance_type     = "t3.large"
min_size          = 3
max_size          = 10
enable_monitoring = true
enable_backups    = true
multi_az          = true
```
Pattern 4: Terragrunt (For Advanced Teams)
Terragrunt is a thin wrapper that provides DRY configuration for variables across environments:
```
live/
├── terragrunt.hcl            # Root configuration
├── dev/
│   ├── terragrunt.hcl        # Dev-specific overrides
│   ├── vpc/
│   │   └── terragrunt.hcl
│   └── eks/
│       └── terragrunt.hcl
└── prod/
    ├── terragrunt.hcl
    ├── vpc/
    │   └── terragrunt.hcl
    └── eks/
        └── terragrunt.hcl
```

Terragrunt solves the "where do I put my tfvars files?" problem elegantly, but it adds another tool to your stack. Evaluate whether your team needs this complexity.
✅ Best Practices Checklist
Variable Declaration
- Every variable has a `description` explaining its purpose
- Every variable has a `type` constraint (no `type = any` without reason)
- Variables without defaults are required (makes the interface clear)
- Variables with defaults are optional and well-documented
- `sensitive = true` is set for any variable containing secrets
- Variable names are `snake_case` and descriptive
Variable Validation
- Critical business rules are enforced with `validation` blocks
- Validation error messages are user-friendly and actionable
- Validation runs early (in `plan`, not waiting for API errors)
- Lists and maps are validated for expected structure
- Cross-field validation is used when values are interdependent
Variable Assignment
- Secrets are never hardcoded in `.tfvars` files committed to Git
- Secrets are never set as default values
- Environment-specific values are in separate `.tfvars` files
- `.tfvars` files containing secrets are in `.gitignore`
- Template `.tfvars.example` files are committed with fake values
Output Values
- Every output has a `description` explaining what it is
- Outputs expose only what other modules need
- Sensitive outputs are marked `sensitive = true`
- Outputs are structured for readability (use objects for related values)
- Outputs don't duplicate information already available elsewhere
Local Values
- Repeated expressions are extracted into `locals` blocks
- Complex transformations are documented with comments
- Local names are descriptive and consistently formatted
- Locals are used for conditional logic that appears multiple times
Module Design
- Module variables define the minimum required interface
- Module variables have sensible defaults for optional features
- Module outputs expose the minimum necessary information
- Module variables are validated at the module boundary
- Module README includes all variables and outputs
🎓 Practice Exercises
Exercise 1: Variable Declaration and Validation
Task: Create a module for an S3 bucket with proper variable declarations.
Requirements:
Bucket name (required)
Environment (dev/staging/prod) with validation
Enable versioning (optional, default false)
Enable encryption (optional, default true)
Tags (optional map, with default tags including Environment and ManagedBy)
Lifecycle rules (optional object with expiration_days and noncurrent_version_expiration_days)
Solution:
```hcl
# variables.tf
variable "bucket_name" {
  description = "Name of the S3 bucket (must be globally unique)"
  type        = string
}

variable "environment" {
  description = "Deployment environment"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "enable_versioning" {
  description = "Enable S3 bucket versioning"
  type        = bool
  default     = false
}

variable "enable_encryption" {
  description = "Enable default encryption with S3 managed keys"
  type        = bool
  default     = true
}

variable "tags" {
  description = "Tags to apply to the bucket"
  type        = map(string)
  default     = {}
}

variable "lifecycle_rules" {
  description = "Lifecycle configuration rules"
  type = object({
    expiration_days                    = optional(number)
    noncurrent_version_expiration_days = optional(number)
  })
  default = {}
}

# locals.tf
locals {
  default_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    # Note: timestamp() changes on every apply, causing a perpetual
    # diff on this tag; consider a static value in production.
    CreatedAt = timestamp()
  }

  merged_tags = merge(local.default_tags, var.tags)

  # True when at least one lifecycle setting was provided.
  # (length() on an object counts its attributes, which here is
  # always 2, so it cannot be used to detect "no rules given".)
  lifecycle_enabled = (
    var.lifecycle_rules.expiration_days != null ||
    var.lifecycle_rules.noncurrent_version_expiration_days != null
  )
}

# main.tf
resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
  tags   = local.merged_tags
}

resource "aws_s3_bucket_versioning" "this" {
  count  = var.enable_versioning ? 1 : 0
  bucket = aws_s3_bucket.this.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  count  = var.enable_encryption ? 1 : 0
  bucket = aws_s3_bucket.this.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "this" {
  count  = local.lifecycle_enabled ? 1 : 0
  bucket = aws_s3_bucket.this.id

  rule {
    id     = "default-lifecycle"
    status = "Enabled"

    dynamic "expiration" {
      for_each = var.lifecycle_rules.expiration_days != null ? [1] : []
      content {
        days = var.lifecycle_rules.expiration_days
      }
    }

    dynamic "noncurrent_version_expiration" {
      for_each = var.lifecycle_rules.noncurrent_version_expiration_days != null ? [1] : []
      content {
        noncurrent_days = var.lifecycle_rules.noncurrent_version_expiration_days
      }
    }
  }
}

# outputs.tf
output "bucket_id" {
  description = "ID of the created bucket"
  value       = aws_s3_bucket.this.id
}

output "bucket_arn" {
  description = "ARN of the created bucket"
  value       = aws_s3_bucket.this.arn
}

output "bucket_regional_domain_name" {
  description = "Regional domain name of the bucket"
  value       = aws_s3_bucket.this.bucket_regional_domain_name
}
```
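To see the interface from the consumer's side, here is how a caller might invoke this module. The `source` path, module label, and all values are illustrative assumptions, not part of the exercise:

```hcl
# Illustrative usage of the bucket module; path and values are assumed
module "logs_bucket" {
  source = "./modules/s3-bucket"

  bucket_name       = "acme-app-logs-prod"
  environment       = "prod"
  enable_versioning = true

  # Merged on top of the module's default tags
  tags = {
    Team = "platform"
  }

  # Only expiration_days is set; the other attribute stays null
  lifecycle_rules = {
    expiration_days = 90
  }
}
```

Notice that the caller only sets what differs from the defaults, which is exactly what a well-designed variable interface enables.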
Exercise 2: Variable Precedence
Task: Given the following variable declaration and multiple assignment methods, determine what value Terraform will use.
```hcl
variable "instance_count" {
  description = "Number of instances"
  type        = number
  default     = 2
}
```
Assignment methods:
1. terraform.tfvars contains: instance_count = 3
2. dev.auto.tfvars contains: instance_count = 4
3. Environment variable: TF_VAR_instance_count=5
4. Command line: terraform apply -var="instance_count=6"
Question: What value will var.instance_count have?
Answer: 6 (command line flag has highest precedence)
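The full precedence order, per Terraform's documentation, is worth memorizing. Sketched from lowest to highest, with the values from this exercise:

```shell
# Terraform variable precedence, lowest to highest (later sources win):
#   1. default in the variable block            (2)
#   2. TF_VAR_* environment variables           (5)
#   3. terraform.tfvars / terraform.tfvars.json (3)
#   4. *.auto.tfvars, in alphabetical order     (4)
#   5. -var and -var-file on the command line   (6)  <- wins
```

A common surprise: TF_VAR_* environment variables rank below every .tfvars file, so here the environment variable's 5 loses to terraform.tfvars's 3.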
Exercise 3: Complex Variable Transformation
Task: Transform a simple variable into a more useful structure using locals.
Input:
```hcl
variable "subnet_config" {
  type = list(object({
    cidr_block = string
    type       = string # "public" or "private"
    az_index   = number # 0, 1, 2
  }))
}
```
Requirements:
Group subnets by type (public/private)
Add name tags based on type and index
Ensure subnets are sorted by AZ index
Solution:
```hcl
locals {
  # Add computed attributes to each subnet.
  # Assumes var.environment and var.availability_zones are declared elsewhere.
  processed_subnets = [
    for idx, subnet in var.subnet_config : {
      id         = idx
      cidr_block = subnet.cidr_block
      type       = subnet.type
      az         = var.availability_zones[subnet.az_index]
      name_tag   = "${var.environment}-${subnet.type}-${idx + 1}"
      az_index   = subnet.az_index
    }
  ]

  # Group by type
  public_subnets = [
    for subnet in local.processed_subnets : subnet
    if subnet.type == "public"
  ]

  private_subnets = [
    for subnet in local.processed_subnets : subnet
    if subnet.type == "private"
  ]

  # Sort by AZ index. sort() orders strings, so zero-pad the index and
  # look each subnet back up by its padded key. (Assumes at most one
  # subnet of each type per AZ index.)
  public_subnets_sorted = [
    for key in sort([for s in local.public_subnets : format("%02d", s.az_index)]) :
    [for s in local.public_subnets : s if format("%02d", s.az_index) == key][0]
  ]

  private_subnets_sorted = [
    for key in sort([for s in local.private_subnets : format("%02d", s.az_index)]) :
    [for s in local.private_subnets : s if format("%02d", s.az_index) == key][0]
  ]
}

# Usage
resource "aws_subnet" "public" {
  count             = length(local.public_subnets_sorted)
  vpc_id            = aws_vpc.main.id
  cidr_block        = local.public_subnets_sorted[count.index].cidr_block
  availability_zone = local.public_subnets_sorted[count.index].az

  tags = {
    Name = local.public_subnets_sorted[count.index].name_tag
  }
}
```
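To make the transformation concrete, here is a hypothetical input and the shape it produces, assuming environment is "dev" and availability_zones is ["us-east-1a", "us-east-1b"] (all values are illustrative):

```hcl
# terraform.tfvars (illustrative values)
subnet_config = [
  { cidr_block = "10.0.1.0/24", type = "public",  az_index = 0 },
  { cidr_block = "10.0.2.0/24", type = "private", az_index = 1 },
]

# With those inputs, local.public_subnets contains one element
# shaped roughly like:
#   {
#     id         = 0
#     cidr_block = "10.0.1.0/24"
#     type       = "public"
#     az         = "us-east-1a"
#     name_tag   = "dev-public-1"
#     az_index   = 0
#   }
```

Evaluating locals against sample inputs like this in terraform console is a quick way to verify the transformation before wiring it into resources.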
📚 Summary: Variables Are the Interface
Variables and outputs define the contract between your configuration and its users.
| | Input Variables | Output Values | Local Values |
|---|---|---|---|
| Purpose | Accept user input | Expose information | Internal calculations |
| User-facing | Yes | Yes | No |
| Can have defaults | Yes | Always computed | Always computed |
| Can be sensitive | Yes | Yes | No |
| Validation | Yes | No | No |
The mark of well-designed Terraform is clear, well-documented variables and outputs. A user should be able to understand how to use a module just by reading its variables.tf and outputs.tf files.
Remember:
Variables without defaults are required
Variables with defaults are optional
Every variable needs a description
Every variable needs a type
Sensitive values need sensitive = true and a secrets management strategy
Outputs should expose the minimum necessary information
Locals should DRY up repeated expressions
🔗 Master Terraform Variables with Hands-on Labs
Theory is essential, but practice is where you build confidence. The best way to master Terraform variables is to use them in real scenarios.
👉 Practice variable declaration, validation, and complex data structures in our interactive labs at:
https://devops.trainwithsky.com/
Our platform provides:
Real-time variable validation exercises
Complex data structure challenges
Secret management scenarios
Multi-environment configuration labs
Module interface design workshops
Frequently Asked Questions
Q: Should I use map or object for configuration data?
A: Use object when the structure is fixed and you know all the keys in advance. Use map when keys are dynamic (user-provided) and every value shares the same type, such as an arbitrary set of tags.
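For instance, contrast these two declarations (the variable names are illustrative):

```hcl
# object: fixed, known keys, values of mixed types
variable "db_config" {
  type = object({
    engine         = string
    instance_class = string
    multi_az       = bool
  })
}

# map: arbitrary user-chosen keys, every value the same type
variable "extra_tags" {
  type    = map(string)
  default = {}
}
```

Terraform validates object attributes by name at plan time, so misspelled or missing keys fail early; a map accepts any keys the user supplies.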
Q: Can I reference other variables in a variable's default value?
A: No. Variable defaults cannot reference other variables, locals, or resource attributes. Use locals for derived values.
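The usual workaround looks like this (variable and local names here are illustrative):

```hcl
variable "project" {
  type = string
}

# A default cannot reference var.project, so accept an empty
# sentinel value instead...
variable "bucket_prefix" {
  type    = string
  default = ""
}

# ...and derive the effective value in a local
locals {
  bucket_prefix = var.bucket_prefix != "" ? var.bucket_prefix : "${var.project}-data"
}
```

Resources then reference local.bucket_prefix rather than the raw variable.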
Q: How do I handle optional attributes in object variables?
A: Use the optional() modifier (Terraform 1.3+):
```hcl
variable "config" {
  type = object({
    required_attribute = string
    optional_attribute = optional(string, "default-value")
  })
}
```
Q: Why can't I use interpolation in variable default values?
A: Variable defaults are evaluated before any resources exist, before locals are evaluated, and before any expressions can be resolved. This is by design—variable defaults should be static.
Q: How do I debug variable values?
A: Use terraform console to evaluate expressions interactively, or use output blocks to display values during plan/apply.
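A typical console session looks like this; the expressions shown are examples, and results depend on your configuration:

```shell
# Evaluate expressions interactively against the current configuration
terraform console

# Inside the console, try expressions such as:
#   var.instance_count
#   local.merged_tags
#   [for s in var.subnet_config : s.cidr_block]
# Type exit (or Ctrl-D) to leave the console.
```

Because the console evaluates real variable and local values, it is the fastest way to check what a complex for expression or merge() actually produces.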
Q: Should I commit .tfvars files to Git?
A: Only commit .tfvars.example files with fake values. Never commit .tfvars files containing real secrets, even in private repositories.
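A minimal .gitignore sketch for this policy:

```
# .gitignore: keep real variable files out of version control
*.tfvars
*.tfvars.json

# Note: files like dev.tfvars.example do not match *.tfvars,
# so example files with fake values remain committable.
```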
Q: Can I use environment variables for complex types like lists and maps?
A: Yes! Terraform parses environment variables as HCL:
```shell
export TF_VAR_instance_types='["t3.micro","t3.small"]'
export TF_VAR_tags='{Environment="dev",Project="webapp"}'
```
Still have questions about Terraform variables? Confused about when to use maps vs. objects? Struggling with validation? Post your question in the comments below—our community is here to help! 💬