
A Guide to Terraform Variables, Outputs, and Best Practices


Your complete handbook for making Terraform configurations dynamic, reusable, and production-ready—from simple input variables to complex data structures.

📅 Published: Feb 2026
⏱️ Estimated Reading Time: 24 minutes
🏷️ Tags: Terraform Variables, Outputs, Data Types, Variable Validation, Terraform Best Practices


🎯 Introduction: From Hard-Coded to Dynamic Configurations

The Problem with Hard-Coded Values

Every beginner starts here:

hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"  # Hard-coded!
  instance_type = "t2.micro"               # Hard-coded!
  subnet_id     = "subnet-12345678"        # Hard-coded!
}

This works—once. Then you need to:

  • Deploy to a different region (different AMI ID)

  • Use a larger instance type for production

  • Share the configuration with a teammate

  • Create multiple environments (dev, staging, prod)

Suddenly, your "simple" configuration becomes a maintenance nightmare of copy-pasted files and manual edits.

Variables and outputs are the solution. They transform rigid, environment-specific configurations into flexible, reusable templates that work anywhere, for any environment, with any team.


What You'll Learn

By the end of this guide, you will understand:

✅ Input Variables — How to make your configurations parameterized and reusable
✅ Data Types — Strings, numbers, bools, lists, maps, objects, tuples
✅ Variable Validation — Ensuring users provide valid values
✅ Sensitive Variables — Handling secrets securely
✅ Output Values — Exposing information to users and other configurations
✅ Local Values — Creating intermediate calculations and clean abstractions
✅ Best Practices — Naming, organization, and team workflows


📥 Input Variables: The Parameters of Your Infrastructure

What is an Input Variable?

An input variable is a parameter to your Terraform configuration. It's how users pass information into your module without editing the source code.

hcl
# Declaration (what the variable is)
variable "instance_type" {
  description = "EC2 instance type for web servers"
  type        = string
  default     = "t2.micro"
}

# Usage (how to use it)
resource "aws_instance" "web" {
  instance_type = var.instance_type  # ← Reference with var.NAME
}

Think of variables like form fields: You define them once, and users fill them in each time they run Terraform.


Variable Declaration: The Anatomy

Every variable declaration has the same structure:

hcl
variable "name" {  # ← Required: Variable name (identifier)
  description = "..."  # ← Optional: Explain what this is for
  type        = ...    # ← Optional: Restrict allowed values
  default     = ...    # ← Optional: Fallback if not provided
  validation { ... }   # ← Optional: Custom validation rules
  sensitive   = true   # ← Optional: Hide from output
  nullable    = false  # ← Optional: Disallow null values
}

The only truly required part is the variable name. Everything else is optional—but in production code, you should always include at least description and type.


Variable Naming Conventions

Good variable names are obvious and self-documenting:

hcl
# ✅ Good - Clear purpose
variable "instance_type" {}
variable "vpc_cidr_block" {}
variable "enable_dns_hostnames" {}

# ❌ Bad - Vague or meaningless
variable "type" {}       # Type of what?
variable "val" {}        # Which value?
variable "foo" {}        # Seriously?

Naming best practices:

  • Use snake_case (lowercase with underscores)

  • Be specific but concise

  • Include units in the name when relevant (timeout_seconds, size_gb)

  • Boolean variables should start with enable_create_, or use_


🔢 Data Types: The Shape of Your Variables

The Seven Core Data Types

Type 1: String — Text values

hcl
variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
  default     = "dev"
}

# Usage
resource "aws_s3_bucket" "data" {
  bucket = "myapp-${var.environment}-data"
}

String validation:

hcl
variable "environment" {
  type = string
  
  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

Type 2: Number — Numeric values

hcl
variable "instance_count" {
  description = "Number of EC2 instances to launch"
  type        = number
  default     = 1
}

variable "disk_size_gb" {
  description = "Root volume size in gigabytes"
  type        = number
  default     = 20
  
  validation {
    condition     = var.disk_size_gb >= 10 && var.disk_size_gb <= 1000
    error_message = "Disk size must be between 10 GB and 1000 GB."
  }
}

# Usage
resource "aws_instance" "web" {
  count = var.instance_count
  
  root_block_device {
    volume_size = var.disk_size_gb
  }
}

Type 3: Bool — True/false values

hcl
variable "enable_versioning" {
  description = "Enable S3 bucket versioning"
  type        = bool
  default     = false
}

# Usage
resource "aws_s3_bucket_versioning" "this" {
  count = var.enable_versioning ? 1 : 0
  
  bucket = aws_s3_bucket.data.id
  versioning_configuration {
    status = "Enabled"
  }
}

Boolean naming convention: Use prefixes like enable_, create_, use_, or has_.


Type 4: List — Ordered sequence of values (same type)

hcl
variable "availability_zones" {
  description = "List of availability zones to deploy into"
  type        = list(string)
  default     = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

variable "subnet_cidrs" {
  description = "CIDR blocks for private subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

# Usage
resource "aws_subnet" "private" {
  count = length(var.subnet_cidrs)
  
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.subnet_cidrs[count.index]
  availability_zone = var.availability_zones[count.index]
}

List operations:

hcl
# Length of list
length(var.availability_zones)  # 3

# Access element (0-indexed)
var.subnet_cidrs[0]  # "10.0.1.0/24"

# First and last
var.subnet_cidrs[0]  # First
var.subnet_cidrs[length(var.subnet_cidrs) - 1]  # Last

# Slice (subset)
slice(var.subnet_cidrs, 0, 2)  # First two elements

Type 5: Map — Key-value pairs (all values same type)

hcl
variable "instance_tags" {
  description = "Tags to apply to all EC2 instances"
  type        = map(string)
  default = {
    Environment = "dev"
    ManagedBy   = "Terraform"
    Project     = "WebApp"
  }
}

variable "instance_types_by_env" {
  description = "Instance type for each environment"
  type        = map(string)
  default = {
    dev     = "t2.micro"
    staging = "t3.small"
    prod    = "t3.large"
  }
}

# Usage
resource "aws_instance" "web" {
  instance_type = var.instance_types_by_env[var.environment]
  
  tags = var.instance_tags
}

Map operations:

hcl
# Access value by key
var.instance_types_by_env["prod"]  # "t3.large"

# Keys and values
keys(var.instance_tags)    # ["Environment", "ManagedBy", "Project"]
values(var.instance_tags)  # ["dev", "Terraform", "WebApp"]

# Lookup with default
lookup(var.instance_types_by_env, "dr", "t2.micro")  # "t2.micro" (key not found)

Type 6: Object — Complex structures with named attributes (different types allowed)

hcl
variable "database_config" {
  description = "RDS database configuration"
  type = object({
    engine         = string
    engine_version = string
    instance_class = string
    storage_gb     = number
    multi_az       = bool
    backup_retention_days = number
    subnet_ids     = list(string)
  })
  
  default = {
    engine                = "postgres"
    engine_version        = "14.7"
    instance_class        = "db.t3.micro"
    storage_gb            = 100
    multi_az              = false
    backup_retention_days = 7
    subnet_ids            = []
  }
}

# Usage
resource "aws_db_instance" "main" {
  engine         = var.database_config.engine
  engine_version = var.database_config.engine_version
  instance_class = var.database_config.instance_class
  allocated_storage = var.database_config.storage_gb
  multi_az = var.database_config.multi_az
  backup_retention_period = var.database_config.backup_retention_days
  db_subnet_group_name = aws_db_subnet_group.main.name
}

Objects vs. Maps:

  • Maps have arbitrary keys, all values same type → map(string)

  • Objects have predefined keys, values can be different types → object({...})

Use objects when the structure is fixed and known in advance. Use maps when keys are dynamic or user-defined.
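
Since Terraform 1.3, object type constraints can also mark individual attributes as optional, which softens the "every key must be provided" strictness of objects. A brief sketch (the attribute names here are illustrative):

hcl
variable "database_config" {
  type = object({
    engine   = string                  # required attribute
    multi_az = optional(bool, false)   # optional; defaults to false if omitted
  })
}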


Type 7: Tuple — Fixed-length list with specific types per position

hcl
variable "cidr_configuration" {
  description = "CIDR blocks for VPC: [vpc_cidr, public_subnet_cidr, private_subnet_cidr]"
  type = tuple([string, string, string])
  default = ["10.0.0.0/16", "10.0.1.0/24", "10.0.2.0/24"]
}

# Usage
resource "aws_vpc" "main" {
  cidr_block = var.cidr_configuration[0]
}

resource "aws_subnet" "public" {
  cidr_block = var.cidr_configuration[1]
}

resource "aws_subnet" "private" {
  cidr_block = var.cidr_configuration[2]
}

Tuples are rare in practice. Objects are usually clearer because the attributes have names. Use tuples only when the position is semantically meaningful and well-documented.


Optional vs. Required Variables

Required variable (no default):

hcl
variable "vpc_id" {
  description = "ID of existing VPC to deploy into"
  type        = string
  # No default → users MUST provide this value
}

Optional variable (with default):

hcl
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"  # Optional - will use default if not provided
}

Conditionally optional (nullable = true):

hcl
variable "kms_key_id" {
  description = "KMS key ID for encryption (null = use AWS managed key)"
  type        = string
  default     = null  # Explicitly no value
  nullable    = true  # Allow null (default is true)
}

# Usage
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data"
  
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = var.kms_key_id != null ? "aws:kms" : "AES256"
        kms_master_key_id = var.kms_key_id
      }
    }
  }
}

Design principle: Make variables required by default, optional only when there's a sensible default that works for all use cases.


✅ Variable Validation: Trust But Verify

Why Validate Variables?

Users make mistakes. They might:

  • Typo an environment name (prd instead of prod)

  • Enter an invalid CIDR block (10.0.300.0/24)

  • Specify an unsupported instance type (t9.nano)

  • Forget to include required tags

Validation catches these errors early—during plan, not after apply when resources are already created.
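
A failed validation surfaces during terraform plan with the error message you declared. The exact formatting varies by Terraform version; an illustrative example:

text
$ terraform plan -var="environment=prd"

Error: Invalid value for variable

  on variables.tf line 1:
   1: variable "environment" {

Environment must be 'dev', 'staging', or 'prod'.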


Basic Validation Rules

hcl
variable "environment" {
  description = "Deployment environment"
  type        = string
  
  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be 'dev', 'staging', or 'prod'."
  }
}

variable "vpc_cidr_block" {
  description = "CIDR block for VPC"
  type        = string
  
  validation {
    condition = can(cidrnetmask(var.vpc_cidr_block))  # Valid CIDR?
    error_message = "VPC CIDR block must be a valid IPv4 CIDR range."
  }
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  
  validation {
    condition = can(regex("^t[23]\\.(micro|small|medium|large)$", var.instance_type))
    error_message = "Instance type must be a t2 or t3 family instance."
  }
}

Advanced Validation Patterns

Validate list contents:

hcl
variable "allowed_ports" {
  description = "Ports to allow in security group"
  type        = list(number)
  
  validation {
    condition = alltrue([
      for port in var.allowed_ports : port > 0 && port < 65536
    ])
    error_message = "All ports must be between 1 and 65535."
  }
  
  validation {
    condition = length(var.allowed_ports) == length(distinct(var.allowed_ports))
    error_message = "Ports must be unique (no duplicates)."
  }
}

Validate map contents:

hcl
variable "instance_types_by_env" {
  description = "Instance type per environment"
  type        = map(string)
  
  validation {
    condition = alltrue([
      for env, type in var.instance_types_by_env : 
      contains(["dev", "staging", "prod"], env)
    ])
    error_message = "Environment keys must be dev, staging, or prod."
  }
  
  validation {
    condition = alltrue([
      for env, type in var.instance_types_by_env :
      can(regex("^t[23]\\.", type))
    ])
    error_message = "All instance types must be from the t2 or t3 family."
  }
}

Validate object attributes:

hcl
variable "database_config" {
  description = "Database configuration"
  type = object({
    engine         = string
    engine_version = string
    storage_gb     = number
    backup_days    = number
  })
  
  validation {
    condition = contains(["postgres", "mysql", "aurora"], var.database_config.engine)
    error_message = "Engine must be postgres, mysql, or aurora."
  }
  
  validation {
    condition = var.database_config.storage_gb >= 20 && var.database_config.storage_gb <= 65536
    error_message = "Storage must be between 20 GB and 65536 GB."
  }
  
  validation {
    condition = var.database_config.backup_days >= 0 && var.database_config.backup_days <= 35
    error_message = "Backup retention must be between 0 and 35 days."
  }
}

Cross-field validation:

hcl
variable "vpc_config" {
  type = object({
    cidr_block      = string
    public_subnets  = list(string)
    private_subnets = list(string)
  })
  
  validation {
    # Every subnet value must at least be a valid CIDR block.
    # (A full "subnet is contained within the VPC CIDR" check requires
    # more involved address arithmetic and is omitted here.)
    condition = alltrue([
      for subnet in concat(var.vpc_config.public_subnets, var.vpc_config.private_subnets) :
      can(cidrnetmask(subnet))
    ])
    error_message = "All subnet values must be valid IPv4 CIDR blocks."
  }
  
  validation {
    # Number of public subnets must match number of AZs.
    # Note: referencing another variable inside a validation block
    # requires Terraform 1.9 or later.
    condition     = length(var.vpc_config.public_subnets) == length(var.availability_zones)
    error_message = "Number of public subnets must match number of availability zones."
  }
}

Validation Best Practices

1. Validate early, validate often

  • Check values as soon as they're received

  • Fail fast with clear error messages

  • Don't rely on providers to validate (their errors surface later, often not until apply)

2. Write user-friendly error messages

  • Tell the user what went wrong

  • Tell the user what is allowed

  • Don't use jargon or internal variable names

✅ Good:

text
Error: Invalid environment value
Environment must be 'dev', 'staging', or 'prod'.

❌ Bad:

text
Error: Invalid value for var.environment
Condition failed: contains(["dev","staging","prod"], var.environment)

3. Use helper functions

  • can() — Test if expression would succeed

  • contains() — Check if value is in list

  • length() — Validate collection size

  • alltrue() / anytrue() — Aggregate list conditions

  • regex() — Pattern matching for strings

  • cidrnetmask() — Validate CIDR blocks

4. Don't over-validate

  • Let providers validate things they're already good at

  • Focus on business logic and constraints

  • Avoid validation that duplicates provider validation


🤫 Sensitive Variables: Handling Secrets

The Problem with Secrets in Terraform

Terraform state is plain text JSON. Any value you pass to Terraform—even through variables—can end up in the state file in plain text.

hcl
# ❌ DANGEROUS - Secret in state file!
variable "database_password" {
  description = "RDS master password"
  type        = string
  sensitive   = true  # Hides from CLI output, BUT STILL IN STATE!
}

resource "aws_db_instance" "main" {
  password = var.database_password
}

The sensitive = true flag only affects CLI output. It does NOT encrypt the value in state. Anyone with access to the state file can read the password in plain text.
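
To make this concrete, here is roughly what that looks like inside terraform.tfstate (an illustrative excerpt, not a complete state document):

json
{
  "resources": [{
    "type": "aws_db_instance",
    "instances": [{
      "attributes": {
        "password": "my-secure-password"
      }
    }]
  }]
}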


The Solution: Never Store Secrets in State

Pattern 1: Use secrets manager and pass ARN, not value

hcl
# ✅ GOOD - Pass secret ARN, not secret value
variable "db_password_secret_arn" {
  description = "ARN of secret in AWS Secrets Manager containing database password"
  type        = string
}

data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = var.db_password_secret_arn
}

resource "aws_db_instance" "main" {
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}

Pattern 2: Use environment variables (not for state, but for applying)

hcl
# variable.tf
variable "database_password" {
  description = "RDS master password"
  type        = string
  sensitive   = true
}
bash
# Don't set in terraform.tfvars
# Set as environment variable
export TF_VAR_database_password="my-secure-password"

Pattern 3: External secret management tools

  • Vault — data "vault_generic_secret" "db_password" {...}

  • AWS Secrets Manager — data "aws_secretsmanager_secret_version" {...}

  • Azure Key Vault — data "azurerm_key_vault_secret" {...}

  • Google Secret Manager — data "google_secret_manager_secret_version" {...}


Sensitive Variable Best Practices

hcl
# 1. ALWAYS mark secret variables as sensitive
variable "api_key" {
  description = "API key for external service"
  type        = string
  sensitive   = true
}

# 2. Never set defaults for secrets
variable "api_key" {
  description = "API key for external service"
  type        = string
  # NO DEFAULT!
}

# 3. Never hardcode secrets in .tfvars files committed to Git
# ❌ terraform.tfvars:
# api_key = "sk_live_12345"

# ✅ terraform.tfvars.example (safe to commit):
# api_key = "sk_live_..."

Golden rule: If it's a secret, it should never appear in any file that is committed to version control—including state files. Use a secrets manager.
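
A matching .gitignore keeps real variable values and state files out of version control. A typical sketch (teams that deliberately commit non-secret environment .tfvars files will want a narrower pattern):

text
# .gitignore
*.tfvars
*.tfvars.json
!*.tfvars.example
.terraform/
terraform.tfstate
terraform.tfstate.backup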


📤 Output Values: Exposing Information

What is an Output Value?

An output value is information that Terraform exposes to users after apply. It's how your configuration communicates important details back to the person running it—or to other configurations.

hcl
output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "IDs of public subnets"
  value       = aws_subnet.public[*].id
}

After terraform apply, outputs are displayed:

text
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

vpc_id = "vpc-0a1b2c3d4e5f67890"
public_subnet_ids = [
  "subnet-12345678",
  "subnet-23456789",
  "subnet-34567890",
]

Output Declaration: The Anatomy

hcl
output "name" {          # ← Required: Output name (identifier)
  description = "..."    # ← Optional: Explain what this is
  value       = ...      # ← Required: Expression to evaluate
  sensitive   = true     # ← Optional: Hide from CLI display
  depends_on  = []       # ← Optional: Explicit dependencies
}

Outputs are always computed after all resources are created. They can reference:

  • Resource attributes (aws_instance.web.id)

  • Data source attributes (data.aws_ami.ubuntu.id)

  • Module outputs (module.vpc.vpc_id)

  • Literal values ("constant")

  • Complex expressions ([for i in aws_subnet.public : i.id])


Common Output Patterns

1. Expose resource identifiers

hcl
output "bucket_arn" {
  description = "ARN of created S3 bucket"
  value       = aws_s3_bucket.data.arn
}

output "instance_public_ips" {
  description = "Public IP addresses of web instances"
  value       = aws_instance.web[*].public_ip
}

2. Expose connection information

hcl
output "database_endpoint" {
  description = "Connection endpoint for RDS cluster"
  value = {
    address = aws_rds_cluster.main.endpoint
    port    = aws_rds_cluster.main.port
    database = aws_rds_cluster.main.database_name
  }
}

output "database_connection_string" {
  description = "JDBC connection string"
  value       = "jdbc:postgresql://${aws_rds_cluster.main.endpoint}/${aws_rds_cluster.main.database_name}"
  sensitive   = true  # Contains hostname and port
}

3. Expose URLs and endpoints

hcl
output "website_url" {
  description = "URL of S3 website"
  value       = aws_s3_bucket_website_configuration.main.website_endpoint
}

output "load_balancer_dns" {
  description = "DNS name of application load balancer"
  value       = aws_lb.main.dns_name
}

4. Expose summary information

hcl
output "vpc_summary" {
  description = "Summary of VPC configuration"
  value = {
    id         = aws_vpc.main.id
    cidr_block = aws_vpc.main.cidr_block
    public_subnet_count  = length(aws_subnet.public)
    private_subnet_count = length(aws_subnet.private)
  }
}

Sensitive Outputs

hcl
output "database_password" {
  description = "Master password for RDS instance"
  value       = random_password.db.result
  sensitive   = true  # Hidden from CLI output
}

When sensitive = true:

  • Output is hidden in terraform apply and terraform output

  • Output is still stored in state in plain text

  • Other configurations can still read it via terraform_remote_state

  • You can still force-display with terraform output -json

This is a UI feature, not a security feature. It prevents secrets from being displayed in CI/CD logs, but does NOT encrypt them in state.
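
The masking behaves like this on the command line (illustrative; exact behavior varies slightly across Terraform versions):

bash
terraform output                          # sensitive values shown as <sensitive>
terraform output -json                    # full values, including sensitive ones
terraform output -raw database_password   # prints the raw value for scripting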


Outputs for Module Communication

This is the primary purpose of outputs—sharing data between modules.

hcl
# networking/outputs.tf
output "vpc_id" {
  description = "ID of the VPC"
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "IDs of public subnets"
  value       = aws_subnet.public[*].id
}

output "private_subnet_ids" {
  description = "IDs of private subnets"
  value       = aws_subnet.private[*].id
}
hcl
# compute/main.tf
module "networking" {
  source = "../networking"
}

resource "aws_instance" "web" {
  subnet_id = module.networking.public_subnet_ids[0]
  # ...
}

resource "aws_db_instance" "main" {
  db_subnet_group_name = aws_db_subnet_group.private.name
  # ...
}

resource "aws_db_subnet_group" "private" {
  subnet_ids = module.networking.private_subnet_ids
}

This pattern—modules exposing outputs for other modules to consume—is the foundation of composable infrastructure.
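
When the configurations are applied separately rather than called as modules, the same outputs can be consumed through remote state. A sketch assuming an S3 backend (the bucket name and state key are placeholders):

hcl
data "terraform_remote_state" "networking" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"             # placeholder bucket name
    key    = "networking/terraform.tfstate"   # placeholder state key
    region = "us-west-2"
  }
}

resource "aws_instance" "web" {
  subnet_id = data.terraform_remote_state.networking.outputs.public_subnet_ids[0]
}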


📊 Local Values: Intermediate Calculations

What is a Local Value?

A local value is like a variable, but internal to your module. It's not exposed to users; it's just a convenient way to name complex expressions and avoid repetition.

hcl
locals {
  # Compute once, use many times
  common_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    Project     = var.project_name
    CreatedAt   = timestamp()  # Caution: changes on every run, causing perpetual tag diffs
  }
  
  # Transform user input
  resource_name_prefix = "${var.project}-${var.environment}"
  
  # Conditional values
  instance_type = var.environment == "prod" ? "t3.large" : "t3.micro"
}

# Usage
resource "aws_instance" "web" {
  instance_type = local.instance_type
  
  tags = merge(local.common_tags, {
    Name = "${local.resource_name_prefix}-web"
  })
}

Local values are evaluated as part of Terraform's dependency graph. They can reference variables, resource attributes, functions, and other local values.


Local Values vs. Variables

text
                          Input Variables      Local Values
Purpose                   Accept user input    Internal calculations
Exposed to users          Yes                  No
Can have defaults         Yes                  Always computed
Can reference resources   No                   Yes
Can use functions         Limited              Full
Validation                Yes                  No

Rule of thumb: If a value is derived from other values and never directly set by the user, use a local.


Common Local Value Patterns

1. Derived names

hcl
locals {
  name_prefix = "${var.project}-${var.environment}"
  
  bucket_name = "${local.name_prefix}-${random_string.suffix.result}"
  cluster_name = "${local.name_prefix}-eks"
  database_name = "${local.name_prefix}-db"
}

2. Conditional configuration

hcl
locals {
  is_production = var.environment == "prod"
  
  # Resource sizing
  instance_type = local.is_production ? "t3.large" : "t3.micro"
  db_instance_class = local.is_production ? "db.t3.large" : "db.t3.small"
  min_size = local.is_production ? 3 : 1
  max_size = local.is_production ? 10 : 3
  
  # Feature flags
  enable_monitoring = local.is_production || var.environment == "staging"
  enable_backups = local.is_production
  multi_az = local.is_production
}

3. Complex transformations

hcl
locals {
  # Convert list of subnet CIDRs to list of subnet objects
  public_subnets = [
    for idx, cidr in var.public_subnet_cidrs : {
      cidr_block = cidr
      az         = var.availability_zones[idx % length(var.availability_zones)]
      tags = {
        Name = "${var.name}-public-${idx + 1}"
        Type = "public"
      }
    }
  ]
  
  # Flatten list of maps
  all_subnets = concat(local.public_subnets, local.private_subnets)
  
  # Group resources by AZ
  subnets_by_az = {
    for subnet in local.all_subnets :
    subnet.az => subnet...
  }
}

4. Reusable templates

hcl
locals {
  # IAM policy templates
  s3_read_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:ListBucket"
        ]
        Resource = [
          var.bucket_arn,
          "${var.bucket_arn}/*"
        ]
      }
    ]
  })
  
  # User data scripts
  user_data = <<-EOF
    #!/bin/bash
    echo "${var.environment}" > /etc/environment
    systemctl enable webapp
    systemctl start webapp
  EOF
}

🎯 Variable Assignment: Six Ways to Set Values

Method 1: Default Values (Least Specific)

hcl
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

Use for: Sensible defaults that work in most cases.


Method 2: Command-line Flag

bash
terraform apply -var="instance_type=t3.large"
terraform apply -var='instance_types=["t3.micro","t3.small"]'
terraform apply -var='tags={Environment="dev",Project="webapp"}'

Use for: Ad-hoc overrides, testing.


Method 3: Variable Definition Files (.tfvars)

terraform.tfvars:

hcl
instance_type = "t3.large"
environment   = "production"

terraform.tfvars.json:

json
{
  "instance_type": "t3.large",
  "environment": "production"
}

Use for: Environment-specific configurations (dev.tfvars, prod.tfvars).
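
Non-default .tfvars files are selected explicitly with -var-file:

bash
terraform apply -var-file="dev.tfvars"
terraform apply -var-file="prod.tfvars"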


Method 4: Environment Variables

bash
export TF_VAR_instance_type="t3.large"
export TF_VAR_instance_types='["t3.micro","t3.small"]'
terraform plan

Use for: CI/CD pipelines, avoiding secrets in files.


Method 5: Auto-loading Files

Terraform automatically loads:

  • terraform.tfvars or terraform.tfvars.json

  • Any files ending in .auto.tfvars or .auto.tfvars.json

Use for: Default configurations, environment-specific overrides.


Method 6: Variable Precedence (Highest to Lowest)

  1. -var and -var-file command-line options, in the order they are given

  2. *.auto.tfvars and *.auto.tfvars.json files (alphabetical order)

  3. terraform.tfvars.json file

  4. terraform.tfvars file

  5. Environment variables (TF_VAR_*)

  6. Default value in the variable declaration

Understanding precedence is critical for team workflows. Production settings should be specified in a way that developers cannot accidentally override them.
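
As a quick illustration, assuming terraform.tfvars sets instance_type = "t2.micro":

bash
terraform plan                                # uses t2.micro (from terraform.tfvars)
terraform plan -var="instance_type=t3.large"  # CLI flag wins: uses t3.large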


🏗️ Organizational Patterns

Pattern 1: Root Module Variables

For simple configurations, keep variables in the root directory:

text
project/
├── main.tf
├── variables.tf      # All variable declarations
├── outputs.tf        # All output declarations
├── terraform.tfvars  # Environment-specific values (gitignored)
└── terraform.tfvars.example  # Template (committed)

variables.tf:

hcl
variable "environment" {
  description = "Deployment environment"
  type        = string
}

variable "project_name" {
  description = "Name of the project"
  type        = string
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

# ... more variables

Pattern 2: Module Interface Variables

For reusable modules, be explicit about the interface:

text
modules/
└── eks-cluster/
    ├── main.tf
    ├── variables.tf      # Only what the module needs
    ├── outputs.tf        # Only what callers need
    └── README.md         # Required for team adoption

variables.tf:

hcl
# REQUIRED VARIABLES (no defaults)

variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
}

variable "subnet_ids" {
  description = "Subnet IDs for EKS cluster"
  type        = list(string)
}

# OPTIONAL VARIABLES (with sensible defaults)

variable "kubernetes_version" {
  description = "Kubernetes version"
  type        = string
  default     = "1.28"
}

variable "node_instance_types" {
  description = "EC2 instance types for node group"
  type        = list(string)
  default     = ["t3.medium"]
}

variable "node_group_min_size" {
  description = "Minimum size of node group"
  type        = number
  default     = 1
}

variable "node_group_max_size" {
  description = "Maximum size of node group"
  type        = number
  default     = 5
}

variable "cluster_enabled_log_types" {
  description = "EKS control plane log types to enable"
  type        = list(string)
  default     = ["api", "audit", "authenticator"]
}

Pattern 3: Environment Configuration Directories

For multi-environment deployments:

text
environments/
├── dev/
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   ├── terraform.tfvars      # dev-specific values
│   └── backend.tf           # dev state location
├── staging/
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   ├── terraform.tfvars      # staging-specific values
│   └── backend.tf           # staging state location
└── prod/
    ├── main.tf
    ├── variables.tf
    ├── outputs.tf
    ├── terraform.tfvars      # prod-specific values
    └── backend.tf           # prod state location

dev/terraform.tfvars:

hcl
environment      = "dev"
instance_type    = "t2.micro"
min_size         = 1
max_size         = 2
enable_monitoring = false

prod/terraform.tfvars:

hcl
environment      = "prod"
instance_type    = "t3.large"
min_size         = 3
max_size         = 10
enable_monitoring = true
enable_backups    = true
multi_az         = true
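
With this layout, each environment is applied from its own directory, and Terraform auto-loads the local terraform.tfvars:

bash
cd environments/prod
terraform init
terraform apply   # picks up prod's terraform.tfvars and backend.tf automatically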

Pattern 4: Terragrunt (For Advanced Teams)

Terragrunt is a thin wrapper that provides DRY configuration for variables across environments:

text
live/
├── terragrunt.hcl              # Root configuration
├── dev/
│   ├── terragrunt.hcl         # Dev-specific overrides
│   ├── vpc/
│   │   └── terragrunt.hcl
│   └── eks/
│       └── terragrunt.hcl
└── prod/
    ├── terragrunt.hcl
    ├── vpc/
    │   └── terragrunt.hcl
    └── eks/
        └── terragrunt.hcl

Terragrunt solves the "where do I put my tfvars files?" problem elegantly, but adds another tool to your stack. Evaluate whether your team needs this complexity.
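
A child terragrunt.hcl typically inherits shared settings from the root and supplies environment-specific inputs. A minimal sketch (paths and values are illustrative, and the exact include syntax depends on your Terragrunt version):

hcl
# dev/vpc/terragrunt.hcl
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "../../../modules/vpc"   # placeholder module path
}

inputs = {
  environment = "dev"
  vpc_cidr    = "10.0.0.0/16"
}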


✅ Best Practices Checklist

Variable Declaration

  • Every variable has a description explaining its purpose

  • Every variable has a type constraint (no type = any without reason)

  • Variables without defaults are required (makes interface clear)

  • Variables with defaults are optional and well-documented

  • sensitive = true is set for any variable containing secrets

  • Variable names are snake_case and descriptive

Variable Validation

  • Critical business rules are enforced with validation blocks

  • Validation error messages are user-friendly and actionable

  • Validation runs early (in plan, not waiting for API errors)

  • Lists and maps are validated for expected structure

  • Cross-field validation is used when values are interdependent

Variable Assignment

  • Secrets are never hardcoded in .tfvars files committed to Git

  • Secrets are never set as default values

  • Environment-specific values are in separate .tfvars files

  • .tfvars files containing secrets are in .gitignore

  • Template .tfvars.example files are committed with fake values

Output Values

  • Every output has a description explaining what it is

  • Outputs expose only what other modules need

  • Sensitive outputs are marked sensitive = true

  • Outputs are structured for readability (use objects for related values)

  • Outputs don't duplicate information already available elsewhere
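A structured output groups related values so consumers read one object instead of three loose outputs. The resource name here is an assumption for illustration:

hcl
output "database" {
  description = "Connection details for the primary database"
  value = {
    address = aws_db_instance.main.address
    port    = aws_db_instance.main.port
    name    = aws_db_instance.main.db_name
  }
}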

Local Values

  • Repeated expressions are extracted to locals blocks

  • Complex transformations are documented with comments

  • Local names are descriptive and consistently formatted

  • Locals are used for conditional logic that appears multiple times
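A small sketch of these practices, assuming var.project and var.environment are declared elsewhere:

hcl
locals {
  # Repeated expressions extracted once
  is_prod     = var.environment == "prod"
  name_prefix = "${var.project}-${var.environment}"

  # Conditional logic written once, reused by any resource that needs it
  instance_type  = local.is_prod ? "m5.large" : "t3.micro"
  retention_days = local.is_prod ? 90 : 7
}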

Module Design

  • Module variables define the minimum required interface

  • Module variables have sensible defaults for optional features

  • Module outputs expose the minimum necessary information

  • Module variables are validated at the module boundary

  • Module README includes all variables and outputs
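A well-designed interface shows up in how little a caller has to write. A hypothetical call to a module following the checklist above:

hcl
module "s3_bucket" {
  source = "./modules/s3-bucket"

  # Required interface — every optional feature takes its documented default
  bucket_name = "myapp-prod-assets"
  environment = "prod"
}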


🎓 Practice Exercises

Exercise 1: Variable Declaration and Validation

Task: Create a module for an S3 bucket with proper variable declarations.

Requirements:

  1. Bucket name (required)

  2. Environment (dev/staging/prod) with validation

  3. Enable versioning (optional, default false)

  4. Enable encryption (optional, default true)

  5. Tags (optional map, with default tags including Environment and ManagedBy)

  6. Lifecycle rules (optional object with expiration_days and noncurrent_version_expiration_days)

Solution:

hcl
# variables.tf
variable "bucket_name" {
  description = "Name of the S3 bucket (must be globally unique)"
  type        = string
}

variable "environment" {
  description = "Deployment environment"
  type        = string
  
  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "enable_versioning" {
  description = "Enable S3 bucket versioning"
  type        = bool
  default     = false
}

variable "enable_encryption" {
  description = "Enable default encryption with S3 managed keys"
  type        = bool
  default     = true
}

variable "tags" {
  description = "Tags to apply to the bucket"
  type        = map(string)
  default     = {}
}

variable "lifecycle_rules" {
  description = "Lifecycle configuration rules"
  type = object({
    expiration_days                      = optional(number)
    noncurrent_version_expiration_days   = optional(number)
  })
  default = {}
}

# locals.tf
locals {
  default_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    CreatedAt   = timestamp()  # note: changes on every run, causing perpetual tag diffs
  }
  
  merged_tags = merge(local.default_tags, var.tags)
}

# main.tf
resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
  
  tags = local.merged_tags
}

resource "aws_s3_bucket_versioning" "this" {
  count = var.enable_versioning ? 1 : 0
  
  bucket = aws_s3_bucket.this.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  count = var.enable_encryption ? 1 : 0
  
  bucket = aws_s3_bucket.this.id
  
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "this" {
  # length() on an object counts attributes, which is always 2 here even when
  # both are null — check the attribute values instead
  count = (var.lifecycle_rules.expiration_days != null || var.lifecycle_rules.noncurrent_version_expiration_days != null) ? 1 : 0
  
  bucket = aws_s3_bucket.this.id
  
  rule {
    id     = "default-lifecycle"
    status = "Enabled"
    
    dynamic "expiration" {
      for_each = var.lifecycle_rules.expiration_days != null ? [1] : []
      content {
        days = var.lifecycle_rules.expiration_days
      }
    }
    
    dynamic "noncurrent_version_expiration" {
      for_each = var.lifecycle_rules.noncurrent_version_expiration_days != null ? [1] : []
      content {
        noncurrent_days = var.lifecycle_rules.noncurrent_version_expiration_days
      }
    }
  }
}

# outputs.tf
output "bucket_id" {
  description = "ID of the created bucket"
  value       = aws_s3_bucket.this.id
}

output "bucket_arn" {
  description = "ARN of the created bucket"
  value       = aws_s3_bucket.this.arn
}

output "bucket_regional_domain_name" {
  description = "Regional domain name of the bucket"
  value       = aws_s3_bucket.this.bucket_regional_domain_name
}

Exercise 2: Variable Precedence

Task: Given the following variable declaration and multiple assignment methods, determine what value Terraform will use.

hcl
variable "instance_count" {
  description = "Number of instances"
  type        = number
  default     = 2
}

Assignment methods:

  1. terraform.tfvars contains: instance_count = 3

  2. dev.auto.tfvars contains: instance_count = 4

  3. Environment variable: TF_VAR_instance_count=5

  4. Command line: terraform apply -var="instance_count=6"

Question: What value will var.instance_count have?

Answer: 6. Terraform's precedence, from lowest to highest, is: environment variables (TF_VAR_*), the terraform.tfvars file, *.auto.tfvars files (in alphabetical order), then -var and -var-file command-line flags. The -var flag therefore wins; the default of 2 is used only when no other method sets a value.


Exercise 3: Complex Variable Transformation

Task: Transform a simple variable into a more useful structure using locals.

Input:

hcl
variable "subnet_config" {
  type = list(object({
    cidr_block = string
    type       = string  # "public" or "private"
    az_index   = number  # 0, 1, 2
  }))
}

Requirements:

  1. Group subnets by type (public/private)

  2. Add name tags based on type and index

  3. Ensure subnets are sorted by AZ index

Solution:

hcl
locals {
  # Assumes var.environment and var.availability_zones (list of AZ names)
  # are declared elsewhere in the configuration.
  # Add computed attributes to each subnet
  processed_subnets = [
    for idx, subnet in var.subnet_config : {
      id          = idx
      cidr_block  = subnet.cidr_block
      type        = subnet.type
      az          = var.availability_zones[subnet.az_index]
      name_tag    = "${var.environment}-${subnet.type}-${idx + 1}"
      az_index    = subnet.az_index
    }
  ]
  
  # Group by type
  public_subnets = [
    for subnet in local.processed_subnets : subnet
    if subnet.type == "public"
  ]
  
  private_subnets = [
    for subnet in local.processed_subnets : subnet
    if subnet.type == "private"
  ]
  
  # Sort the subnet objects by AZ index. sort() orders strings, so build
  # maps keyed by a zero-padded "az_index-id" string and walk the sorted keys.
  public_keyed  = { for s in local.public_subnets : format("%02d-%02d", s.az_index, s.id) => s }
  private_keyed = { for s in local.private_subnets : format("%02d-%02d", s.az_index, s.id) => s }

  public_subnets_sorted  = [for k in sort(keys(local.public_keyed)) : local.public_keyed[k]]
  private_subnets_sorted = [for k in sort(keys(local.private_keyed)) : local.private_keyed[k]]
}

# Usage
resource "aws_subnet" "public" {
  count = length(local.public_subnets)
  
  vpc_id            = aws_vpc.main.id
  cidr_block        = local.public_subnets[count.index].cidr_block
  availability_zone = local.public_subnets[count.index].az
  
  tags = {
    Name = local.public_subnets[count.index].name_tag
  }
}

📚 Summary: Variables Are the Interface

Variables and outputs define the contract between your configuration and its users.

                     Input Variables     Output Values        Local Values
Purpose              Accept user input   Expose information   Internal calculations
User-facing          Yes                 Yes                  No
Can have defaults    Yes                 Always computed      Always computed
Can be sensitive     Yes                 Yes                  No
Validation           Yes                 No                   No

The mark of well-designed Terraform is clear, well-documented variables and outputs. A user should be able to understand how to use a module just by reading its variables.tf and outputs.tf files.

Remember:

  • Variables without defaults are required

  • Variables with defaults are optional

  • Every variable needs a description

  • Every variable needs a type

  • Sensitive values need sensitive = true and a secrets management strategy

  • Outputs should expose the minimum necessary information

  • Locals should DRY up repeated expressions


🔗 Master Terraform Variables with Hands-on Labs

Theory is essential, but practice is where you build confidence. The best way to master Terraform variables is to use them in real scenarios.

👉 Practice variable declaration, validation, and complex data structures in our interactive labs at:
https://devops.trainwithsky.com/

Our platform provides:

  • Real-time variable validation exercises

  • Complex data structure challenges

  • Secret management scenarios

  • Multi-environment configuration labs

  • Module interface design workshops


Frequently Asked Questions

Q: Should I use map or object for configuration data?

A: Use object when the structure is fixed and you know all the keys in advance. Use map when keys are dynamic (user-provided) or when you're grouping resources by a tag.
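A sketch of both cases (the variable names are illustrative):

hcl
# Fixed, known-in-advance structure: use object
variable "db_config" {
  type = object({
    engine         = string
    engine_version = string
    instance_class = string
  })
}

# Caller-defined keys: use map
variable "extra_tags" {
  type    = map(string)
  default = {}
}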

Q: Can I reference other variables in a variable's default value?

A: No. Variable defaults cannot reference other variables, locals, or resource attributes. Use locals for derived values.
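For example, a "computed default" has to live in a local (the names here are illustrative):

hcl
variable "environment" {
  type = string
}

locals {
  # Variable defaults must be static, so derive the value in a local instead
  bucket_name = "myapp-${var.environment}-assets"
}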

Q: How do I handle optional attributes in object variables?

A: Use the optional() modifier (Terraform 1.3+):

hcl
variable "config" {
  type = object({
    required_attribute = string
    optional_attribute = optional(string, "default-value")
  })
}

Q: Why can't I use interpolation in variable default values?

A: Variable defaults are evaluated before any resources exist, before locals are evaluated, and before any expressions can be resolved. This is by design—variable defaults should be static.

Q: How do I debug variable values?

A: Use terraform console to evaluate expressions interactively, or use output blocks to display values during plan/apply.
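A quick interactive session looks like this (the expressions are illustrative and assume the variables and locals exist in your configuration):

text
$ terraform console
> var.instance_count
> local.merged_tags
> [for s in var.subnet_config : s.cidr_block]

terraform console evaluates expressions against your configuration and state without applying anything, which makes it ideal for testing for-expressions and functions.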

Q: Should I commit .tfvars files to Git?

A: Only commit .tfvars.example files with fake values. Never commit .tfvars files containing real secrets, even in private repositories.

Q: Can I use environment variables for complex types like lists and maps?

A: Yes! Terraform parses environment variables as HCL:

bash
export TF_VAR_instance_types='["t3.micro","t3.small"]'
export TF_VAR_tags='{Environment="dev",Project="webapp"}'

Still have questions about Terraform variables? Confused about when to use maps vs. objects? Struggling with validation? Post your question in the comments below—our community is here to help! 💬
