
Terraform Cloud and Terraform Enterprise: A Complete Guide


Your comprehensive guide to HashiCorp's commercial Terraform platforms—from getting started with Terraform Cloud to scaling Terraform Enterprise across thousands of users and hundreds of thousands of infrastructure operations.

📅 Published: Feb 2026
⏱️ Estimated Reading Time: 28 minutes
🏷️ Tags: Terraform Cloud, Terraform Enterprise, Remote Operations, Sentinel, Private Module Registry, Run Tasks, VCS Integration


☁️ Introduction: From Open Source to Commercial Platforms

The Problem Terraform Cloud Solves

Open-source Terraform is incredibly powerful—but it's a tool, not a platform. As your organization grows from one engineer to ten, to a hundred, to a thousand, the limitations of running Terraform locally become painfully apparent:

❌ State management becomes chaos. Who has the latest state? Who's running apply right now? What happens when two people run terraform apply simultaneously?

❌ Collaboration is fragmented. Code reviews happen in Git, but plans are run locally. Reviewers see the code, but not the actual infrastructure changes.

❌ History and audit trails are incomplete. Who changed what, when, and why? Git history shows config changes, but not who ran apply, or what the exact plan was.

❌ Policy enforcement is inconsistent. You can't prevent a developer from running terraform apply with insecure configuration—unless you take away their AWS credentials entirely.

❌ Variable management is a mess. .tfvars files proliferate. Secrets end up in version control. Every developer configures their environment differently.

❌ Cost tracking is nonexistent. Who's spinning up expensive resources? Which teams are driving your cloud bill?

This is why HashiCorp created Terraform Cloud and Terraform Enterprise. They transform Terraform from a command-line tool into a collaborative platform.


Terraform Cloud vs. Terraform Enterprise

| Feature | Terraform Cloud (Free) | Terraform Cloud (Team & Governance) | Terraform Enterprise |
|---|---|---|---|
| Hosting | HashiCorp SaaS | HashiCorp SaaS | Self-hosted (your infrastructure) |
| Price | Free | Paid per user | Annual contract |
| State management | ✅ Yes | ✅ Yes | ✅ Yes |
| Remote operations | ✅ Yes | ✅ Yes | ✅ Yes |
| VCS integration | ✅ Yes | ✅ Yes | ✅ Yes |
| Private module registry | ❌ No | ✅ Yes | ✅ Yes |
| Sentinel policies | ❌ No | ✅ Yes | ✅ Yes |
| Run tasks | ❌ No | ✅ Yes | ✅ Yes |
| Single sign-on | ❌ No | ✅ Yes | ✅ Yes |
| Audit logging | ❌ No | ✅ Yes | ✅ Yes |
| Clustering | ❌ N/A | ❌ N/A | ✅ Yes |
| Air-gapped environments | ❌ No | ❌ No | ✅ Yes |
| SAML/SCIM | ❌ No | ✅ Yes | ✅ Yes |

Key distinction: Terraform Cloud is HashiCorp's hosted service. Terraform Enterprise is the same software, but you run it on your own infrastructure. Choose based on your security, compliance, and networking requirements.


🚀 Getting Started with Terraform Cloud

Step 1: Create an Account and Organization

  1. Go to app.terraform.io

  2. Sign up with GitHub, GitLab, or email

  3. Create your first organization (this will be your team's namespace)

Naming matters: Your organization name appears in module sources and state paths. Choose something consistent with your company name.

text
Organization: hashicorp
Workspace: production-networking
Module source: app.terraform.io/hashicorp/vpc/aws

Step 2: Connect Your Version Control System

Terraform Cloud integrates directly with GitHub, GitLab, Bitbucket, Azure DevOps, and more.

Setup process:

  1. Navigate to Settings → Version Control

  2. Choose your provider

  3. Install the Terraform Cloud app (GitHub) or configure webhooks

  4. Select repositories to connect

Once connected, every pull request automatically runs terraform plan and posts the results as a comment.


Step 3: Create Your First Workspace

Workspaces in Terraform Cloud are NOT the same as CLI workspaces. They're completely independent configurations with their own state, variables, and run history.

bash
# CLI workspaces: Multiple states, one configuration
terraform workspace new dev

# Terraform Cloud workspaces: Independent configurations
# Each workspace has its own:
# - Terraform configuration (from VCS)
# - State file
# - Variable set
# - Run history
# - Access controls

Creating a workspace:

Option A: VCS-driven workflow

  1. Workspaces → New workspace

  2. Choose "Version control workflow"

  3. Select your repository

  4. Configure workspace settings

Option B: CLI-driven workflow

bash
# Login to Terraform Cloud
terraform login

# Configure the backend (this block belongs in a .tf file, e.g. backend.tf)
terraform {
  backend "remote" {
    organization = "my-org"
    
    workspaces {
      name = "production-network"
    }
  }
}

# Migrate local state to Terraform Cloud
terraform init -migrate-state

Step 4: Configure Variables and Sensitive Data

Terraform Cloud has two types of variables:

| Variable Type | Purpose |
|---|---|
| Terraform variable | Passed to Terraform as `var.foo` |
| Environment variable | Available in the shell, used for provider configuration |
| Terraform variable (HCL) | Complex structures: maps, lists, objects |

Managing secrets:

text
# In Terraform Cloud UI:
# Variable: db_password
# Value: ••••••••••••••••
# Sensitive: ✅ (never shown in UI, masked in logs)

Variable sets allow you to share variables across workspaces:

hcl
# AWS credentials shared across workspaces, managed with the official
# tfe provider (resource names and variables here are illustrative)
resource "tfe_variable_set" "aws_credentials" {
  name         = "aws-credentials"
  description  = "AWS access for all production workspaces"
  organization = "my-org"
}

resource "tfe_variable" "aws_access_key" {
  key             = "AWS_ACCESS_KEY_ID"
  value           = var.aws_access_key_id
  category        = "env"
  sensitive       = true   # never store real keys in plain text
  variable_set_id = tfe_variable_set.aws_credentials.id
}

resource "tfe_variable" "aws_secret_key" {
  key             = "AWS_SECRET_ACCESS_KEY"
  value           = var.aws_secret_access_key
  category        = "env"
  sensitive       = true
  variable_set_id = tfe_variable_set.aws_credentials.id
}

# Attach the set to each target workspace
resource "tfe_workspace_variable_set" "prod" {
  variable_set_id = tfe_variable_set.aws_credentials.id
  workspace_id    = tfe_workspace.production_network.id
}
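Variable set attachments ultimately resolve to concrete workspaces. If you script the attachment through the API and want prefix-style targeting such as production-*, you can expand the pattern in code first. A minimal sketch in Python (workspace names are illustrative):

```python
from fnmatch import fnmatch

def match_workspaces(names, patterns):
    """Return the workspace names matching any glob-style pattern."""
    return sorted({n for n in names for p in patterns if fnmatch(n, p)})

workspaces = ["production-network", "production-api", "staging-api", "sandbox"]
# Workspaces that should receive the shared AWS credentials
targets = match_workspaces(workspaces, ["production-*", "staging-*"])
# → ['production-api', 'production-network', 'staging-api']
```

Feed the resolved names into whatever API client performs the per-workspace attachment.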

🏢 Terraform Enterprise: Self-Hosted Deployment

When to Choose Terraform Enterprise

✅ Terraform Enterprise is right for you if:

  • You operate in air-gapped or restricted networks — No internet access to HashiCorp's SaaS

  • You have strict data residency requirements — State cannot leave your jurisdiction

  • You need custom identity providers — SAML, LDAP, Active Directory integration

  • You require enhanced security controls — FIPS compliance, customer-managed encryption keys

  • You're already a HashiCorp Vault customer — Native integration

❌ You probably don't need Terraform Enterprise if:

  • You're comfortable with HashiCorp's SaaS

  • Your compliance requirements are satisfied by SOC2/ISO certifications

  • You have fewer than 50 Terraform users

  • You don't have dedicated infrastructure/platform engineering team


Deployment Architecture

Terraform Enterprise can be deployed in three configurations:

| Deployment | Nodes | PostgreSQL | Object Storage | Redis | Use Case |
|---|---|---|---|---|---|
| Demo | 1 | Embedded | Embedded | Embedded | Evaluation, < 20 users |
| Production (single) | 1 | External | External | Embedded | < 100 users |
| Production (active/active) | 2+ | External | External | External | High availability, > 100 users |

Installation Methods

Method 1: Docker (Linux hosts)

bash
# Prerequisites: Docker, PostgreSQL, S3-compatible storage

cat > terraform-enterprise.conf << EOF
hostname=tfe.mycompany.com
production_type=docker
disk_path=/var/lib/tfe
EOF

Method 2: Kubernetes (Helm)

bash
# Add HashiCorp Helm repository
helm repo add hashicorp https://helm.releases.hashicorp.com

# Create values file
cat > tfe-values.yaml << EOF
settings:
  hostname: tfe.mycompany.com
  production_type: external
  
postgres:
  host: postgres.internal
  port: 5432
  database: tfe
  user: tfe
  sslmode: require
  
azure:
  account_name: tfestorageaccount
  account_key: "..."

redis:
  host: redis.internal
  port: 6379
EOF

# Install
helm install tfe hashicorp/terraform-enterprise -f tfe-values.yaml

Method 3: Replicated (Legacy, Linux hosts only)

bash
# SSH to your instance
curl -sSL -o install.sh https://install.terraform.io/ptfe/stable
sudo bash install.sh

Post-Installation Configuration

After installation, you must complete the setup wizard:

  1. Create admin user — First user, full system access

  2. Configure SSL certificate — Required for production

  3. Set up object storage — AWS S3, Azure Blob, GCS, or MinIO

  4. Integrate identity provider — SAML, LDAP, or built-in

  5. Configure email — SMTP for notifications

Validation checklist:

bash
# Check service status
sudo docker ps | grep tfe

# Verify health endpoint
curl https://tfe.mycompany.com/_health_check

# Test API access
curl https://tfe.mycompany.com/api/v2/ping

📚 Private Module Registry

Why You Need a Private Registry

Your organization has its own standards, patterns, and best practices. Public modules on the Terraform Registry don't know about your:

  • VPC CIDR conventions

  • Required tagging strategy

  • Approved instance families

  • Security baseline configurations

  • Compliance requirements

A private module registry allows you to codify these standards once, then share them across your entire organization.


Publishing Modules

Step 1: Structure your module repository

text
terraform-aws-vpc/
├── README.md
├── LICENSE
├── main.tf
├── variables.tf
├── outputs.tf
├── versions.tf
└── examples/
    ├── basic-vpc/
    │   └── main.tf
    └── vpc-with-nat/
        └── main.tf

Step 2: Tag a release

bash
git tag -a v1.0.0 -m "Initial release"
git push origin v1.0.0
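The registry only picks up tags that look like semantic versions (a leading v is allowed). A quick pre-push sanity check, sketched in Python:

```python
import re

# Accepts "1.0.0" or "v1.0.0"; rejects partial or prefixed tags
SEMVER = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)$")

def is_release_tag(tag):
    """True if the tag is a plain semantic version the registry will accept."""
    return SEMVER.match(tag) is not None

# → True for "v1.0.0", False for "v1.0" or "release-1"
```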

Step 3: Add to private registry

Via UI:

  1. Registry → Add module

  2. Select VCS provider

  3. Choose repository

  4. Select tags to include

Via API:

bash
curl --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data '{
    "data": {
      "type": "registry-modules",
      "attributes": {
        "vcs-repo": {
          "identifier": "github.com/org/terraform-aws-vpc",
          "oauth-token-id": "ot-1234567890"
        }
      }
    }
  }' \
  https://tfe.mycompany.com/api/v2/registry-modules

Consuming Private Modules

Now your team can use modules with simple, clean syntax:

hcl
module "vpc" {
  source  = "app.terraform.io/my-org/vpc/aws"
  version = "~> 1.2"
  
  name        = "production"
  environment = "prod"
  vpc_cidr    = "10.0.0.0/16"
}
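The ~> 1.2 constraint above is Terraform's pessimistic operator: it allows 1.2, 1.3, and so on, but never 2.0. Its logic can be sketched in Python (simplified to purely numeric major.minor[.patch] versions):

```python
def parse(v):
    """Split a dotted version string into an integer tuple."""
    return tuple(int(p) for p in v.split("."))

def pessimistic_match(version, constraint):
    """Simplified '~>' check: only the rightmost constraint component may grow."""
    want = parse(constraint)
    have = parse(version)
    # Everything left of the last constraint component must match exactly...
    if have[:len(want) - 1] != want[:-1]:
        return False
    # ...and the last component may only stay equal or increase
    return have[len(want) - 1] >= want[-1]

# "~> 1.2" accepts 1.2.0 and 1.9.4 but rejects 2.0.0
```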

Benefits:

  • ✅ Semantic versioning — Pin to exact versions or ranges

  • ✅ Dependency resolution — Terraform Cloud understands module dependencies

  • ✅ Documentation — READMEs rendered in the registry UI

  • ✅ Discoverability — All modules in one place, searchable


Module Testing and Promotion

Establish a module promotion pipeline:

Using Terraform Cloud run tasks for module validation:

yaml
# .github/workflows/module-test.yml
name: Module Tests

on:
  push:
    branches: [ main ]
    paths:
      - 'terraform-aws-*/**'

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Run Terraform tests
        run: |
          for dir in terraform-aws-*; do
            cd $dir
            terraform init -backend=false
            terraform validate
            tflint
            checkov -d .
            cd ..
          done
      
      - name: Publish to Registry
        if: startsWith(github.ref, 'refs/tags/v')
        run: |
          curl --header "Authorization: Bearer ${{ secrets.TFE_TOKEN }}" \
            --request POST \
            "https://app.terraform.io/api/v2/organizations/my-org/registry-modules"

🛡️ Sentinel: Policy as Code

What is Sentinel?

Sentinel is HashiCorp's embedded policy-as-code framework. It allows you to define "if-then" rules that enforce compliance, security, and operational policies before Terraform applies changes.

Think of Sentinel as a guardrail for your infrastructure:

text
Terraform Plan → Sentinel Evaluation → Allow/Deny → Apply

Sentinel Policy Structure

python
# policy/restrict-ec2-instance-types.sentinel
import "tfplan"
import "strings"

# Allowed instance families
allowed_families = [
  "t3",
  "m5",
  "c5",
  "r5",
]

# Get all EC2 instances
instances = filter tfplan.resource_changes as _, rc {
  rc.mode == "managed" and
  rc.type == "aws_instance"
}

# Check each instance
violations = []
for instances as name, rc {
  instance_type = rc.change.after.instance_type
  
  # Extract family (t3.micro -> t3) via the strings import
  family = strings.split(instance_type, ".")[0]
  
  if family not in allowed_families {
    append(violations, instance_type)
  }
}

# Main rule
main = rule {
  length(violations) == 0
}

# Enforcement levels: "advisory", "soft-mandatory", "hard-mandatory"

Enforcement levels:

  • advisory — Logs the failure but doesn't block the run

  • soft-mandatory — Blocks until an authorized user overrides with justification

  • hard-mandatory — Cannot be overridden; blocks the apply
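How a failed policy affects a run can be sketched as a small decision function (Python; names are illustrative, and the actual levels configured in policy sets are advisory, soft-mandatory, and hard-mandatory):

```python
def run_outcome(policy_passed, enforcement_level, override_granted=False):
    """What happens to a run after a single policy evaluation."""
    if policy_passed:
        return "proceed"
    if enforcement_level == "advisory":
        return "proceed-with-warning"
    if enforcement_level == "soft-mandatory":
        return "proceed" if override_granted else "blocked-until-override"
    return "blocked"  # hard-mandatory: no override possible

# A hard-mandatory failure always blocks the apply
```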


Common Sentinel Policies

1. Enforce required tags:

python
import "tfplan"

required_tags = [
  "Environment",
  "Owner",
  "CostCenter",
  "ManagedBy",
]

# Consider only resources that actually carry a tags attribute
resources_with_tags = filter tfplan.resource_changes as _, rc {
  (rc.change.after.tags else null) is not null
}

violations = []
for resources_with_tags as _, rc {
  tags = rc.change.after.tags
  for required_tags as rt {
    if rt not in keys(tags) {
      append(violations, {
        resource: rc.address,
        missing_tag: rt,
      })
    }
  }
}

main = rule { length(violations) == 0 }
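The same check is easy to prototype outside Sentinel against `terraform show -json` output before committing to a policy. A Python sketch over the resource_changes structure (the sample plan is illustrative):

```python
REQUIRED_TAGS = ["Environment", "Owner", "CostCenter", "ManagedBy"]

def missing_tags(resource_changes, required=REQUIRED_TAGS):
    """List (address, tag) pairs for required tags absent from a resource."""
    violations = []
    for rc in resource_changes:
        tags = (rc.get("change", {}).get("after") or {}).get("tags")
        if tags is None:
            continue  # resource type without tags, or a destroy
        for tag in required:
            if tag not in tags:
                violations.append((rc["address"], tag))
    return violations

plan = [{"address": "aws_instance.web",
         "change": {"after": {"tags": {"Environment": "prod", "Owner": "sre"}}}}]
# → [('aws_instance.web', 'CostCenter'), ('aws_instance.web', 'ManagedBy')]
```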

2. Restrict AWS regions:

python
import "tfplan"

allowed_regions = [
  "us-west-2",
  "us-east-1",
  "eu-west-1",
]

# Check provider configurations
providers = filter tfplan.provider_configs as _, pc {
  pc.alias is null
}

violations = []
for providers as _, pc {
  region = pc.config.region
  
  if region not in allowed_regions {
    append(violations, region)
  }
}

main = rule { length(violations) == 0 }

3. Prevent public S3 buckets:

python
import "tfplan"

# Find S3 buckets
buckets = filter tfplan.resource_changes as _, rc {
  rc.mode == "managed" and
  rc.type == "aws_s3_bucket"
}

violations = []
for buckets as _, rc {
  # Check if bucket has public ACLs
  acl = rc.change.after.acl
  
  if acl == "public-read" or acl == "public-read-write" {
    append(violations, rc.address)
  }
  
  # Check if bucket has public policy
  policy = rc.change.after.policy else ""
  
  if "\"Principal\":\"*\"" in policy {
    append(violations, rc.address)
  }
}

main = rule { length(violations) == 0 }

4. Cost control - prevent large instances:

python
import "tfplan"

# Define cost thresholds (approximate hourly)
cost_threshold = {
  "aws_instance": 0.50,        # $0.50/hour
  "aws_db_instance": 0.75,     # $0.75/hour
  "aws_elasticache_cluster": 0.40, # $0.40/hour
}

# Instance type to approximate cost mapping
instance_costs = {
  "t3.micro": 0.0104,
  "t3.small": 0.0208,
  "t3.medium": 0.0416,
  "t3.large": 0.0832,
  "m5.large": 0.096,
  "m5.xlarge": 0.192,
  "r5.large": 0.126,
  "c5.large": 0.085,
}

high_cost_resources = []
for tfplan.resource_changes as _, rc {
  if rc.type in keys(cost_threshold) {
    instance_type = rc.change.after.instance_type
    
    if instance_type in keys(instance_costs) {
      cost = instance_costs[instance_type]
      
      if cost > cost_threshold[rc.type] {
        append(high_cost_resources, {
          address: rc.address,
          type: rc.type,
          instance_type: instance_type,
          hourly_cost: cost,
        })
      }
    }
  }
}

main = rule { length(high_cost_resources) == 0 }

# To allow overrides with justification ("High-cost resources require
# VP approval"), attach this policy with a "soft-mandatory"
# enforcement level in its policy set.
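The policy's lookup logic is plain dictionary arithmetic, which makes it easy to sanity-check with real prices before wiring it into Sentinel. A Python sketch using the thresholds from the policy above (the r5.4xlarge price is an approximate on-demand figure added here for illustration):

```python
COST_THRESHOLD = {"aws_instance": 0.50, "aws_db_instance": 0.75}
INSTANCE_COSTS = {"t3.micro": 0.0104, "m5.xlarge": 0.192, "r5.4xlarge": 1.008}

def high_cost(resource_type, instance_type):
    """True when a known instance type exceeds its resource-type threshold."""
    threshold = COST_THRESHOLD.get(resource_type)
    cost = INSTANCE_COSTS.get(instance_type)
    if threshold is None or cost is None:
        return False  # unknown type: let other policies handle it
    return cost > threshold
```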

Policy Sets and Enforcement Levels

Organize policies into sets:

hcl
# policy-set/security.sentinel.hcl
policy "restrict-ec2-instance-types" {
  source = "./policies/restrict-ec2-instance-types.sentinel"
  enforcement_level = "hard-mandatory"
}

policy "enforce-required-tags" {
  source = "./policies/enforce-required-tags.sentinel"
  enforcement_level = "soft-mandatory"
}

policy "prevent-public-s3-buckets" {
  source = "./policies/prevent-public-s3-buckets.sentinel"
  enforcement_level = "hard-mandatory"
}

Apply policy sets to workspaces:

bash
# Via UI: Workspace → Settings → Policy Sets
# Via API:
curl --header "Authorization: Bearer $TOKEN" \
  --request POST \
  https://app.terraform.io/api/v2/policy-sets \
  --data @policy-set.json

🔌 Run Tasks: Third-Party Integrations

What Are Run Tasks?

Run tasks allow you to integrate third-party tools directly into the Terraform Cloud workflow. Instead of running scanners separately in CI, you can attach them natively to the plan and apply stages of every run.


Available Run Task Integrations

| Tool | Purpose | Integration Method |
|---|---|---|
| Checkov | Security scanning | Native |
| Infracost | Cost estimation | Native |
| Snyk | Vulnerability scanning | Native |
| Aqua | Container security | Native |
| Wiz | Cloud security | Native |
| Custom | Any HTTP endpoint | Webhook |

Configuring a Run Task

Example: Infracost cost estimation

bash
# 1. Get Infracost API key
# https://www.infracost.io/docs/terraform_cloud/

# 2. Register run task in Terraform Cloud
curl --header "Authorization: Bearer $TFE_TOKEN" \
  --request POST \
  https://app.terraform.io/api/v2/organizations/$ORG_NAME/tasks \
  --data '{
    "data": {
      "type": "tasks",
      "attributes": {
        "name": "infracost",
        "url": "https://api.infracost.io/terraform_cloud",
        "hmac-key": "'$INFRACOST_API_KEY'",
        "description": "Cloud cost estimation",
        "category": "task"
      }
    }
  }'

# 3. Attach to workspace
curl --header "Authorization: Bearer $TFE_TOKEN" \
  --request POST \
  https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/relationships/tasks \
  --data '{
    "data": [
      {
        "type": "tasks",
        "id": "'$TASK_ID'"
      }
    ]
  }'

Custom Run Tasks

You can create your own run tasks using webhooks:

python
# run-task-server.py
from flask import Flask, request, jsonify
import hmac
import hashlib

app = Flask(__name__)

HMAC_KEY = "your-secret-key"

def scan_terraform_plan(plan_json):
    """Placeholder for your organization-specific checks; return a list of violations."""
    return []

@app.route('/run-task', methods=['POST'])
def run_task():
    # Verify the HMAC signature Terraform Cloud sends with each request
    signature = request.headers.get('X-TFE-HMAC-Signature')
    payload = request.get_data()
    expected = hmac.new(
        HMAC_KEY.encode('utf-8'),
        payload,
        hashlib.sha256
    ).hexdigest()

    if signature is None or not hmac.compare_digest(signature, expected):
        return jsonify({"error": "Invalid signature"}), 401

    # Parse payload (simplified: a real run task payload provides a
    # plan_json_api_url to fetch the plan from and a task_result_callback_url
    # to PATCH the verdict back to asynchronously)
    data = request.json
    stage = data.get('stage')  # pre_plan, post_plan, pre_apply

    # Your custom logic here
    violations = scan_terraform_plan(data.get('plan_json', {}))
    
    if violations:
        return jsonify({
            "status": "failed",
            "message": f"Found {len(violations)} violations",
            "data": {
                "violations": violations
            }
        }), 200
    else:
        return jsonify({
            "status": "passed",
            "message": "All checks passed"
        }), 200

if __name__ == '__main__':
    app.run(port=5000)
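To exercise an endpoint like this the way Terraform Cloud would, sign the request body with the shared HMAC key and send the hex digest in the X-TFE-HMAC-Signature header. A minimal signing sketch (key and payload are illustrative):

```python
import hmac
import hashlib
import json

def sign(payload_bytes, key):
    """Hex HMAC-SHA256 signature for a run task payload."""
    return hmac.new(key.encode("utf-8"), payload_bytes, hashlib.sha256).hexdigest()

body = json.dumps({"stage": "post_plan", "plan_json": {}}).encode("utf-8")
signature = sign(body, "your-secret-key")
# Send with: requests.post(url, data=body,
#                          headers={"X-TFE-HMAC-Signature": signature})
```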

📊 Cost Estimation and Management

Infracost Integration

Infracost shows you the cost impact of infrastructure changes BEFORE you apply them.

bash
# terraform plan output in PR comments

Cost breakdown by resource:

text
Project: my-org/production-network

~ aws_instance.web[0] (t3.medium → t3.large)
  +$0.0416/hr (+$29.95/mo)

+ aws_db_instance.main
  $0.115/hr ($82.80/mo)

Monthly cost change: +$112.75
Total monthly cost: $1,245.32
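The per-month figures above are just the hourly delta multiplied by an hours-per-month convention (720 in this output). Sketched in Python:

```python
HOURS_PER_MONTH = 720  # 30-day convention; some tools use 730

def monthly(hourly_rate):
    """Approximate monthly cost in dollars from an hourly rate."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

# A +$0.0416/hr change works out to +$29.95/mo;
# a new resource at $0.115/hr adds $82.80/mo
```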

Setting budget alerts:

hcl
# budget-policy.sentinel
# Sketch only: the real Sentinel http import builds requests with
# http.request(url).with_header(...), and there is no tfplan.json()
# helper, so adapt the call below before using it.
import "tfplan"
import "http"

# Call Infracost API to get cost estimate
cost_estimate = http.get(
  "https://api.infracost.io/v1/estimate",
  {
    headers: {
      "Authorization": "Bearer ${var.infracost_api_key}"
    },
    body: {
      plan_json: tfplan.json()
    }
  }
)

monthly_cost = cost_estimate.body.total_monthly_cost

budget_limit = 5000  # $5,000/month

main = rule { monthly_cost <= budget_limit }

👥 Team and Governance

Organization Structure

Design your Terraform Cloud organization for scale:

text
Organization: my-company
│
├── Teams
│   ├── Platform-Engineering (owners)
│   ├── Networking-Team (manage: networking-*)
│   ├── Security-Team (manage: sentinel, read: all)
│   └── Application-Teams (manage: app-*, plan: all)
│
├── Workspaces
│   ├── global-*
│   ├── networking-*
│   ├── security-*
│   ├── shared-*
│   └── app-*
│
├── Variable Sets
│   ├── aws-credentials-prod
│   ├── aws-credentials-dev
│   ├── common-tags
│   └── sentinel-config
│
├── Policy Sets
│   ├── security-baseline
│   ├── cost-control
│   └── tagging-standards
│
└── Registry Modules
    ├── vpc/aws
    ├── eks/aws
    └── rds-postgres/aws

SSO and SAML Integration

Configure SAML 2.0 for single sign-on:

xml
<!-- Service Provider Metadata -->
<EntityDescriptor entityID="https://app.terraform.io/saml/metadata/org-1234">
  <SPSSODescriptor>
    <AssertionConsumerService 
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
      Location="https://app.terraform.io/saml/consume/org-1234"/>
    <NameIDFormat>
      urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
    </NameIDFormat>
  </SPSSODescriptor>
</EntityDescriptor>

Team sync from SAML:

hcl
# Users are automatically added to or removed from Terraform Cloud teams
# based on their SAML group membership. The mapping is configured in the
# organization's SSO settings; shown here as pseudo-configuration:

saml_team_mapping {
  "github-admins" = "Platform-Engineering"
  "github-networking" = "Networking-Team"
  "github-security" = "Security-Team"
  "github-developers" = "Application-Teams"
}
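Team sync behaves like a simple lookup from IdP groups to platform teams, which you can prototype while planning your mapping (group and team names are illustrative):

```python
TEAM_MAP = {
    "github-admins": "Platform-Engineering",
    "github-networking": "Networking-Team",
    "github-security": "Security-Team",
    "github-developers": "Application-Teams",
}

def teams_for(saml_groups, mapping=TEAM_MAP):
    """Terraform teams a user belongs to, given the groups in their SAML assertion."""
    return sorted({mapping[g] for g in saml_groups if g in mapping})

# Unmapped groups are simply ignored
```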

Audit Logging

Terraform Enterprise provides detailed audit logs for compliance:

json
{
  "timestamp": "2026-02-15T14:30:45Z",
  "event_type": "workspace:apply",
  "actor": {
    "id": "user-123",
    "username": "john.doe@example.com",
    "saml": "okta:engineering"
  },
  "resource": {
    "type": "workspace",
    "id": "ws-456",
    "name": "production-api"
  },
  "organization": "my-company",
  "context": {
    "run_id": "run-789",
    "plan_id": "plan-012",
    "vcs_commit": "a1b2c3d4e5f6..."
  },
  "metadata": {
    "changes": {
      "add": 3,
      "change": 5,
      "delete": 0
    },
    "duration": "2m34s"
  }
}

Forward audit logs to SIEM:

bash
# Example: Splunk HTTP Event Collector
curl -k https://splunk.company.com:8088/services/collector \
  -H "Authorization: Splunk $SPLUNK_TOKEN" \
  -d @audit-log.json
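Splunk's HTTP Event Collector expects each log entry wrapped in an envelope with an event field (plus optional metadata such as sourcetype). A small formatter, sketched in Python (the sourcetype value is illustrative):

```python
import json

def to_hec(audit_event, sourcetype="tfe:audit"):
    """Wrap a TFE audit log entry in a Splunk HEC envelope."""
    return json.dumps({
        "sourcetype": sourcetype,
        "event": audit_event,
    })

payload = to_hec({"event_type": "workspace:apply", "organization": "my-company"})
# POST this string to https://splunk.company.com:8088/services/collector
```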

🔄 Migration Strategies

From Open Source to Terraform Cloud

Phase 1: Pilot

  1. Select 1-2 non-critical workspaces

  2. Configure remote backend

  3. Test CLI-driven workflow

  4. Validate no regressions

bash
# Add backend configuration
terraform {
  backend "remote" {
    hostname = "app.terraform.io"
    organization = "my-org"
    
    workspaces {
      name = "pilot-network"
    }
  }
}

# Migrate state
terraform init -migrate-state

Phase 2: VCS Integration

  1. Connect repository

  2. Configure VCS-driven workspace

  3. Set up branch-based workflows

  4. Add team members

Phase 3: Policy and Governance

  1. Create Sentinel policy sets

  2. Configure run tasks

  3. Set up cost estimation

  4. Enable audit logging

Phase 4: Scale

  1. Migrate all workspaces

  2. Train teams

  3. Establish module registry

  4. Implement promotion workflows


From Terraform Cloud to Terraform Enterprise

Migration options:

Option A: API-based migration (recommended)

python
# migrate.py
import requests

TFC_TOKEN = "..."
TFE_TOKEN = "..."
TFC_ORG = "my-org"
TFE_ORG = "my-company"
TFE_HOST = "tfe.mycompany.com"

# List all workspaces in TFC
response = requests.get(
    f"https://app.terraform.io/api/v2/organizations/{TFC_ORG}/workspaces",
    headers={"Authorization": f"Bearer {TFC_TOKEN}"}
)

for workspace in response.json()["data"]:
    name = workspace["attributes"]["name"]
    
    # Create workspace in TFE
    payload = {
        "data": {
            "type": "workspaces",
            "attributes": {
                "name": name,
                "terraform_version": workspace["attributes"]["terraform-version"]
            }
        }
    }
    
    tfe_response = requests.post(
        f"https://{TFE_HOST}/api/v2/organizations/{TFE_ORG}/workspaces",
        headers={"Authorization": f"Bearer {TFE_TOKEN}"},
        json=payload
    )
    
    # Download the latest state from TFC
    state_url = workspace["relationships"]["current-state-version"]["links"]["related"]
    state_response = requests.get(
        f"https://app.terraform.io{state_url}",
        headers={"Authorization": f"Bearer {TFC_TOKEN}"}
    )

    # Upload state to TFE (sketch: the real state-versions API also requires
    # serial, lineage, and an md5 of the raw state, with the state itself
    # base64-encoded and the workspace locked during upload)
    requests.post(
        f"https://{TFE_HOST}/api/v2/workspaces/{tfe_response.json()['data']['id']}/state-versions",
        headers={"Authorization": f"Bearer {TFE_TOKEN}"},
        json={"data": {"type": "state-versions", "attributes": {"state": state_response.text}}}
    )

Option B: Terraform-based migration

hcl
# migration.tf
provider "tfe" {
  hostname = "app.terraform.io"
  token    = var.tfc_token
}

provider "tfe" {
  alias    = "tfe"
  hostname = var.tfe_hostname
  token    = var.tfe_token
}

# Read workspaces from TFC
data "tfe_workspace_ids" "all" {
  provider      = tfe
  organization  = var.tfc_org
  names         = ["*"]
}

# Create workspaces in TFE
resource "tfe_workspace" "migrated" {
  provider = tfe.tfe
  for_each = data.tfe_workspace_ids.all.ids
  
  organization = var.tfe_org
  name         = each.key
  # ... other settings
}

# Migrate state via API
# (requires null_resource with local-exec)

📋 Terraform Cloud/Enterprise Best Practices

Workspace Design

  • ✅ One workspace per environment per component — prod-networking and dev-networking, not a single workspace with CLI workspaces

  • ✅ Use VCS-driven workflow — Never manually upload configuration files

  • ✅ Enable automatic speculative plans — PR comments are essential for collaboration

  • ✅ Set workspace tags — For filtering and variable set application

  • ✅ Use remote state exclusively — No local fallback

Variable Management

  • ✅ Use variable sets — For credentials, common tags, shared configuration

  • ✅ Mark all secrets as sensitive — Never expose in UI or logs

  • ✅ Separate configuration by environment — Different variable sets for dev/staging/prod

  • ✅ Use HCL variables for complex structures — Maps, lists, objects

  • ✅ Version variable sets — Changes can be previewed before applying

Policy and Governance

  • ✅ Start with advisory policies — Measure impact before enforcing

  • ✅ Layer policies — Security (hard), cost (soft), compliance (advisory)

  • ✅ Test policies in development workspaces — Never test new policies in production

  • ✅ Use policy sets for grouping — Security policies, cost policies, compliance policies

  • ✅ Document override processes — Who can approve, how to document

Module Registry

  • ✅ Establish versioning strategy — Semantic versioning, major/minor/patch

  • ✅ Require README files — No module is complete without documentation

  • ✅ Test modules before publishing — Integration tests, security scans

  • ✅ Use module tags — For discoverability

  • ✅ Deprecate modules gracefully — Mark as deprecated, provide migration path

Run Tasks

  • ✅ Integrate security scanning — Checkov, Snyk, Aqua

  • ✅ Add cost estimation — Infracost, AWS Cost Explorer

  • ✅ Create custom run tasks — For organization-specific checks

  • ✅ Fail early — Pre-plan tasks before expensive API calls

  • ✅ Monitor task performance — Slow tasks delay deployments


🧪 Practice Exercises

Exercise 1: Migrate a Local Workspace to Terraform Cloud

Task: Take an existing Terraform configuration using local state and migrate it to Terraform Cloud.

Solution:

bash
# 1. Create workspace in Terraform Cloud UI

# 2. Add backend configuration
cat >> backend.tf << 'EOF'
terraform {
  backend "remote" {
    organization = "my-org"
    
    workspaces {
      name = "migration-demo"
    }
  }
}
EOF

# 3. Login to Terraform Cloud
terraform login

# 4. Migrate state
terraform init -migrate-state

# 5. Verify
terraform plan

Exercise 2: Create a Sentinel Policy

Task: Create a policy that prevents EC2 instances from being launched in the us-east-1 region.

Solution:

python
# restrict-us-east-1.sentinel
import "tfplan"

blocked_region = "us-east-1"

# Find EC2 instances placed in an availability zone of the blocked region
violations = filter tfplan.resource_changes as _, rc {
  rc.mode is "managed" and
  rc.type is "aws_instance" and
  (rc.change.after.availability_zone else "") matches "^" + blocked_region
}

main = rule {
  length(violations) == 0
}

# Enforcement level: hard-mandatory

Exercise 3: Publish a Module to Private Registry

Task: Take an existing local module, add proper documentation, tag it, and publish to Terraform Cloud private registry.

Solution:

bash
# 1. Structure your module
mkdir -p terraform-aws-demo-bucket
cd terraform-aws-demo-bucket

cat > README.md << 'EOF'
# Terraform AWS Demo Bucket Module

This module creates an S3 bucket with versioning and encryption.

## Usage

```hcl
module "bucket" {
  source  = "app.terraform.io/my-org/demo-bucket/aws"
  version = "1.0.0"

  bucket_name = "my-demo-bucket"
  environment = "dev"
}
```

## Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.5 |
| aws | ~> 5.0 |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| bucket_name | S3 bucket name | string | n/a | yes |
| environment | Environment name | string | n/a | yes |

## Outputs

| Name | Description |
|------|-------------|
| bucket_id | Bucket ID |
| bucket_arn | Bucket ARN |
EOF

# 2. Tag release

git add .
git commit -m "feat: initial release"
git tag -a v1.0.0 -m "Initial release"
git push origin main --tags

# 3. Publish via the UI (Registry → Add module) or the registry-modules API
---

## 📚 Summary: Terraform Cloud/Enterprise Is Not Optional at Scale

**Open-source Terraform is a tool. Terraform Cloud/Enterprise is a platform.**

| | Open Source | Terraform Cloud | Terraform Enterprise |
|--|-------------|-----------------|---------------------|
| **State management** | Manual | ✅ Automatic | ✅ Automatic |
| **Collaboration** | Git + manual | ✅ Native | ✅ Native |
| **Policy enforcement** | ❌ None | ✅ Sentinel | ✅ Sentinel |
| **Private modules** | Git submodules | ✅ Registry | ✅ Registry |
| **Cost estimation** | ❌ None | ✅ Infracost | ✅ Infracost |
| **Audit logging** | ❌ None | ✅ Paid tiers | ✅ Enterprise |
| **Air-gapped support** | ✅ Yes | ❌ No | ✅ Yes |

**The progression is clear:**
1. Start with open source for learning and small projects
2. Move to Terraform Cloud (free) for team collaboration
3. Upgrade to Terraform Cloud (Team & Governance) for policy and modules
4. Deploy Terraform Enterprise for air-gapped environments and enhanced compliance

**Every organization that uses Terraform seriously eventually adopts one of these platforms.** The question isn't whether you'll migrate—it's when.

---

## 🔗 Master Terraform Cloud/Enterprise with Hands-on Labs

**The best way to learn Terraform Cloud is to use it—and the best way to learn Terraform Enterprise is to deploy it.**

**👉 Practice workspace management, policy as code, and module registry workflows in our interactive labs at:**
**https://devops.trainwithsky.com/**

Our platform provides:
- Real Terraform Cloud workspaces to manage
- Sentinel policy writing exercises
- Private module registry publishing
- Run task integration scenarios
- Migration simulations (OSS → TFC → TFE)
- Multi-team governance challenges

---

### Frequently Asked Questions

**Q: Is Terraform Cloud free?**

**A:** Yes, Terraform Cloud has a generous free tier that includes remote state management, remote operations, and VCS integration for up to 5 users. Paid tiers add Sentinel policies, private module registry, and team management.

**Q: Can I use Terraform Cloud with multiple cloud providers?**

**A:** Absolutely. Terraform Cloud works with all Terraform providers—AWS, Azure, GCP, Kubernetes, GitHub, etc. Variable sets make it easy to manage credentials for multiple providers across workspaces.

**Q: How does Terraform Cloud handle state locking?**

**A:** Terraform Cloud provides automatic state locking. When a run is in progress, no other run can modify that workspace's state. This prevents corruption from concurrent applies.

**Q: What's the difference between CLI-driven and VCS-driven workspaces?**

**A:** 
- **CLI-driven**: Run `terraform apply` from your local machine. Good for development, testing, and existing workflows.
- **VCS-driven**: Runs triggered by Git pushes. Good for production, automation, and team workflows.

**Q: Can I use existing Terraform state files with Terraform Cloud?**

**A:** Yes. `terraform init -migrate-state` will copy your local state file to Terraform Cloud. You can also manually upload state files via the UI or API.

**Q: How do I handle secrets in Terraform Cloud?**

**A:** Use sensitive variables. Mark them as "sensitive" and they will be masked in logs and never displayed in the UI. Better yet, integrate with Vault or AWS Secrets Manager using run tasks or data sources.
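Note that there are two complementary layers here: the "sensitive" flag on a workspace variable (set in the UI or API) hides the value in Terraform Cloud itself, while HCL's `sensitive` argument redacts the value in plan and apply output:

```hcl
# HCL-level sensitivity: the value is redacted in plan/apply output.
variable "db_password" {
  type      = string
  sensitive = true
}
```

For defense in depth, mark the variable sensitive in both places, and remember that sensitive values can still end up in state, which is another reason Terraform Cloud's encrypted, access-controlled state storage matters.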

**Q: What happens if Terraform Cloud goes down?**

**A:** Terraform Cloud is a highly available SaaS with a 99.95% uptime SLA. If it's unavailable, you cannot run remote operations, but your existing infrastructure continues to function unaffected. As a contingency, you can keep a documented procedure for temporarily migrating state to a local or alternative backend.

**Q: Can I self-host Terraform Enterprise behind a firewall?**

**A:** Yes. This is the primary use case for Terraform Enterprise. It can be deployed in air-gapped environments with no internet access.

---

*Have questions about migrating to Terraform Cloud? Struggling with Sentinel policy syntax? Wondering if Terraform Enterprise is right for your organization? Share your challenges in the comments below—our community includes practitioners from organizations of all sizes!* 💬
