Maintained by Gruntwork.io

Example infrastructure-live for Terragrunt (with Stacks)

This repository, along with the terragrunt-infrastructure-catalog-example repository, offers a best practice system for organizing your Infrastructure as Code (IaC) so that you can maintain your IaC at any scale with confidence using an infrastructure-live repository.

If you have not already done so, you are encouraged to read the Terragrunt Getting Started Guide to get familiar with the terminology and concepts used in this repository before proceeding.

What is an infrastructure-live repository?

An infrastructure-live repository is a Gruntwork best practice for managing your "live" infrastructure. That is, the infrastructure that is actually provisioned, as opposed to infrastructure patterns that can be provisioned.

Key Features & Benefits

This repository provides a practical blueprint for managing infrastructure with Terragrunt, demonstrating:

  • Modern Terragrunt Workflow: Leverages Terragrunt Stacks for clear dependency management and streamlined multi-component deployments.
  • Scalable infrastructure-live Structure: Organizes infrastructure logically by account and region, providing a proven foundation adaptable to growing complexity.
  • Best-Practice Separation: Clearly separates environment-specific "live" configurations (this repo) from reusable infrastructure patterns (via an infrastructure-catalog).
  • DRY Configuration: Reduces code duplication using hierarchical configuration files (root.hcl, account.hcl, region.hcl).
  • Concrete End-to-End Example: Deploys a sample stateful serverless application (Lambda, DynamoDB) across distinct production and non-production environments.
  • Reproducible Tooling: Includes mise configuration for easy installation of pinned versions of Terragrunt and OpenTofu/Terraform.

Getting Started

Tip

If you have an existing repository that was started using the terragrunt-infrastructure-live-example repository as a starting point, follow the migration guide for help in adjusting your existing configurations to take advantage of the patterns outlined in this repository.

To use this repository, first fork it into your own Git organization.

The steps for doing this are the following:

  1. Create a new Git repository in your organization (e.g. GitHub, GitLab).

  2. Create a bare clone of this repository somewhere on your local machine.

    git clone --bare https://github.com/gruntwork-io/terragrunt-infrastructure-live-stacks-example.git
  3. Push the bare clone to your new Git repository.

    cd terragrunt-infrastructure-live-stacks-example.git
    git push --mirror <YOUR_GIT_REPO_URL> # e.g. git push --mirror [email protected]:acme/terragrunt-infrastructure-live-stacks-example.git
  4. Remove the local clone of the repository.

    cd ..
    rm -rf terragrunt-infrastructure-live-stacks-example.git
  5. (Optional) Delete the contents of this usage documentation from your fork of this repository.

Prerequisites

To use this repository, make sure you have Terragrunt and OpenTofu (or Terraform) installed.

To simplify the process of installing these tools, you can install mise, then run the following to concurrently install all the tools you need, pinned to the versions they were tested with (as tracked in the mise.toml file):

mise install
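For reference, a mise.toml for this setup might look roughly like the following. The tool names match what mise calls Terragrunt and OpenTofu, but the version numbers below are placeholders, not the versions actually pinned in this repository:

```toml
# mise.toml (illustrative sketch; see the real file for the tested versions)
[tools]
terragrunt = "0.77.0"  # placeholder version
opentofu   = "1.9.0"   # placeholder version
```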

Repository Contents

Note

This code is solely for demonstration purposes. This is not production-ready code, so use at your own risk. If you are interested in battle-tested, production-ready Terragrunt and OpenTofu/Terraform code, continuously updated and maintained by a team of subject matter experts, consider purchasing a subscription to the Gruntwork IaC Library.

This repository contains the following:

  • root.hcl: The root Terragrunt configuration inherited by all other Terragrunt units in this repository. This file contains code that reads the account.hcl and region.hcl files for every unit, and leverages them for provider and backend configurations.

  • non-prod/prod directories: Each of these directories represents an AWS account, and all the infrastructure provisioned for an account can be found in its respective directory.

  • account.hcl files: In each account directory, there is an account.hcl file that defines the common configurations for that account.

  • region.hcl files: In each region directory (e.g. us-east-1), there is a region.hcl file that defines the common configurations for that region.

  • terragrunt.stack.hcl files: These files define a stack of Terragrunt units.

    Both the terragrunt.stack.hcl files in this repository provision the units required for a stateful Lambda service, including:

    • AWS Lambda Function
    • DynamoDB Table
    • IAM Role

    The configurations for these resources aren't defined in this repository, but are instead defined in the terragrunt-infrastructure-catalog-example repository.

    This is a recommended, Gruntwork best practice, as it allows infrastructure teams to iterate on infrastructure patterns as versioned, immutable artifacts, and then reference pinned versions of these patterns in their "live" infrastructure-live repositories.
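To make the account.hcl and region.hcl files described above concrete, here is a rough sketch of what they might contain; the exact local names are assumptions inferred from the root.hcl snippet later in this document:

```hcl
# non-prod/account.hcl -- common configuration for the non-prod account (sketch)
locals {
  account_name   = "non-prod"
  aws_account_id = get_env("EX_NON_PROD_ACCOUNT_ID", "")
}

# non-prod/us-east-1/region.hcl -- common configuration for the region (sketch)
locals {
  aws_region = "us-east-1"
}
```

root.hcl can then read these with functions like read_terragrunt_config() and use the locals to configure the provider and backend for every unit beneath them.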

Best practices for an infrastructure-live repository

The sections below describe Gruntwork's recommended best practices for organizing and operating an infrastructure-live repository.

How to provision the infrastructure in this repository

Setup

Before you start provisioning the infrastructure in this repository, you'll want to do the following:

  1. Update the bucket attribute of the remote_state block in the root.hcl file to a unique name.

    remote_state {
      backend = "s3"
      config = {
        encrypt        = true
        # vvvvv Replace this vvvvvv
        bucket         = "${get_env("EX_BUCKET_PREFIX", "")}terragrunt-example-tf-state-${local.account_name}-${local.aws_region}"
        # ^^^^^ Replace this ^^^^^^
        key            = "${path_relative_to_include()}/tf.tfstate"
        region         = local.aws_region
        use_lockfile   = true
      }
      generate = {
        path      = "backend.tf"
        if_exists = "overwrite_terragrunt"
      }
    }

    Alternatively, you can set the EX_BUCKET_PREFIX environment variable to set a custom prefix. S3 bucket names must be globally unique across all AWS customers, so you'll have to make sure that the value you choose doesn't conflict with any existing bucket names.

  2. Set the EX_NON_PROD_ACCOUNT_ID and EX_PROD_ACCOUNT_ID environment variables to the AWS account IDs you want to use for non-production and production workloads, respectively.

    export EX_NON_PROD_ACCOUNT_ID="123456789012"
    export EX_PROD_ACCOUNT_ID="210987654321"

    Alternatively, you can replace the get_env(...) calls in non-prod/account.hcl and prod/account.hcl with hardcoded account ID strings.

    [!TIP]

    If you want everything deployed in a single AWS account, you can set both environment variables to the same value (or use the same hardcoded string in both files).

  3. Configure your local AWS credentials using one of the supported authentication mechanisms.
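To sanity-check the bucket name from step 1, you can assemble it in the shell the same way root.hcl interpolates it (the prefix, account, and region values below are just examples):

```shell
# Compose the state bucket name the same way the remote_state block does
EX_BUCKET_PREFIX="acme-"   # optional custom prefix; may be empty
account_name="non-prod"    # normally read from account.hcl
aws_region="us-east-1"     # normally read from region.hcl

bucket="${EX_BUCKET_PREFIX}terragrunt-example-tf-state-${account_name}-${aws_region}"
echo "$bucket"  # -> acme-terragrunt-example-tf-state-non-prod-us-east-1
```

Remember that this composed name must be globally unique across all AWS customers.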

Provisioning a single stack

  1. Navigate to the directory of the stack you want to provision. e.g.

    cd non-prod/us-east-1/stateful-lambda-service
  2. Run the following to generate the relevant units for the stack, and run a plan against them.

    terragrunt run --all --non-interactive --backend-bootstrap plan

    [!TIP]

    The --backend-bootstrap flag there allows Terragrunt to automatically create relevant backend resources for you before running OpenTofu.

    You don't need to use the flag if you are using backend resources that already exist, or don't want Terragrunt to create them for you.

  3. If the plan looks good, run the following to apply the changes.

    terragrunt run --all --non-interactive apply

Provisioning all stacks

If you want to provision all the stacks in this repository, you can do the same at the root of the repository.

  1. Navigate back up to the root of the repository.

    cd ../../..
  2. Plan all units.

    terragrunt run --all --non-interactive plan
  3. Apply all units.

    terragrunt run --all --non-interactive apply

Interacting with the provisioned infrastructure

If you'd like to interact with the infrastructure that was just provisioned, you can do the following:

  1. Get the output values for the stack that you just provisioned.

    $ cd non-prod/us-east-1/stateful-lambda-service
    $ terragrunt stack output
    role = {
      arn  = "arn:aws:iam::XXXXXXXXXXXX:role/stateful-lambda-service-dev-role"
      name = "stateful-lambda-service-dev-role"
    }
    db = {
      arn  = "arn:aws:dynamodb:us-east-1:XXXXXXXXXXXX:table/stateful-lambda-service-dev-db"
      name = "stateful-lambda-service-dev-db"
    }
    lambda_service = {
      function_arn  = "arn:aws:lambda:us-east-1:XXXXXXXXXXXX:function:stateful-lambda-service-dev"
      function_name = "stateful-lambda-service-dev"
      function_url  = "https://XXXXXXXXXX.lambda-url.us-east-1.on.aws/"
    }

    Note that all the units in the stack display their outputs here. Outputs are organized by stack, then unit, then output name.

  2. Use the output values to interact with the infrastructure.

    $ URL="$(terragrunt stack output --raw lambda_service.function_url)"
    $ curl "$URL"
    {"count":0}
    $ curl -X POST "$URL"
    {"count":1}
    $ curl "$URL"
    {"count":1}

    A GET request returns the current count as JSON. A POST request increments the count and returns the updated value.

    Outputs can be indexed by output key. In this case, the lambda_service unit has an output key of function_url, so we can access it directly with lambda_service.function_url. When outputs are nested into stacks, you can access them by chaining the stack name, unit name, and output key.
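When a stack is nested inside another stack, the same dot-chaining extends one level. For example, with a hypothetical parent stack containing the stateful-lambda-service stack, the lookup might look like:

```
$ terragrunt stack output --raw stateful-lambda-service.lambda_service.function_url
```

(The names here just mirror the example above; substitute your own stack and unit names.)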

How is the code in this repository organized?

The IaC in this repository is organized into a hierarchy of Terragrunt stacks that reflects the blast radius of the infrastructure under management in AWS.

The hierarchy is as follows:

account
 └ region
    └ resources

Where:

  • account is the AWS account being managed (e.g. non-prod, prod, mgmt).
  • region is the AWS region being managed (e.g. us-east-1).
  • resources are the resources being managed (e.g. stateful-lambda-service).

This structure is geared towards managing infrastructure exclusively in AWS, but it can be adapted to other cloud providers by adjusting the hierarchy to match the patterns of the platform.

The top-level account and region directories give clear context to users familiar with AWS infrastructure management. Authentication and configuration of the OpenTofu AWS provider are specific to the AWS account and region, so these directories also provide a straightforward place to configure the provider.

Many teams like to organize their environments into individual AWS accounts, and this structure makes it easy to do that. If you are part of a team that manages multiple environments in a single AWS account, you can simply add a new level of hierarchy under the region directory, like this:

account
 └ region
    └ environment
       └ resources

There's also an established convention to leverage a special _global directory to manage resources that are available across all regions, environments, etc. Structuring your IaC like that would look like this:

account
 ├ _global
 │  └ resources
 └ region
    ├ _global
    │  └ resources
    └ environment
       └ resources

Where:

  • Account-level _global: Contains resources that are available across all regions in the account, such as IAM users, Route 53 hosted zones, and CloudTrail.
  • Region-level _global: Contains resources that are available across all environments in a region, such as Route 53 A records, SNS topics, and ECR repositories.

The resources directory can be arbitrarily deep, and can be used to organize resources under management in whatever way makes sense for the team. In this repository it's kept fairly shallow for simplicity: the units constituting each stack are defined in the terragrunt-infrastructure-catalog-example repository and referenced from the terragrunt.stack.hcl files.

Where to store configuration

root.hcl

The contents of the root.hcl file are configuration common to all units in the repository. It's idiomatic Terragrunt to include this root file in every unit. There's almost always some boilerplate (defining the OpenTofu provider, configuring the state backend, etc.) that would otherwise have to be repeated in every unit, so it's convenient to define it once in the root file and include it everywhere.

Avoid overloading this file with too much configuration, as it might not be the right level of abstraction for the configuration you need. Instead, prefer to use terragrunt.stack.hcl files to define configurations in values attributes, and pass down the values to the stacks and units that need them.

terragrunt.stack.hcl

The terragrunt.stack.hcl file is used to define configurations for a stack of Terragrunt units. Any time you have multiple terragrunt.hcl files that need to be run together as part of your infrastructure deployment, you can define a terragrunt.stack.hcl file instead to encapsulate those configurations as a single entity that can be reliably reproduced across accounts, regions, environments, etc.

Using the values attribute of stack and unit configuration blocks allows you to pass down configuration to units with granular control.

For example, consider this portion of the prod/us-east-1/stateful-lambda-service/terragrunt.stack.hcl file:

unit "lambda_service" {
  // You'll typically want to pin this to a particular version of your catalog repository.
  // e.g.
  // source = "github.com/acme/terragrunt-infrastructure-catalog//units/lambda-stateful-service?ref=v0.1.0"
  //
  // If you are using a private catalog, you may want to use an SSH source URL instead:
  // source = "git::[email protected]:acme/terragrunt-infrastructure-catalog.git//units/lambda-stateful-service"
  source = "github.com/gruntwork-io/terragrunt-infrastructure-catalog-example//units/js-lambda-stateful-service"

  path = "service"

  values = {
    // This version here is used as the version passed down to the unit
    // to use when fetching the OpenTofu/Terraform module.
    version = "main"

    name = local.name

    // Required inputs
    runtime    = "nodejs22.x"
    source_dir = "./src"
    handler    = "index.handler"
    zip_file   = "handler.zip"

    // Optional inputs
    memory  = 128
    timeout = 3

    // Dependency paths
    role_path           = "../roles/lambda-iam-role-to-dynamodb"
    dynamodb_table_path = "../db"
  }
}

Here, you can see that the values attribute is setting exactly the values that are unique to the lambda_service unit in the context of the stateful-lambda-service stack, including the version of the OpenTofu module it uses, and the relative paths to the dependencies it relies on (e.g. the db and role units).
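On the consuming side, the unit configuration in the catalog reads these values through Terragrunt's values variable. A rough sketch of what the catalog's terragrunt.hcl might look like (the module path and input names here are illustrative assumptions):

```hcl
# units/js-lambda-stateful-service/terragrunt.hcl (illustrative sketch)
terraform {
  # values.version selects the OpenTofu/Terraform module version to fetch
  source = "github.com/acme/terragrunt-infrastructure-catalog//modules/lambda-stateful-service?ref=${values.version}"
}

inputs = {
  name       = values.name
  runtime    = values.runtime
  source_dir = values.source_dir
  handler    = values.handler
  zip_file   = values.zip_file
  memory     = values.memory
  timeout    = values.timeout
}
```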

What to do with .terraform.lock.hcl files

When you run terragrunt commands you may find that .terraform.lock.hcl files are created in your working directories.

These files are intentionally not committed to this example repository, but definitely should be in your own repositories!

They help ensure that your IaC produces reproducible infrastructure. For more on this, read the Lock File Handling docs.
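Lock files copied into Terragrunt's cache directory should not be committed; the ones to track live alongside your units. A small self-contained sketch of telling them apart (the paths here are made up for the demo):

```shell
# Demo in a throwaway directory: lock files outside .terragrunt-cache
# are the ones worth committing.
tmp="$(mktemp -d)"
mkdir -p "$tmp/non-prod/us-east-1/stateful-lambda-service/.terragrunt-cache/abc"
touch "$tmp/non-prod/us-east-1/stateful-lambda-service/.terraform.lock.hcl"
touch "$tmp/non-prod/us-east-1/stateful-lambda-service/.terragrunt-cache/abc/.terraform.lock.hcl"

# Find lock files, skipping Terragrunt's download cache
lock_files="$(cd "$tmp" && find . -name '.terraform.lock.hcl' -not -path '*/.terragrunt-cache/*')"
echo "$lock_files"  # -> ./non-prod/us-east-1/stateful-lambda-service/.terraform.lock.hcl

rm -rf "$tmp"
```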

How to get help

If you need help troubleshooting usage of this repository, or Terragrunt in general, check out the Support docs.
