6 comments

  • sausagefeet 2 days ago
    I work on Terrateam[0], an open source IaC orchestrator. In our opinion, workspaces in Terraform/Tofu kind of suck. The problem is: multiple environments are never actually the same. Workspaces are built under the premise that the differences are small enough to encode in some conditionals, but that just doesn't scale well.

    What we recommend people do is use modules to encapsulate the shape of your infrastructure and parameterize it. Then have each environment be a directory which instantiates the module (or modules).

    This is more robust for a few reasons:

    1. In most cases, differences between environments grow as you scale. With this approach you don't have to make a single root module act like a bunch of root modules via variables and conditionals; instead, each environment is its own root module, and if you need to do something unique in a particular environment, you can just implement it in the appropriate root module.

    2. It's easier to see what environments are under management by inspecting the layout of the repository. With workspaces, you need to understand how whatever tooling you are using is executed because that is where the environments will be expressed.
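
    A minimal sketch of that layout, with hypothetical module and variable names:

    ```hcl
    # environments/prod/main.tf -- prod's own root module
    module "app" {
      source = "../../modules/app"

      environment    = "prod"
      instance_count = 6
    }

    # Anything unique to prod (extra resources, different settings)
    # lives directly in this directory.
    ```

    environments/dev/main.tf would instantiate the same module with its own values.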

    Last weekend I also implemented what I call "Terralith" which is a proof-of-concept for how to treat a single root module as multiple environments in a principled way. I wrote a blog about the experience if anyone is interested: https://pid1.dev/posts/terralith/

    [0] https://github.com/terrateamio/terrateam

  • jitl 2 days ago
    The terraform documentation explicitly advises you NOT to do this (https://developer.hashicorp.com/terraform/cli/workspaces#whe...)

    > In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages or different internal teams. In this case, the backend for each deployment often has different credentials and access controls. CLI workspaces within a working directory use the same backend, so they are not a suitable isolation mechanism for this scenario.

    For a practical scenario, you will often need different environments to roll out changes at different times, or to have other slight variance. If you rely solely on variables as the only difference between environments, you will need a lot of tricky shenanigans to, say, create a new DynamoDB table in "dev" but not in prod. Sure, you can use `count = var.env == "dev" ? 1 : 0`, but this gets old fast.
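
    For reference, that conditional trick looks something like this (resource and variable names are illustrative):

    ```hcl
    # Only create the proof-of-concept table in dev.
    resource "aws_dynamodb_table" "poc" {
      count = var.env == "dev" ? 1 : 0

      name         = "poc-table"
      billing_mode = "PAY_PER_REQUEST"
      hash_key     = "id"

      attribute {
        name = "id"
        type = "S"
      }
    }
    ```

    And every downstream reference becomes `aws_dynamodb_table.poc[0]`, which is part of why it gets old fast.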

    Much better to make modules for common stuff, and then compose them in your different environments. Depending on the complexity of your needs, keeping good organization & practice around using modules can be a bit challenging, but it will definitely scale through composition.

    Modules are also important for making multiple copies of things within an environment, for example a cluster in us-west-2 and a cluster in eu-central-1, both in the production environment. I would assume that if I started with workspaces I would rapidly hit a point where I want to use the configuration as a module and then need to re-organize things. If you choose workspaces, then as soon as you want a second region you need a big refactoring; if you are using modules, you just instantiate the module a second time in your second region.
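
    With modules, the second region really is just a second instantiation (module path and provider aliases here are illustrative):

    ```hcl
    provider "aws" {
      alias  = "us_west_2"
      region = "us-west-2"
    }

    provider "aws" {
      alias  = "eu_central_1"
      region = "eu-central-1"
    }

    module "cluster_us_west_2" {
      source    = "./modules/cluster"
      providers = { aws = aws.us_west_2 }
    }

    module "cluster_eu_central_1" {
      source    = "./modules/cluster"
      providers = { aws = aws.eu_central_1 }
    }
    ```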

    • wmfiv 2 days ago
      I think all your points are valid, but I've also had good results using workspaces for environments. Here's generally how I structure my terraform primarily targeting AWS.

      - 1 Terraform workspace per environment (dev, test, prod, etc.).

      - Managing changes to workspaces / environments is done with whatever approach you use for everything else (releasing master using some kind of CICD pipeline or release branches). The Terraform is preferably in the same git repositories as your code but can be separate.

      - The CICD tool injects an environment variable into the build to select the appropriate workspace and somehow supplies credentials granting access to a role that can be assumed in the appropriate account.

      - A region module / folder that defines the resources you want in each region. This is your "main" module that specifies everything you want.

      - Minimal top level terraform that instantiates multiple AWS providers (one for each region) and uses them to create region modules. Any cross region or global resources are also defined here.

      - The region module uses submodules to create the actual resources (RDS, VPCs, etc.) as needed.

      This approach assumes you want to deploy to all your regions in one go. That may not be the case.
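
      A rough sketch of that top level, assuming illustrative module paths and a region module that accepts an `environment` variable:

      ```hcl
      # Top-level main.tf: one aliased provider per region,
      # each passed into a region module.
      provider "aws" {
        alias  = "us_east_1"
        region = "us-east-1"
      }

      provider "aws" {
        alias  = "eu_west_1"
        region = "eu-west-1"
      }

      module "us_east_1" {
        source      = "./region"
        providers   = { aws = aws.us_east_1 }
        environment = terraform.workspace # dev, test, prod, ...
      }

      module "eu_west_1" {
        source      = "./region"
        providers   = { aws = aws.eu_west_1 }
        environment = terraform.workspace
      }

      # Cross-region or global resources (IAM, Route 53, etc.) also go here.
      ```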

    • potamic 2 days ago
      I'm not sure that's completely what they are saying. Under the use cases section, they do acknowledge that workspaces can be used for test environments. I think what they are saying is to align workspaces to your architecture and team structure. If you have the same team managing their components across environments, workspaces could be a fine way of doing it.

      The problem with manually composing your environments is that you can lose parity across them. Over time it can be hard to point to what is the source of truth for your architecture. Like, this environment invokes these modules, but this other environment does not! Which is the correct one? But when systems cross team boundaries then they will diverge by design, using workspaces for such cases may not be a good idea.

  • jayceedenton 2 days ago
    Is there any benefit to using workspaces over just introducing some variables and having an 'environment' variable?

    You can have a directory per environment and a directory of shared resources that are used by all environments.

    It seems like workspaces add a new construct to be learned and another thing to add to all commands without much benefit. Could we just stick with the simple way of doing this?

    • maurobaraldi 2 days ago
      The proposal shows an example of how to isolate environments without duplicating code. It acts, more or less, as a template for the architecture, which you render according to the values (environments/accounts).

      I agree it isn't the simplest way to do it, but I don't think it adds that much complexity. Perhaps it is more laborious from an architecture point of view, but it could be easier to handle and maintain.

    • NomDePlum 2 days ago
      Been a while since I used workspaces, but my understanding is that you have:

      - a directory that has the infrastructure code

      - a directory per environment that has the specific configuration to be applied to that environment

      It's a pretty classic separation of code and config. Might not be intuitive to everyone I guess, but that separation is very beneficial I find.

      For instance, adding a new environment is relatively trivial. Not something you do all the time granted, but I have had the need on occasions.

      Same goes for removing an environment.
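
      For example, adding an environment can be as small as adding one more var-file (file name and values here are hypothetical):

      ```hcl
      # environments/staging.tfvars -- config only, no infrastructure code
      instance_type = "t3.medium"
      min_capacity  = 1
      ```

      applied with `terraform workspace select staging` (or `workspace new`) followed by `terraform apply -var-file=environments/staging.tfvars`.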

    • memhole 2 days ago
      You described my pattern. I don't care for it; I just don't know what other pattern I could apply. It can get really messy, and the code duplication is a headache. Humans don't read the middle, if that makes sense, so in my experience you have to keep a very keen eye on keeping the environment directories the same.

      Things like terragrunt can be really helpful. I don’t use terraform without it.

    • mjlee 2 days ago
      Performance. I've seen workspaces with just a thousand resources take 30 minutes to plan and apply. That's a pretty reasonable number to get to if you have per developer or per customer environments, or deploy infrastructure to multiple regions.
    • _joel 2 days ago
      They seem to play nicer with Terraform Cloud, when I've used it. I'm not sure how useful they are if it's just vanilla tf, especially if your codebase is simple. I guess it's just extra isolation for safety, perhaps.
  • hoofhearted 2 days ago
    The Terraformer tool was the biggest blessing when I had to reverse engineer our AWS stack into .tf modules.

    Shoutout to the Waze team for creating it!

    https://github.com/GoogleCloudPlatform/terraformer

    We built out a large serverless stack on AWS, and we got a request from higher ups to convert it all into Terraform modules for portability and transparency purposes.

    The Terraformer tool pulled in the entire stack and spit out the whole thing into tf files in less than 30 seconds.

    Everyone was super impressed on the team lol.

  • tbrb 2 days ago
    I generally consider the AWS CLI configuration to be something that's unique to a developer's workstation, and shouldn't be referenced in terraform code (in the form of tying the workspace name to your AWS profile name).

    This would only work if all developers on a team have synchronised the same AWS CLI config (which to me is like asking people to synchronise dotfiles, not something I'd be willing to do).

    My go-to architecture for multi-environment tends to be this, as it lends itself relatively well to Git Flow (or GitHub Flow): https://github.com/antonbabenko/terraform-best-practices/tre...

    • jitl 2 days ago
      We do what you advise against at Notion and it seems to work great for our org of ~100s of developers (although a smaller fraction need to edit terraform regularly).

      We use a CLI command `notion aws-sso-login` that logs us into our main user account and adds a profile per delegated app-environment pair account you can access (like app-dev/collections-infra, app-prod/collections-infra) to the AWS CLI config file. This ensures at least the standard list of profiles is present on everyone’s machine whenever they have valid credentials. I have yet to hear anyone complain about this config file meddling.

      Then in our terraform directories, we use direnv to set the AWS_PROFILE environment variable to the appropriate profile to manage that stack. You can always override if you need to use a different profile for some reason.

    • thayne 2 days ago
      So how do you manage getting credentials for different accounts?
      • tbrb 2 days ago
        The AWS SDK supports supplying credentials based on environment variables. When on my workstation I set AWS_PROFILE to select what profile I'm using, prior to running Terraform. This is then portable to CI where we may be using something like https://github.com/aws-actions/configure-aws-credentials to assume a role rather than using a pre-configured CLI profile.
        • thayne 2 days ago
          But then you still need either something to sync the AWS configuration among developers for all the accounts, or all developers have to configure it themselves, which isn't very scalable when you have more than a couple of accounts to deal with.
    • maurobaraldi 2 days ago
      I was inspired by this repository when elaborating the proposal. The proposal could be adapted to this repo as well.
  • new_user_final 2 days ago
    There is a typo in the submission title. Isn't it easier to copy than type the whole title?
    • maurobaraldi 2 days ago
      Fixed. Thanks for the watchful eyes :-)