April 24, 2020

Environment Variables in Terraform Cloud Remote Run


It is great to be able to use the outputs (or state data) of another Terraform Cloud workspace.

In most cases it will live in the same TFC organization.
But one of the required arguments of the "terraform_remote_state" data source is "organization"... Hmm, that is exactly where I am running right now.
The second required argument is "name" (the remote workspace name).
Hmm, and what if you are using some workspace naming convention or workspace prefixes?
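For context, the hard-coded version of this data source looks something like the following ("my-org" and "demo" are placeholder names):

    data "terraform_remote_state" "demo" {
      backend = "remote"

      config = {
        organization = "my-org" # hard-coded organization
        workspaces = {
          name = "demo" # hard-coded remote workspace name
        }
      }
    }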

OK, it turns out this can be done easily.
Like any "CI/CD-as-a-Service" with remote runners, TFC injects unique dynamic (system) environment variables into each run, such as the run ID and the workspace and organization names.
To see exactly which variables exist, just run some Terraform config with a local-exec provisioner executing "printenv": you will see all the Key=Value pairs.
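A minimal throwaway config for that (the resource name is arbitrary; the null provider is installed automatically on init):

    # Dump the remote runner's environment into the TFC run log.
    resource "null_resource" "dump_env" {
      provisioner "local-exec" {
        command = "printenv | sort"
      }
    }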

Note that only environment variables with the "TF_VAR_" prefix are accessible from your Terraform configuration as input variables; this has no connection to Terraform providers, which read their own well-known environment variables for their own needs (like AWS_DEFAULT_REGION).
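For example, an environment variable TF_VAR_cluster_name=demo would populate an input variable declared like this (the name here is just an illustration):

    variable "cluster_name" {
      type = string
    }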

Let's get back to our case.

So we have two workspaces in TFC, under the same organization:

  1. A workspace where we bring up an AWS VPC, EKS, and MongoDB.
  2. A workspace where we deploy Kubernetes services with Helm charts.

To be able to deploy to the created Kubernetes cluster (EKS), the "second" workspace must first pass Kubernetes authentication. The Kubernetes services should also get the "MongoDB_URI" connection string.

That's why we will call the "first" workspace "demo" and the "second" one "demo-helm".
Then, in the "second" workspace, a "terraform_remote_state" data source must be read before the rest of the objects/resources, with both required arguments derived from the run's environment variables:

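With the remote state wired up, its outputs can drive Kubernetes authentication and pass the MongoDB connection string to the chart. The output names below (eks_endpoint, eks_ca, eks_token, mongodb_uri) and the chart path are assumptions; use whatever the "demo" workspace actually exports:

    provider "kubernetes" {
      host                   = data.terraform_remote_state.demo.outputs.eks_endpoint
      cluster_ca_certificate = base64decode(data.terraform_remote_state.demo.outputs.eks_ca)
      token                  = data.terraform_remote_state.demo.outputs.eks_token
    }

    provider "helm" {
      kubernetes {
        host                   = data.terraform_remote_state.demo.outputs.eks_endpoint
        cluster_ca_certificate = base64decode(data.terraform_remote_state.demo.outputs.eks_ca)
        token                  = data.terraform_remote_state.demo.outputs.eks_token
      }
    }

    resource "helm_release" "app" {
      name  = "app"
      chart = "./charts/app" # hypothetical chart path

      # Hand the MongoDB connection string from the "demo" workspace to the chart.
      set_sensitive {
        name  = "mongodb.uri"
        value = data.terraform_remote_state.demo.outputs.mongodb_uri
      }
    }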
I hope this helps a little: these things no longer need to be hard-coded in variables, and deriving them from the run environment by its nature prevents a Helm deployment on the wrong cluster.
