# Terraform Enterprise HVD on GCP GKE

Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Terraform Enterprise (TFE) on Google Kubernetes Engine (GKE). This module supports bringing your own GKE cluster, or optionally creating a new GKE cluster dedicated to running TFE. This module does not use the Kubernetes or Helm Terraform providers, but rather includes Post Steps for the application layer portion of the deployment leveraging the `kubectl` and `helm` CLIs.
## Prerequisites

### General

- TFE license file (e.g., `terraform.hclic`)
- Terraform CLI (version `>= 1.9`) installed on workstation
- General understanding of how to use Terraform (Community Edition)
- General understanding of how to use Google Cloud Platform (GCP)
- General understanding of how to use Kubernetes and Helm
- `gcloud` CLI installed on workstation
- `kubectl` CLI and `helm` CLI installed on workstation
- `git` CLI and Visual Studio Code editor installed on workstation are strongly recommended
- GCP project that TFE will be deployed in with permissions to provision these resources via Terraform CLI
- (Optional) GCS bucket for GCS remote state backend that will be used to manage the Terraform state of this TFE deployment (out-of-band from the TFE application infrastructure) via Terraform CLI (Community Edition)
### Networking

- VPC network that TFE will be deployed in
- GKE cluster must be deployed in the same VPC network as the Cloud SQL for PostgreSQL database instance and Memorystore for Redis instance
- Private Service Access (PSA) configured in the VPC to enable private connectivity from GKE worker nodes to the Cloud SQL for PostgreSQL database instance and Memorystore for Redis instance
- Subnet for GKE cluster (if you plan to use this module to create your GKE cluster for TFE rather than bring your own GKE cluster)
  - It is strongly recommended that this subnet has Private Google Access enabled to allow private access from the GKE cluster to the Google Cloud Storage (GCS) bucket.
- Static IP address for TFE load balancer (to be used by either a Kubernetes `Service` of type `LoadBalancer` or an ingress controller)
- Chosen fully qualified domain name (FQDN) for TFE instance (e.g., `tfe-prod.gcp.example.com`)
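If you manage your network with Terraform, the Private Service Access prerequisite can be provisioned ahead of time. The following is a minimal sketch only; the resource names, the `/16` prefix length, and the `<PROJECT_ID>`/`<VPC_NAME>` placeholders are illustrative assumptions, not values from this module:

```hcl
# Reserved internal IP range handed to Google-managed services (illustrative name and size).
resource "google_compute_global_address" "psa_range" {
  name          = "tfe-psa-range" # assumption: any unique name works
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = "projects/<PROJECT_ID>/global/networks/<VPC_NAME>"
}

# Private Service Access peering used by Cloud SQL and Memorystore.
resource "google_service_networking_connection" "psa" {
  network                 = google_compute_global_address.psa_range.network
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.psa_range.name]
}
```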
#### Firewall rules / network traffic requirements

- Allow `TCP:443` ingress to TFE load balancer from CIDR ranges of TFE users/clients, VCS provider, and any other external systems that need to access the TFE UI or API
- Allow `TCP:8201` between TFE pods (for TFE embedded Vault internal cluster communication) - typically handled automatically/natively by GKE and does not require a custom firewall rule
- Allow `TCP:443` egress to Terraform endpoints listed here from TFE pods
- If your GKE cluster is private, your clients/workstations must be able to reach the GKE control plane via `kubectl` and `helm`
- Review the TFE ingress requirements
- Review the TFE egress requirements
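If you prefer to codify the `TCP:443` ingress rule, it might look roughly like this sketch (the rule name, `source_ranges`, and `target_tags` are placeholders you must replace; this rule is not created by the module):

```hcl
resource "google_compute_firewall" "tfe_lb_https_ingress" {
  name      = "tfe-allow-https-ingress" # assumption: any unique name works
  network   = "<VPC_NAME>"
  direction = "INGRESS"

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  # CIDR ranges of TFE users/clients, VCS provider, and other external systems.
  source_ranges = ["10.0.0.0/8"]   # placeholder
  target_tags   = ["tfe-gke-node"] # placeholder; match your GKE node network tags
}
```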
### TLS certificates

- TLS certificate (e.g., `cert.pem`) and private key (e.g., `privkey.pem`) that match your chosen fully qualified domain name (FQDN) for TFE
  - TLS certificate and private key must be in PEM format
  - Private key must not be password protected
- TLS certificate authority (CA) bundle (e.g., `ca_bundle.pem`) corresponding with the CA that issues your TFE TLS certificates
  - CA bundle must be in PEM format
  - You may include additional certificate chains corresponding to external systems that TFE will make outbound connections to (e.g., your self-hosted VCS, if its certificate was issued by a different CA than the issuer of your TFE TLS certificate)
### Secret management

Google Secret Manager secrets:

- PostgreSQL database password secret
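The database password secret can be created out-of-band, for example with Terraform. This is a sketch only; the `secret_id` is a placeholder, and `var.tfe_database_password` is a hypothetical variable in your own out-of-band configuration. The resulting secret version name is what you would pass to the `tfe_database_password_secret_version` input:

```hcl
resource "google_secret_manager_secret" "tfe_db_password" {
  secret_id = "tfe-database-password" # placeholder name

  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_version" "tfe_db_password" {
  secret      = google_secret_manager_secret.tfe_db_password.id
  secret_data = var.tfe_database_password # supply securely; avoid hard-coding
}
```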
### Compute (optional)

If you plan to create a new GKE cluster using this module, then there is no GKE prerequisite. Otherwise:

- GKE cluster
- (Recommended) Workload identity enabled on GKE cluster (`workload_pool = "<PROJECT_ID>.svc.id.goog"`)
- GKE node pool for TFE application (control plane)
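If you bring your own GKE cluster, the workload identity prerequisite corresponds to the following cluster setting. This sketch shows only that setting; all other required `google_container_cluster` arguments are omitted, and the names are placeholders:

```hcl
resource "google_container_cluster" "byo" {
  name     = "<GKE_CLUSTER_NAME>"
  location = "<REGION>"

  # Enables GKE workload identity so the TFE Kubernetes service account
  # can impersonate the TFE GCP service account.
  workload_identity_config {
    workload_pool = "<PROJECT_ID>.svc.id.goog"
  }
}
```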
## Usage

1. Create/configure/validate the applicable prerequisites.

2. Nested within the `examples` directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to deploy this module. To get started, choose an example scenario. If you are starting without an existing GKE cluster, then you should select the `new-gke` example scenario.

3. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create the Terraform configuration that will manage your TFE deployment. If you are not sure where to create this new directory, it is common for users to create an `environments/` directory at the root of this repo (once you have cloned it down locally), and then a subdirectory for each TFE instance deployment. For example:

    ```
    .
    └── environments
        ├── production
        │   ├── backend.tf
        │   ├── main.tf
        │   ├── outputs.tf
        │   ├── terraform.tfvars
        │   └── variables.tf
        └── sandbox
            ├── backend.tf
            ├── main.tf
            ├── outputs.tf
            ├── terraform.tfvars
            └── variables.tf
    ```

    📝 Note: In this example, the user will have two separate TFE deployments; one for their `sandbox` environment, and one for their `production` environment. This is recommended, but not required.

4. (Optional) Uncomment and update the GCS remote state backend configuration provided in the `backend.tf` file with your own custom values. While this step is highly recommended, it is technically not required to use a remote backend config for your TFE deployment (if you are in a sandbox environment, for example).

5. Populate your own custom values into the `terraform.tfvars.example` file that was provided (in particular, values enclosed in the `<>` characters). Then, remove the `.example` file extension such that the file is now named `terraform.tfvars`.

6. Navigate to the directory of the newly created Terraform configuration for your TFE deployment, and run `terraform init`, `terraform plan`, and `terraform apply`.
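As a rough illustration, a `main.tf` built from one of the example scenarios might call the module like this. The `source` path and every value below are placeholders, not defaults; see your chosen example scenario and the Inputs table for the full set of inputs:

```hcl
module "tfe" {
  source = "../.." # placeholder; point at this module or your registry source

  # Required inputs
  friendly_name_prefix = "prod"
  project_id           = "<PROJECT_ID>"
  tfe_fqdn             = "tfe-prod.gcp.example.com"
  vpc_name             = "<VPC_NAME>"

  # Common optional inputs
  create_gke_cluster = true
  gke_subnet_name    = "<GKE_SUBNET_NAME>"
}
```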
At this point, the Terraform-managed infrastructure resources for TFE have been created.
The next phase of the deployment is the application layer (referred to as the Post Steps). This phase involves interacting with your GKE cluster using `kubectl` and installing the TFE application using `helm`. The steps are documented using these CLI tools as a baseline; equivalent Kubernetes tooling or workflows may be used as appropriate.
## Post Steps

7. Authenticate to your GKE cluster:

    ```shell
    gcloud auth login
    gcloud config set project <PROJECT_ID>
    gcloud container clusters get-credentials <GKE_CLUSTER_NAME> --region <REGION>
    ```

8. Create the Kubernetes namespace for TFE:

    ```shell
    kubectl create namespace tfe
    ```

    📝 Note: You may name it something different than `tfe` if you prefer. If you do name it differently, be sure to update your value of the `tfe_kube_namespace` and `tfe_kube_svc_account` input variables accordingly (the Helm chart will automatically create a Kubernetes service account for TFE based on the name of the namespace).

9. Create the required secrets for your TFE deployment within your new Kubernetes namespace for TFE. There are several ways to do this, whether it be from the CLI via `kubectl`, or another method involving a third-party secrets helper/tool. See the Kubernetes-Secrets doc for details on the required secrets and how to create them.

10. This Terraform module will automatically generate a Helm overrides file within your Terraform working directory named `./helm/module_generated_helm_overrides.yaml`. This Helm overrides file contains values interpolated from some of the infrastructure resources that were created by Terraform in step 6.

    Within the Helm overrides file, update or validate the values for the remaining settings that are enclosed in the `<>` characters. You may also add any additional configuration settings into your Helm overrides file at this time (see the Helm-Overrides doc for more details).

11. Now that you have customized your `module_generated_helm_overrides.yaml` file, rename it to something more applicable to your deployment, such as `prod_tfe_overrides_primary.yaml` (or whatever you prefer).

    Then, within your `terraform.tfvars` file, set the value of `create_helm_overrides_file` to `false`, as we no longer need the Terraform module to manage this file or generate a new one on a subsequent Terraform run.

12. Add the HashiCorp Helm registry:

    ```shell
    helm repo add hashicorp https://helm.releases.hashicorp.com
    ```

    📝 Note: If you have already added the `hashicorp` Helm repository, you should run `helm repo update hashicorp` to ensure that you have the latest version.

13. Install the TFE application via `helm`:

    ```shell
    helm install terraform-enterprise hashicorp/terraform-enterprise --namespace <TFE_NAMESPACE> --values <TFE_OVERRIDES_FILE>
    ```

14. Verify the TFE pod(s) are starting successfully:

    View the events within the namespace:

    ```shell
    kubectl get events --namespace <TFE_NAMESPACE>
    ```

    View the pod(s) within the namespace:

    ```shell
    kubectl get pods --namespace <TFE_NAMESPACE>
    ```

    View the logs from the pod:

    ```shell
    kubectl logs <TFE_POD_NAME> --namespace <TFE_NAMESPACE> -f
    ```

15. If you did not create a DNS record during your Terraform deployment in the previous section (via the boolean input `create_tfe_cloud_dns_record`), then create a DNS record for your TFE FQDN that resolves to your TFE load balancer, depending on how the load balancer was configured during your TFE deployment:

    - If you are using a Kubernetes service of type `LoadBalancer` (what the module-generated Helm overrides defaults to), the DNS record should resolve to the static IP address of your TFE load balancer:

      ```shell
      kubectl get services --namespace <TFE_NAMESPACE>
      ```

    - If you are using a custom Kubernetes ingress (meaning you customized your Helm overrides in step 10), the DNS record should resolve to the IP address of your ingress controller load balancer:

      ```shell
      kubectl get ingress <INGRESS_NAME> --namespace <INGRESS_NAMESPACE>
      ```

16. Verify the TFE application is ready:

    ```shell
    curl https://<TFE_FQDN>/_health_check
    ```

17. Follow the remaining steps here to finish the installation setup, which involves creating the initial admin user.
## Docs

Below are links to various docs related to the customization and management of your TFE deployment:
- TFE Deployment Customizations
- TFE Helm Overrides
- TFE Version Upgrades
- TFE TLS Certificate Rotation
- TFE Configuration Settings
- TFE Kubernetes Secrets
- TFE Multi-Region Deployment
## Requirements

| Name | Version |
|---|---|
| terraform | >= 1.9 |
| google | ~> 7.14 |
| local | >= 2.5.1 |
| random | >= 3.6.2 |
## Providers

| Name | Version |
|---|---|
| google | ~> 7.14 |
| local | >= 2.5.1 |
| random | >= 3.6.2 |
## Resources
## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| friendly_name_prefix | Prefix used to name all GCP resources uniquely. It is most common to use either an environment (e.g. 'sandbox', 'prod'), a team name, or a project name here. | `string` | n/a | yes |
| project_id | ID of GCP project to deploy TFE in. | `string` | n/a | yes |
| tfe_fqdn | Fully qualified domain name (FQDN) of TFE instance. This name should eventually resolve to the TFE load balancer DNS name or IP address and will be what clients use to access TFE. | `string` | n/a | yes |
| vpc_name | Name of existing VPC network to create resources in. | `string` | n/a | yes |
| cloud_dns_zone_name | Name of Google Cloud DNS managed zone to create TFE DNS record in. Only valid when `create_tfe_cloud_dns_record` is `true`. | `string` | `null` | no |
| cloud_sql_service_agent_email | Email address of the Google-managed Cloud SQL service agent (service account) for this GCP project (usually `service-<PROJECT_ID>@gcp-sa-cloud-sql.iam.gserviceaccount.com`). Only required when using a customer-managed encryption key (CMEK) to grant the service agent encrypt/decrypt permissions. | `string` | `null` | no |
| common_labels | Common labels to apply to all GCP resources that support labels. | `map(string)` | `{}` | no |
| create_gke_cluster | Boolean to create a GKE cluster. | `bool` | `false` | no |
| create_helm_overrides_file | Boolean to generate a YAML file from template with Helm overrides values for your TFE deployment. Set this to `false` after your initial TFE deployment is complete, as we no longer want the Terraform module to manage it (since you will be customizing it further). | `bool` | `true` | no |
| create_tfe_cloud_dns_record | Boolean to create Google Cloud DNS record for TFE using the value of `tfe_fqdn` for the record name. | `bool` | `false` | no |
| create_tfe_lb_ip | Boolean to create a static IP address for TFE load balancer (load balancer is created/managed by Helm/Kubernetes). | `bool` | `true` | no |
| enable_gke_workload_identity | Boolean to enable GCP workload identity with GKE cluster. | `bool` | `true` | no |
| enable_passwordless_iam_db_auth | Whether to enable passwordless IAM authentication to Cloud SQL for PostgreSQL database instance. | `bool` | `false` | no |
| gcs_custom_dual_region_locations | Optional list of exactly two GCS region codes (e.g., `["US-EAST1", "US-CENTRAL1"]`) to use dual-region custom placement. When set, `gcs_location` must be the corresponding multi-region (US, EU, or ASIA), and `gcs_location` must not be a predefined dual-region code (NAM4, EUR4, ASIA1). | `list(string)` | `null` | no |
| gcs_force_destroy | Boolean indicating whether to allow force destroying the TFE GCS bucket. GCS bucket can be destroyed if it is not empty when `true`. | `bool` | `false` | no |
| gcs_kms_cmek_name | Name of Cloud KMS customer managed encryption key (CMEK) to use for TFE GCS bucket encryption. | `string` | `null` | no |
| gcs_kms_keyring_name | Name of Cloud KMS key ring that contains KMS customer managed encryption key (CMEK) to use for TFE GCS bucket encryption. Geographic location (region) of the key ring must match the location of the TFE GCS bucket. | `string` | `null` | no |
| gcs_location | Location of TFE GCS bucket to create. Supports multi-region (US, EU, ASIA) and predefined dual-region (NAM4, EUR4, ASIA1). | `string` | `"US"` | no |
| gcs_public_access_prevention | Prevent public access to TFE GCS bucket. | `string` | `"enforced"` | no |
| gcs_rpo | The recovery point objective for cross-region replication of the GCS bucket. | `string` | `"DEFAULT"` | no |
| gcs_storage_class | Storage class of TFE GCS bucket. | `string` | `"STANDARD"` | no |
| gcs_uniform_bucket_level_access | Boolean to enable uniform bucket level access on TFE GCS bucket. | `bool` | `true` | no |
| gcs_versioning_enabled | Boolean to enable versioning on TFE GCS bucket. | `bool` | `true` | no |
| gke_cluster_is_private | Boolean indicating whether the GKE cluster is a private cluster. | `bool` | `true` | no |
| gke_cluster_name | Name of GKE cluster to create. | `string` | `"tfe-gke-cluster"` | no |
| gke_cluster_node_locations | List of zones in which node pool nodes should be located. | `list(string)` | `null` | no |
| gke_control_plane_authorized_cidr | CIDR block allowed to access GKE control plane. | `string` | `null` | no |
| gke_control_plane_cidr | Control plane IP range of private GKE cluster. Must not overlap with any subnet in GKE cluster's VPC. | `string` | `"10.0.10.0/28"` | no |
| gke_deletion_protection | Boolean to enable deletion protection on GKE cluster. | `bool` | `false` | no |
| gke_enable_private_endpoint | Boolean to enable private endpoint on GKE cluster. | `bool` | `true` | no |
| gke_http_load_balancing_disabled | Boolean to disable HTTP load balancing on GKE cluster. | `bool` | `false` | no |
| gke_l4_ilb_subsetting_enabled | Boolean to enable layer 4 ILB subsetting on GKE cluster. | `bool` | `true` | no |
| gke_node_count | Number of GKE nodes per zone in TFE node pool. | `number` | `1` | no |
| gke_node_disk_size_gb | Boot disk size in gigabytes (GB) for GKE nodes in TFE node pool. | `number` | `100` | no |
| gke_node_disk_type | Type of disk for GKE nodes in TFE node pool. | `string` | `"hyperdisk-balanced"` | no |
| gke_node_pool_name | Name of TFE node pool to create in GKE cluster. | `string` | `"tfe-gke-node-pool"` | no |
| gke_node_type | Size/machine type of GKE nodes in TFE node pool. | `string` | `"n4-standard-8"` | no |
| gke_release_channel | Release channel controlling how frequently Kubernetes updates and features are received. | `string` | `"REGULAR"` | no |
| gke_remove_default_node_pool | Boolean to remove the default node pool in GKE cluster. | `bool` | `true` | no |
| gke_subnet_name | Name or self_link to existing VPC subnetwork to create GKE cluster in. | `string` | `null` | no |
| is_secondary_region_deployment | Whether this deployment represents the secondary (DR) region (TFE warm-standby instance). | `bool` | `false` | no |
| postgres_availability_type | Availability type of Cloud SQL for PostgreSQL instance. | `string` | `"REGIONAL"` | no |
| postgres_backup_config | Backup configuration for Cloud SQL for PostgreSQL instance. | `object({` | `{` | no |
| postgres_db_is_replica | Whether the Cloud SQL for PostgreSQL database instance in this deployment is a read replica. | `bool` | `false` | no |
| postgres_deletion_protection | Whether to prevent the Cloud SQL for PostgreSQL instance from being destroyed. | `bool` | `true` | no |
| postgres_disk_autoresize | Whether to enable autoresize on the Cloud SQL for PostgreSQL disk. | `bool` | `true` | no |
| postgres_disk_size | Size in GB of PostgreSQL disk. | `number` | `100` | no |
| postgres_disk_type | Type of data disk for Cloud SQL for PostgreSQL instance. | `string` | `"PD_SSD"` | no |
| postgres_edition | Cloud SQL for PostgreSQL edition (ENTERPRISE or ENTERPRISE_PLUS). | `string` | `"ENTERPRISE_PLUS"` | no |
| postgres_insights_config | Configuration settings for Cloud SQL for PostgreSQL insights. | `object({` | `{` | no |
| postgres_kms_cmek_name | Name of Cloud KMS customer managed encryption key (CMEK) to use for Cloud SQL for PostgreSQL database instance. | `string` | `null` | no |
| postgres_kms_keyring_name | Name of Cloud KMS Key Ring that contains KMS key specified in `postgres_kms_cmek_name`. Geographic location (region) of key ring must match the location of the TFE Cloud SQL for PostgreSQL database instance. | `string` | `null` | no |
| postgres_machine_type | Machine size of Cloud SQL for PostgreSQL instance. | `string` | `"db-perf-optimized-N-8"` | no |
| postgres_maintenance_window | Optional maintenance window settings for the Cloud SQL for PostgreSQL instance. | `object({` | `{` | no |
| postgres_master_instance_name | Name of TFE Cloud SQL for PostgreSQL database instance deployed in primary region. Used to create a read replica in the secondary region. Only set when `postgres_db_is_replica` is `true`. | `string` | `null` | no |
| postgres_ssl_mode | Indicates whether to enforce TLS/SSL connections to the Cloud SQL for PostgreSQL instance. | `string` | `"ENCRYPTED_ONLY"` | no |
| postgres_version | PostgreSQL version to use. | `string` | `"POSTGRES_16"` | no |
| redis_auth_enabled | Boolean to enable authentication on Redis instance. | `bool` | `true` | no |
| redis_connect_mode | Network connection mode for Redis instance. | `string` | `"PRIVATE_SERVICE_ACCESS"` | no |
| redis_kms_cmek_name | Name of Cloud KMS customer managed encryption key (CMEK) to use for TFE Redis instance. | `string` | `null` | no |
| redis_kms_keyring_name | Name of Cloud KMS key ring that contains KMS customer managed encryption key (CMEK) to use for TFE Redis instance. Geographic location (region) of key ring must match the location of the TFE Redis instance. | `string` | `null` | no |
| redis_memory_size_gb | The size of the Redis instance in GiB. | `number` | `6` | no |
| redis_tier | The service tier of the Redis instance. Defaults to `STANDARD_HA` for high availability. | `string` | `"STANDARD_HA"` | no |
| redis_transit_encryption_mode | Determines transit encryption (TLS) mode for Redis instance. | `string` | `"DISABLED"` | no |
| redis_version | The version of Redis software. | `string` | `"REDIS_7_2"` | no |
| tfe_cloud_dns_record_ip_address | IP address of DNS record for TFE. Only valid when `create_tfe_cloud_dns_record` is `true` and `create_tfe_lb_ip` is `false`. | `string` | `null` | no |
| tfe_database_name | Name of TFE PostgreSQL database to create. | `string` | `"tfe"` | no |
| tfe_database_parameters | Additional parameters to pass into the TFE database settings for the PostgreSQL connection URI. | `string` | `"sslmode=require"` | no |
| tfe_database_password_secret_version | Name of Google Secret Manager secret version for the PostgreSQL password. Only used for primary region deployments when `enable_passwordless_iam_db_auth` is `false`. | `string` | `null` | no |
| tfe_database_user | Name of TFE PostgreSQL database user to create. Only valid for primary region deployments when password auth is used. | `string` | `null` | no |
| tfe_gcp_svc_account_name | Name of GCP custom service account for TFE. Service account is used for GKE workload identity, GCS bucket permissions, and optional database authentication. | `string` | `"tfe-gcp-sa"` | no |
| tfe_gcs_bucket_name | Name of TFE GCS bucket that was created in the primary region TFE deployment. Only set when `is_secondary_region_deployment` is `true`. | `string` | `null` | no |
| tfe_kube_namespace | Name of Kubernetes namespace for TFE (created in post-deployment steps). Used to configure GCP workload identity with GKE. | `string` | `"tfe"` | no |
| tfe_kube_svc_account | Name of Kubernetes Service Account for TFE (created by Helm chart). Used to configure GCP workload identity with GKE. | `string` | `"tfe"` | no |
| tfe_lb_ip_address | IP address to assign to TFE load balancer. Must be a valid IP address from `tfe_lb_subnet_name` when `tfe_lb_ip_address_type` is `INTERNAL`. | `string` | `null` | no |
| tfe_lb_ip_address_type | Type of IP address to assign to TFE load balancer. Valid values are 'INTERNAL' or 'EXTERNAL'. | `string` | `"INTERNAL"` | no |
| tfe_lb_subnet_name | Name or self_link to existing VPC subnetwork to create TFE internal load balancer IP address in. | `string` | `null` | no |
| vpc_project_id | ID of GCP Project where the existing VPC resides if it is different than the default project. | `string` | `null` | no |
## Outputs
| Name | Description |
|---|---|
| gke_cluster_name | Name of TFE GKE cluster. |
| postgres_db_instance_id | Name (ID) of TFE Cloud SQL for PostgreSQL database instance in this region. |
| redis_server_ca_certs | CA certificate of TFE Redis instance. Add this to your TFE CA bundle. |
| tfe_database_host | IP address and port of TFE Cloud SQL for PostgreSQL database instance. |
| tfe_database_name | TFE PostgreSQL database name. |
| tfe_database_password | TFE PostgreSQL database password. |
| tfe_database_password_base64 | Base64-encoded TFE PostgreSQL database password. |
| tfe_database_user | TFE PostgreSQL database username. |
| tfe_lb_ip_address | IP address of TFE load balancer. |
| tfe_lb_ip_address_name | Name of IP address resource of TFE load balancer. |
| tfe_object_storage_google_bucket | Name of TFE GCS bucket. |
| tfe_redis_host | Hostname/IP address (and port if non-default) of TFE Redis instance. |
| tfe_redis_password | Auth string of TFE Redis instance. |
| tfe_redis_password_base64 | Base64-encoded auth string of TFE Redis instance. |
| tfe_service_account_email | TFE GCP service account email address. Only produced when enable_gke_workload_identity is true. |
| tfe_service_account_key | TFE GCP service account key in JSON format, base64-encoded. Only produced when enable_gke_workload_identity is false. |