
Consul Enterprise GCP Module

This is a Terraform module for provisioning Consul Enterprise on GCP. This module defaults to setting up a cluster with 5 Consul server nodes (as recommended by the Consul Reference Architecture).

About This Module

This module implements the Consul single datacenter Reference Architecture on GCP using the Enterprise version of Consul 1.12+.

How to Use This Module

  • Ensure your GCP credentials are configured correctly and have permission to use the required GCP services, such as Compute Engine and Secret Manager:

  • To deploy without an existing VPC, use the example VPC code to build out the prerequisite environment. Ensure you select a region that has at least three zones.

  • To deploy into an existing VPC, ensure the following components exist and are routed to each other correctly:

    • Google Compute Network: manages a VPC network
    • Subnet: a single subnet in which to deploy the Consul cluster
    • One Cloud Router and Cloud NAT: the provided user data script requires outbound internet access to download & configure Consul
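If you are assembling these prerequisites by hand instead of using the example VPC code, a minimal Terraform sketch might look like the following (all resource names, the region, and the CIDR range are illustrative assumptions, not values from this module):

```hcl
# Minimal network prerequisites: a VPC, one subnet, and a Cloud Router
# with Cloud NAT so instances get outbound internet access.
resource "google_compute_network" "consul" {
  name                    = "consul-network"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "consul" {
  name          = "subnet-01"
  ip_cidr_range = "10.0.1.0/24"
  region        = "us-west1"
  network       = google_compute_network.consul.id
}

resource "google_compute_router" "consul" {
  name    = "consul-router"
  region  = "us-west1"
  network = google_compute_network.consul.id
}

resource "google_compute_router_nat" "consul" {
  name                               = "consul-nat"
  router                             = google_compute_router.consul.name
  region                             = "us-west1"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
```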
  • Use the example code to create TLS certificates, ACL tokens, and a gossip encryption key, all stored in GCP Secret Manager
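If you prefer to manage a secret yourself rather than use the example code, a minimal sketch for the gossip encryption key might look like this (the resource names and the assumed var.gossip_encryption_key variable are illustrative; the replication syntax varies by google provider version):

```hcl
# Illustrative sketch: storing the gossip encryption key in Secret Manager.
# The secret_id matches the gossip_secret_id value passed to the module.
resource "google_secret_manager_secret" "gossip" {
  secret_id = "terraform_example_module_consul_gossip_secret"

  replication {
    auto {} # on google provider versions < 5, use `automatic = true` instead
  }
}

resource "google_secret_manager_secret_version" "gossip" {
  secret      = google_secret_manager_secret.gossip.id
  secret_data = var.gossip_encryption_key # assumed variable, e.g. output of `consul keygen`
}
```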

  • Create a Terraform configuration that pulls in the Consul module and specifies values for the required variables:

provider "google" {
  project = "my-project-id"
  region  = "us-west1"
}

module "consul-ent" {
  source = "github.com/hashicorp/terraform-gcp-consul-ent-starter"

  # Secret id/name of the Google Secret Manager secret holding the Consul gossip encryption key
  gossip_secret_id             = "terraform_example_module_consul_gossip_secret"
  # Your GCP project ID
  project_id                   = "my-project-id"
  # Prefix for uniquely identifying GCP resources
  resource_name_prefix         = "test"
  # Self link of the subnetwork you wish to deploy into
  subnetwork                   = "https://www.googleapis.com/compute/v1/projects/my-project-id/regions/us-west1/subnetworks/subnet-01"
  # Secret id/name of the Google Secret Manager secret holding the TLS certificates
  tls_secret_id                = "terraform_example_module_consul_tls_secret"
  # Path to the Consul Enterprise license file
  consul_license_filepath      = "/Users/user/Downloads/consul.hclic"
}
  • Run terraform init and terraform apply

  • To finish configuring your Consul cluster securely and allow access to the Consul CLI, you must bootstrap the ACL system after Consul cluster creation. Begin by logging into your Consul cluster:

    • SSH: you must provide a CIDR range value for the ssh_source_ranges variable. The default value is a range provided by Google for use with the Identity-Aware Proxy service.
      • Please note this Consul cluster is not public-facing. If you want to use SSH from outside the VPC, you must establish your own connection into it (a VPN, for example).
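As an illustrative sketch (assuming ssh_source_ranges accepts a list of CIDR blocks, as its plural name suggests), you might pin the variable explicitly in your module block:

```hcl
module "consul-ent" {
  source = "github.com/hashicorp/terraform-gcp-consul-ent-starter"

  # ... other required variables as shown above ...

  # Google's published Identity-Aware Proxy TCP-forwarding range;
  # replace with your VPN's CIDR if you connect from outside GCP.
  ssh_source_ranges = ["35.235.240.0/20"]
}
```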
  • To bootstrap the ACL system, run the following commands:

consul acl bootstrap
  • Securely store the bootstrap token (shown as the SecretID) that Consul returns to you.
  • Use the bootstrap token to create an appropriate policy for your Consul servers and an associated token. For example, assuming test is the module's resource_name_prefix:
export CONSUL_HTTP_TOKEN="<your bootstrap token>"
cat << EOF > consul-servers-policy.hcl
node_prefix "test-consul-server-vm" {
  policy = "write"
}
operator = "write"
EOF
consul acl policy create -name consul-servers -rules @consul-servers-policy.hcl
consul acl token create -policy-name consul-servers -secret "<your server token in terraform_example_module_consul_acl_server_secret>"
  • Now clients can be configured to connect to the cluster. To provision clients, see the client code in the examples directory.

  • Allow clients to auto-join the cluster by creating a client acl policy while logged into your Consul cluster:

cat << EOF > consul-clients-policy.hcl
node_prefix "test-consul-client-vm" {
  policy = "write"
}
operator = "read"
EOF
consul acl policy create -name consul-clients -rules @consul-clients-policy.hcl
consul acl token create -policy-name consul-clients -secret "<your client token in terraform_example_module_consul_acl_client_secret>"
# Once you've finished creating ACL policies, unset the initial management token
unset CONSUL_HTTP_TOKEN
  • To check the status of your Consul cluster, run the list-peers command:
consul operator raft list-peers

License

This code is released under the Mozilla Public License 2.0. Please see LICENSE for more details.