
Docker Deployment

This content has moved to a new repository: https://github.com/hashicorp-education/learn-boundary-event-logging

This directory contains an example deployment of Boundary using docker-compose and Terraform. The lab environment is meant to accompany the HashiCorp Learn Boundary event logging tutorial.

In this example, Boundary is deployed using the hashicorp/boundary Docker Hub image. The Boundary service ports are forwarded to the host machine to mimic being on a "public" network.

This deployment includes the following containers:

  • boundary controller
  • boundary worker
  • boundary db (postgres)
  • elasticsearch
  • kibana
  • filebeat
  • postgres target
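
As a rough sketch of how those containers fit together, the topology corresponds to something like the skeleton below. The service names, images, and ports here are illustrative assumptions; compose/docker-compose.yml in the repo is authoritative.

```yaml
# Illustrative skeleton only — names, images, and ports are assumptions;
# see compose/docker-compose.yml for the real definitions.
services:
  boundary-controller:
    image: hashicorp/boundary
    ports: ["9200:9200"]      # API/UI forwarded to the host
  boundary-worker:
    image: hashicorp/boundary
    ports: ["9202:9202"]      # worker proxy port (Boundary's default)
  boundary-db:
    image: postgres           # Boundary's state database
  postgres-target:
    image: postgres           # the database users connect to via Boundary
  elasticsearch:
    image: elasticsearch      # tag omitted here
  kibana:
    image: kibana             # tag omitted here
    ports: ["5601:5601"]
  filebeat:
    image: elastic/filebeat   # tag omitted here
```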

Huge thanks to @tmessi for building the Kibana integration components.

Getting Started

There is a helper script called deploy in this directory. You can use this script to deploy the environment, log in, and clean up.

Start the docker-compose deployment:

./deploy all

To log in with the Boundary CLI:

./deploy login

To stop all containers and start from scratch:

./deploy cleanup
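
Taken together, the helper's dispatch logic might look like the following sketch. The subcommand names come from above, but the underlying commands are assumptions; the repo's deploy script is authoritative.

```shell
#!/bin/sh
# Hypothetical sketch of the deploy helper's dispatch logic.
# run() echoes instead of executing, so this sketch is side-effect free;
# the docker-compose flags below are assumptions.
run() { echo "+ $*"; }

deploy_all()     { run docker-compose up -d; }
deploy_login()   { run boundary authenticate password -login-name user1; }
deploy_cleanup() { run docker-compose down -v; }

if [ $# -gt 0 ]; then
  case "$1" in
    all)     deploy_all ;;
    login)   deploy_login ;;
    cleanup) deploy_cleanup ;;
    *)       echo "usage: $0 {all|login|cleanup}" >&2; exit 1 ;;
  esac
fi
```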

Log in to the UI:

  • Open a browser to http://localhost:9200
  • Login Name: user1
  • Password: password
  • Auth method ID: shown in the UI when selecting the auth method, or in the Terraform output

Alternatively, authenticate with the CLI:

$ boundary authenticate password -login-name user1 -password password -auth-method-id <get_from_console_or_tf>

Authentication information:
  Account ID:      apw_gAE1rrpnG2
  Auth Method ID:  ampw_Qrwp0l7UH4
  Expiration Time: Fri, 06 Nov 2020 07:17:01 PST
  Token:           at_NXiLK0izep_s14YkrMC6A4MajKyPekeqTTyqoFSg3cytC4cP8sssBRe5R8cXoerLkG7vmRYAY5q1Ksfew3JcxWSevNosoKarbkWABuBWPWZyQeUM1iEoFcz6uXLEyn1uVSKek7g9omERHrFs
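
For scripted use, the login can also be made non-interactive. The flags below are standard Boundary CLI flags, but this exact invocation is an assumption, as is the $AUTH_METHOD_ID variable:

```shell
# Hypothetical non-interactive login; set BOUNDARY_PASS and
# AUTH_METHOD_ID (from the UI or Terraform output) first.
export BOUNDARY_ADDR=http://localhost:9200
boundary authenticate password \
  -login-name user1 \
  -password env://BOUNDARY_PASS \
  -auth-method-id "$AUTH_METHOD_ID" \
  -format json
```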

Audit logs and ELK

The Boundary controller is configured to write audit events to a log file, auditlogs/controller.log. The docker-compose.yml provides services for collecting these logs and shipping them to Elasticsearch, with Kibana for visualizing the audit events.
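
Boundary's file-sink event configuration generally looks like the stanza below. This is a sketch based on Boundary's documented events syntax, not copied from this repo's controller configuration:

```hcl
# Sketch of a controller events stanza; paths match this deployment,
# but the real stanza lives in the repo's controller configuration.
events {
  audit_enabled = true
  sink "file" {
    name        = "audit-sink"
    event_types = ["audit"]
    format      = "cloudevents-json"
    file {
      path      = "/auditlogs"
      file_name = "controller.log"
    }
  }
}
```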

The deploy script changes the permissions on the auditlogs/ directory:

$ chmod 777 ./auditlogs

Note: You might need to increase system limits for Elasticsearch (see the Elasticsearch virtual memory documentation for details):

$ sudo sysctl -w vm.max_map_count=262144 
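
The sysctl command above only lasts until reboot. To make the setting persistent, the conventional approach (an addition here, not something the repo's scripts do) is a sysctl drop-in file:

```shell
# Persist vm.max_map_count across reboots via a sysctl.d drop-in
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system
```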

Note: You may also need to change the permissions on the audit log file produced by Boundary:

$ chmod 666 ./auditlogs/controller.log

Once the deployment is healthy, you can log in to Kibana at http://localhost:5601 in a web browser with the username elastic and the password elastic. The $ELASTIC_PASSWORD, $KIBANA_PASSWORD, and $KIBANA_PORT values can be modified in the compose/.env file.
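
For reference, a compose/.env using the defaults described above might look like this. The variable names come from the text; the ELASTIC_PASSWORD value matches the documented login, while the KIBANA_PASSWORD value is an assumption:

```shell
# compose/.env — example values only; check the repo's file for the
# real defaults. KIBANA_PASSWORD here is an assumption.
ELASTIC_PASSWORD=elastic
KIBANA_PASSWORD=elastic
KIBANA_PORT=5601
```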

To start creating visualizations for the data, create a data view in Kibana.

If a data view is not discovered automatically, check the permissions on auditlogs/ and auditlogs/controller.log.
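
The data view can also be created programmatically with Kibana's data views API (available on recent Kibana versions). The filebeat-* index pattern is Filebeat's default naming and, like the data view name and credentials below, an assumption here:

```shell
# Hypothetical: create a "filebeat-*" data view via Kibana's API
curl -u elastic:elastic \
  -X POST "http://localhost:5601/api/data_views/data_view" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"data_view": {"title": "filebeat-*", "name": "boundary-audit-logs"}}'
```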