Occasional blog posts from a random systems engineer

Blog - MattBits


Whilst implementing basic end-to-end tests for Terrarun (https://github.com/matthewJohn/terrarun), an open source Terraform Cloud alternative, I needed to deploy the HashiCorp Terraform Cloud agent. The agent (as well as the Terraform Cloud Terraform provider) requires a trusted SSL certificate to interact with the server correctly. I’ve deployed various solutions in the past, such as automated distribution of Let’s Encrypt certificates and HashiCorp Vault for generating complex PKI setups, but had not had to deal with generating CA certs and server certificates in a fully automated way.
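As a rough sketch of the kind of automation involved (not necessarily the approach used for Terrarun), Terraform's own tls provider can mint a throwaway CA and a server certificate signed by it. The hostname, validity period and resource names below are illustrative assumptions:

```hcl
# Private key and self-signed CA certificate
resource "tls_private_key" "ca" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "tls_self_signed_cert" "ca" {
  private_key_pem       = tls_private_key.ca.private_key_pem
  is_ca_certificate     = true
  validity_period_hours = 24 # short-lived: only needed for the test run
  allowed_uses          = ["cert_signing", "crl_signing"]

  subject {
    common_name = "terrarun-e2e-ca" # illustrative name
  }
}

# Server key and CSR for the test endpoint (assumed hostname)
resource "tls_private_key" "server" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_cert_request" "server" {
  private_key_pem = tls_private_key.server.private_key_pem
  dns_names       = ["terrarun.example.com"]

  subject {
    common_name = "terrarun.example.com"
  }
}

# Server certificate signed by the throwaway CA
resource "tls_locally_signed_cert" "server" {
  cert_request_pem      = tls_cert_request.server.cert_request_pem
  ca_private_key_pem    = tls_private_key.ca.private_key_pem
  ca_cert_pem           = tls_self_signed_cert.ca.cert_pem
  validity_period_hours = 24
  allowed_uses          = ["digital_signature", "key_encipherment", "server_auth"]
}
```

The CA certificate would then need to be added to the agent container's trust store so that it accepts the server certificate presented by the Terrarun endpoint.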

Gitlab Pipeline Templates For a while, I have built custom pipelines for my Gitlab projects, each starting from scratch and facing the same issues. With the introduction of a new platform for running services, I needed to: use custom versions of Terraform, inject CA certificates into each container, use a replacement Docker registry (which now used authentication provided by Vault), and authenticate to Vault for Terraform. I created a test application deployment, using base Docker images, which resulted in more boiler-plate code than actual deployment logic, so I tested out the official Gitlab Terraform templates (https://gitlab.
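As an illustrative sketch only (the registry address, CA variable and image tag are assumptions, and depending on how the template defines its image the override may need to be done per job rather than globally), a project pipeline can pull in the official template and layer those needs on top:

```yaml
include:
  # Official GitLab-maintained Terraform base template
  - template: Terraform/Base.gitlab-ci.yml

# Assumed internal registry hosting a custom Terraform image;
# top-level keys in the including file take precedence over the template's
image: registry.internal.example.com/infra/terraform:1.5.7

default:
  before_script:
    # Assumed CI/CD variable holding an internally-trusted CA certificate
    - echo "$INTERNAL_CA_CERT" > /usr/local/share/ca-certificates/internal-ca.crt
    # Works on Debian/Alpine-based images that ship the ca-certificates package
    - update-ca-certificates
```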

Pre-amble This blog post was written in-flight during a quest to create a secure deployment mechanism for Terraform projects to Vault, Consul and Nomad. The beginning portion was written whilst attempting to use a technique that ended up failing. Feel free to skip this portion and jump to “Using JWT authentication”. Intro For the HashiCorp stack of my homelab, I have: a Vault cluster, a Consul cluster (single DC), Nomad servers, and Nomad clients spread across multiple datacenters. An offline root CA, an intermediate CA and the Terraform state are stored/managed by Minio (a local S3-compatible alternative) and a separate, isolated Vault cluster.
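For a flavour of where "Using JWT authentication" ends up, and purely as a sketch (assuming the JWTs are issued by a GitLab instance; the mount path, URLs, claims and policy names are illustrative, not the post's exact configuration), Vault's JWT auth method can itself be managed with the Vault Terraform provider:

```hcl
# JWT auth method pointed at an assumed GitLab instance's JWKS endpoint
resource "vault_jwt_auth_backend" "gitlab" {
  path         = "jwt"
  type         = "jwt"
  jwks_url     = "https://gitlab.example.com/-/jwks"
  bound_issuer = "https://gitlab.example.com"
}

# Role restricting which project's CI jobs may log in, and what they get
resource "vault_jwt_auth_backend_role" "terraform_deploy" {
  backend    = vault_jwt_auth_backend.gitlab.path
  role_name  = "terraform-deploy"
  role_type  = "jwt"
  user_claim = "user_login"

  bound_claims = {
    project_path = "homelab/terraform-deployments" # hypothetical project
  }

  token_policies = ["terraform-deploy"] # hypothetical policy
  token_ttl      = 3600
}
```

A CI job can then exchange its job JWT for a short-lived Vault token against that role, rather than carrying a long-lived secret.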

Background I maintain a handful of open source projects - most of which are of no interest to anyone. There are one or two, however, that have a small handful of users - and also a small number of contributors. Because of this, I spend a very inconsistent amount of time on each of these, occasionally fixing bugs and occasionally spending an hour or two a day for a week to get a feature done.

AWS offers a broad variety of services - some of which are essential and unavoidable when using Amazon as a cloud provider. Others, on the other hand, provide solutions that, whilst great as a quick start on the surface, do not scale. By scale, I don’t mean in the usual compute-power sense - I mean in cost. If you use a service that provides scalability for a steady and predictable workload, then you are paying for flexibility that you aren’t using - and you do pay for it.

Goals I started off with a basic goal - host a small Python website, which uses a MySQL-like database, on a highly available cluster. After recently moving from a rack in a datacenter (and fortunately saving ~£250/month), I started looking at hosting some web applications on VPSs from a couple of providers. Of course, this means: every instance costs money - I can no longer spin up 10 extra VMs because the hardware is already running… each instance costs.

Tinc - Mesh VPN

Tinc appears to be one of the few open source mesh VPNs and, in my experience, once working, performs incredibly well. That said, the configuration of tinc is a little clunky and repetitive, and its documentation doesn’t give much of a clue as to what is required for a minimal setup. Example The following example will connect 4 machines in a tinc mesh network: one machine has a direct internet connection (and a public IP address); two are behind a NAT gateway, with the tinc port being forwarded to one of the nodes; one machine is behind a different NAT gateway and, again, unable to forward ports. A minimal configuration for the publicly reachable node is sketched below.
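As a rough idea of what "minimal" looks like for tinc 1.0 (the network name, node names, addresses and subnets here are illustrative, not the exact values from the example), the publicly reachable node might carry something like this under /etc/tinc/mesh/:

```
# /etc/tinc/mesh/tinc.conf on the public node ("node1")
Name = node1
AddressFamily = ipv4
Interface = tun0

# /etc/tinc/mesh/hosts/node1 - this file is copied to every other node
Address = 203.0.113.10   # public IP, so other nodes can connect to it
Subnet = 10.10.0.1/32    # VPN address owned by this node
# (RSA public key appended here by `tincd -n mesh -K`)

# /etc/tinc/mesh/tinc-up - executable script that configures the VPN interface
ip addr add 10.10.0.1/24 dev $INTERFACE
ip link set $INTERFACE up
```

The NAT'd nodes use the same layout, but add `ConnectTo = node1` to their tinc.conf and omit the Address line from their own host files, since they cannot accept incoming connections.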