Dynamic Terraform Provider

I wanted to create a proof-of-concept: a simple method of storing and retrieving data:

User X --push--> Database --read--> User Y

This would be used as a sort of dictionary for users to query information published by others. The exact nature and source of the data made Terraform an obvious choice. I explored the most basic option - a basic boiler-plated HTTP call:

data "http" "example" { url = "https://my-db.
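As a minimal sketch of that pattern (the URL and output name here are hypothetical placeholders, assuming the hashicorp/http provider v3+, where the response is exposed as `response_body`):

```hcl
# Sketch: reading a published value over HTTP with the http data source.
# "https://example.com/db/greeting.json" is a placeholder endpoint.
data "http" "example" {
  url = "https://example.com/db/greeting.json"

  request_headers = {
    Accept = "application/json"
  }
}

output "greeting" {
  # response_body is the raw response; decode it if the endpoint returns JSON
  value = jsondecode(data.http.example.response_body)
}
```

Older versions of the provider exposed the response as `body` instead of `response_body`.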
Blog - MattBits
For Christmas last year, I wanted to make a present for my Mum. My parents had recently had solar panels installed, along with a battery. The way the system worked, so I was told, was that the solar power would: primarily power the house; then charge the battery; once full, heat water for the hot water tank. She wanted to be more energy efficient, but knowing when to use the dishwasher and when to have a shower could be complex.
Traefik’s ConsulCatalog plugin provides a defaultRule parameter, which is applied by default to exposed services. The docs (https://doc.traefik.io/traefik/providers/consul-catalog/#defaultrule) explain: For a given service, if no routing rule was defined by a tag, it is defined by this defaultRule instead. The defaultRule must be set to a valid Go template, and can include sprig template functions. The service name can be accessed with the Name identifier, and the template has access to all the labels (i.
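As a sketch of what that typically looks like (TOML static configuration; example.com is a placeholder domain):

```toml
# Static configuration sketch - the domain is a placeholder.
[providers.consulCatalog]
  # For a Consul service named "whoami" with no rule tag, this template
  # produces the router rule: Host(`whoami.example.com`)
  defaultRule = "Host(`{{ .Name }}.example.com`)"
```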
Whilst implementing basic end-to-end tests for Terrarun (https://github.com/matthewJohn/terrarun), an open source Terraform-cloud alternative, I needed to deploy the Hashicorp Terraform cloud agent. The agent (as well as the Terraform-cloud Terraform provider) requires a trusted SSL certificate to correctly interact with the server. I’ve deployed various solutions in the past, such as automated distribution of Letsencrypt certificates and Hashicorp Vault for generating complex PKI setups, but had never had to generate CA certs and server certificates in a fully automated fashion.
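One fully automated approach is Terraform's own hashicorp/tls provider. A sketch (v4 attribute names; hostnames and validity periods are placeholders, not necessarily the setup from the post):

```hcl
# Generate a CA key and self-signed CA cert, then a server cert signed by it.
resource "tls_private_key" "ca" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "tls_self_signed_cert" "ca" {
  private_key_pem       = tls_private_key.ca.private_key_pem
  is_ca_certificate     = true
  validity_period_hours = 8760
  allowed_uses          = ["cert_signing", "crl_signing"]

  subject {
    common_name = "Example Test CA"
  }
}

resource "tls_private_key" "server" {
  algorithm = "RSA"
}

resource "tls_cert_request" "server" {
  private_key_pem = tls_private_key.server.private_key_pem
  dns_names       = ["terrarun.example.com"]

  subject {
    common_name = "terrarun.example.com"
  }
}

resource "tls_locally_signed_cert" "server" {
  cert_request_pem      = tls_cert_request.server.cert_request_pem
  ca_private_key_pem    = tls_private_key.ca.private_key_pem
  ca_cert_pem           = tls_self_signed_cert.ca.cert_pem
  validity_period_hours = 720
  allowed_uses          = ["key_encipherment", "digital_signature", "server_auth"]
}
```

The CA cert (`tls_self_signed_cert.ca.cert_pem`) can then be distributed to anything that needs to trust the server.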
Gitlab Pipeline Templates

For a while, I have built custom pipelines for my Gitlab projects, each starting from scratch and facing the same issues. With the introduction of a new platform for running services, I needed to:

- use custom versions of Terraform
- inject CA certificates into each container
- use a replacement Docker registry, which now required authentication provided by Vault
- authenticate to Vault for Terraform

I created a test application deployment using base Docker images, which resulted in more boiler-plate code than actual deployment logic, so I tested out the official Gitlab Terraform templates (https://gitlab.
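For reference, pulling in one of Gitlab's bundled templates is a one-liner in .gitlab-ci.yml (a sketch; the template name here is the Terraform base template Gitlab ships, which your Gitlab version may have renamed or deprecated):

```yaml
# Sketch: include Gitlab's bundled Terraform base template,
# then override or add jobs on top of it in this file.
include:
  - template: Terraform/Base.gitlab-ci.yml
```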
Pre-amble

This blog post was written in-flight during a quest to create a secure deployment mechanism for Terraform projects to Vault, Consul and Nomad. The beginning portion was written whilst attempting a technique that ended up failing. Feel free to skip this portion and jump to “Using JWT authentication”.

Intro

For the Hashicorp stack of my homelab, I have:

- Vault cluster
- Consul cluster (single DC)
- Nomad servers
- Nomad clients using multiple datacenters

An offline root CA, intermediate CA and Terraform state are stored/managed by Minio (a local S3-compatible alternative) and a separate, isolated Vault cluster.
Background

I maintain a handful of open source projects - most of which are of no interest to anyone. There are one or two, however, that have a small handful of users - and also a small number of contributors. Because of this, I spend a very inconsistent amount of time on each of them, occasionally fixing bugs and occasionally spending an hour or two each day for a week to get a feature done.
AWS offers a broad variety of services - some of which are essential and unavoidable when using Amazon as a cloud provider. Others, on the other hand, provide solutions that, whilst great as a quick start on the surface, do not scale. By scale, I don’t mean in the usual compute-power sense; I mean in cost. If you use a service that provides scalability with a steady and predictable workload, then you are paying for flexibility that you aren’t using - and you do pay for it.
Goals

I started off with a basic goal: host a small Python website, which uses a MySQL-like database, on a highly available cluster. After recently moving out of a rack in a datacenter (and fortunately saving ~£250/month), I started looking at hosting some web applications on VPSs from a couple of providers. Of course, this means every instance costs money - I can no longer spin up 10 extra VMs just because the hardware is already running… each instance costs.
Tinc appears to be one of the few open source mesh VPNs and, in my experience, once working, performs incredibly well. That said, the configuration of tinc is a little clunky and repetitive, and the documentation doesn’t give much of a clue as to what is required for a minimal setup.

Example

The following example will connect 4 machines in a tinc mesh network: one machine has a direct internet connection (and a public IP address); two are behind a NAT gateway, with tinc ports forwarded to one of the nodes; one machine is behind a different NAT gateway and, again, unable to forward ports.
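To give a flavour of the per-node files involved (a sketch; the network name "mynet", node names and addresses are placeholders):

```
# /etc/tinc/mynet/tinc.conf on a NATed node, e.g. "node3"
Name = node3
ConnectTo = node1   # the node with the public address

# /etc/tinc/mynet/hosts/node1 - copied to every node in the mesh
Address = node1.example.com   # only directly reachable nodes need an Address
Subnet = 10.0.0.1/32
# ...followed by node1's public key, generated with: tincd -n mynet -K
```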