
Tinc - Mesh VPN


Tinc appears to be one of the few open source mesh VPNs and, in my experience, once working, it performs incredibly well.

That said, the configuration of tinc is a little clunky and repetitive, and the documentation doesn't give much of a clue as to what is required for a minimal setup.

Example

The following example will connect 4 machines in a tinc mesh network:

  • one machine has a direct internet connection and a public IP address;
  • two are behind a NAT gateway, with the tinc port being forwarded to one of the nodes;
  • one machine is behind a different NAT gateway and is unable to forward ports.

Example-specific configurations

In this example, we will name the network tinctest and will use the following subnet for the VPN: 172.16.0.0/29

  • Node 1:
      • Name: alice
      • IP Address: 1.1.1.1
      • Tinc IP: 172.16.0.1
  • Node 2:
      • Name: bob
      • Internal IP: 10.1.1.2
      • External IP (with port forwarding): 2.2.2.2
      • Tinc IP: 172.16.0.2
  • Node 3:
      • Name: cindy
      • Internal IP: 10.1.1.3
      • Tinc IP: 172.16.0.3
  • Node 4:
      • Name: dylan
      • Internal IP: 10.2.1.2
      • Tinc IP: 172.16.0.4

Installation

The following should be performed on all nodes:

#!/bin/bash
# Install tinc package
apt-get update
apt-get install tinc --assume-yes

# Configure base directories for global/node configurations
mkdir -p /etc/tinc/tinctest/hosts/

Initial configuration

Tinc consists of two types of configuration files: a configuration file for the local instance of tinc, and a per-node host file that contains the IP address and public key of each node in the cluster. A couple of helper scripts are also required.

Global config

To create the global configuration for each machine, we must work out which machines will be able to directly connect to each other.

The more machines each node can connect to, the more resilient the mesh will be. For example, if all nodes just connect to node 1 (alice) and alice becomes unavailable, none of the machines will be able to connect to each other.

  • alice will connect to:
      • bob
  • bob will connect to:
      • alice
      • cindy
  • cindy will connect to:
      • alice
      • bob
  • dylan will connect to:
      • alice
      • bob

Although this would work if all nodes simply connected to alice and bob, cindy and bob are on the same network, so we will have them connect to each other's private IPs, since this avoids unnecessary traffic hitting their external gateway.

Armed with this information, let’s create the tinc configurations…

The following configuration will be used for bob:

#!/bin/bash
cat > /etc/tinc/tinctest/tinc.conf <<EOF
Name = bob
AddressFamily = ipv4
Interface = tun1
ConnectTo = alice
ConnectTo = cindy
BindToAddress = 10.1.1.2
EOF

# Create tinc helper scripts for bringing the tun interface up/down
cat > /etc/tinc/tinctest/tinc-up <<'EOF'
#!/bin/sh
ifconfig $INTERFACE 172.16.0.2 netmask 255.255.255.248 # Update the tinc IP for the node and
                                                       # the subnet chosen for the tinc network
EOF

cat > /etc/tinc/tinctest/tinc-down <<'EOF'
#!/bin/sh
ifconfig $INTERFACE down
EOF

chmod +x /etc/tinc/tinctest/tinc-{up,down}

Repeat this on all nodes, updating the Name, BindToAddress and ConnectTo settings, as well as the tinc IP address in the helper scripts.
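For example, dylan sits behind NAT with no port forwarding, so no BindToAddress is needed; a minimal sketch (assuming dylan takes 172.16.0.4, per the list above):

#!/bin/bash
cat > /etc/tinc/tinctest/tinc.conf <<EOF
Name = dylan
AddressFamily = ipv4
Interface = tun1
ConnectTo = alice
ConnectTo = bob
EOF

The tinc-up script for dylan would then use 172.16.0.4 in place of 172.16.0.2.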

Certificate generation/sharing

This is where the tinc configuration gets a bit clunky, especially for ongoing maintenance (adding/removing nodes from the cluster)…

A node-specific configuration must be created on each node.

#!/bin/bash
cat >> /etc/tinc/tinctest/hosts/bob <<'EOF'
# The IP address that other nodes will use to connect to this node.
# For machines that can't be directly accessed (cindy and dylan), remove this line.
Address = 2.2.2.2
# 'Subnet' specifies the subnets that other nodes will route to this node.
# This can be used to route traffic to other networks through this node, e.g. the LAN that the machine is connected to.
# For now, it will just provide routing to the tun interface of the node.
Subnet = 172.16.0.2/32

EOF
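For contrast, a sketch of the host file for a node with no publicly reachable address (assuming cindy takes 172.16.0.3, per the list above):

#!/bin/bash
cat >> /etc/tinc/tinctest/hosts/cindy <<'EOF'
# No Address line - cindy cannot be reached directly from the internet.
Subnet = 172.16.0.3/32
EOF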

Once the configuration for the local node is present, a key-pair can be generated for the node:

#!/bin/bash
# Personally, I stuck with 4096-bit, but this can be changed to whatever you feel comfortable with.
tincd -K4096 -n tinctest
# Accept the default paths: the private key will be put in /etc/tinc/tinctest/rsa_key.priv and the public key will be appended to the host configuration that we created in the previous step.

For a fairly flat configuration (i.e. all nodes connect in the same way, e.g. they all have public IP addresses and all connect to each other), the node configurations can simply be copied to each of the other nodes, so that each node has its own, as well as all other nodes' configurations. However, since some nodes are on the same network behind a NAT, it is preferable for those nodes to connect directly to each other's private IPs.

Once each node's own configuration has been generated, it needs to be copied to all of the other nodes; I personally just tar/un-tar the configurations, since this is easier over an SSH connection:

On each node, perform the following:

#!/bin/bash
cd /etc/tinc/tinctest/hosts
tar -zcvf - ./ | base64 

Once this is performed on all nodes, run the following on each node 3 times (pasting in the output from each of the other nodes):

#!/bin/bash
cd /etc/tinc/tinctest/hosts
base64 -d | tar -zxvf -
# Paste contents from other node and press Ctrl+d

Once complete, each node should have the same 4 files in /etc/tinc/tinctest/hosts.
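A quick sanity check on any node (the expected file names assume the node names above):

#!/bin/bash
ls /etc/tinc/tinctest/hosts
# Should list: alice  bob  cindy  dylan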

To get bob/cindy to directly connect to each other, add Address = 10.1.1.3 to /etc/tinc/tinctest/hosts/cindy on bob, and Address = 10.1.1.2 to /etc/tinc/tinctest/hosts/bob on cindy.
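For example, this can be done with a simple append (an editor works just as well):

#!/bin/bash
# On bob:
echo 'Address = 10.1.1.3' >> /etc/tinc/tinctest/hosts/cindy
# On cindy:
echo 'Address = 10.1.1.2' >> /etc/tinc/tinctest/hosts/bob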

Testing

To start tinc in the foreground with debug output, run the following on each node:

tincd -n tinctest --no-detach --debug=2
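Once tincd is running, the tun interface should be up; a quick check (using ifconfig, as in the helper scripts):

ifconfig tun1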

You should now be able to ping each node from the others:

user@alice:~$ ping 172.16.0.2
PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=35.3 ms
64 bytes from 172.16.0.2: icmp_seq=2 ttl=64 time=36.0 ms
64 bytes from 172.16.0.2: icmp_seq=3 ttl=64 time=32.4 ms
--- 172.16.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 32.405/34.613/36.095/1.606 ms

Personally, I’ve created a small start script, which can be run on boot:

cat > /usr/local/sbin/start_tinc <<EOF
#!/bin/bash

tincd -n tinctest
EOF
chmod +x /usr/local/sbin/start_tinc
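One simple way to have this run at boot (an @reboot cron entry; other init mechanisms work equally well):

#!/bin/bash
# Register the start script in root's crontab to run at every boot
(crontab -l 2>/dev/null; echo '@reboot /usr/local/sbin/start_tinc') | crontab -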

Conclusion

Personally, I have found tinc to be incredibly reliable, fast and durable (when it comes to node failures).

However, the configuration does become annoying when adding new nodes, and is something that I wish to automate at some point. Managing 10 nodes that access each other in different ways means that a lot of the node configurations differ, which requires a little bit of thought whilst writing them.
