Set up a minimal nebula overlay network

June 4, 2024

Nebula is a tool for making overlay networks. I haven’t used an overlay network before (I’ve been using “regular” VPNs), but I think the overlay network concept will unlock some new convenience and functionality for me. After a brief survey of the available overlay network tools, Nebula seemed the most approachable. This is how I set up a minimal Nebula network, on Ubuntu Linux.

This is based on the official howto, How to create your first overlay network.

The steps are as follows:

  • Download the software
  • Create the certificate authority
  • Set up a lighthouse
  • Set up a non-lighthouse node
  • Run the nebula software as a system service (on Ubuntu or other Linux using systemd)
  • Add another node to the existing overlay network

Download the software

Head over to the Nebula releases page and find the appropriate package for your system. I’m getting nebula-linux-amd64.tar.gz; replace the linux-amd64 part with whatever matches your system.
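
If you’re not sure which one that is, uname will tell you your kernel and architecture (x86_64 corresponds to amd64):

# print the kernel name and machine architecture, e.g. "Linux x86_64"
uname -sm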

I put the binaries in /usr/local/bin because that seems like the right place for this sort of thing. You might prefer to put them elsewhere.

wget https://github.com/slackhq/nebula/releases/download/v1.9.0/nebula-linux-amd64.tar.gz
tar -xzf nebula-linux-amd64.tar.gz
sudo mv nebula /usr/local/bin
sudo mv nebula-cert /usr/local/bin
rm nebula-linux-amd64.tar.gz

You can also download the sample configuration file and read through it:

wget https://raw.githubusercontent.com/slackhq/nebula/master/examples/config.yml

Create the certificate authority

Each host in your overlay network will need a certificate for authentication purposes. To make these certificates, you’ll need a certificate authority, which you can create with the nebula-cert tool. I’m using the optional -encrypt flag, which adds a little extra protection to your private key by encrypting it and requiring a password each time you use it. Also, replace “My Organization Name” with the name of your organization, or your name, or whatever.

Create the certificate authority:

nebula-cert ca -encrypt -name "My Organization Name"

The output of this command will be two files: ca.crt and ca.key. Each host in your overlay network will get a copy of ca.crt. The ca.key file should be kept absolutely secret (maybe offline), as it is used to sign the certificates that authenticate your hosts on the network.
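
If you want to double-check what you just created (the name, the expiry date, and so on), nebula-cert can print the certificate details:

nebula-cert print -path ca.crt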

Your certificate authority will expire in a year, so you’ll have to rotate the certificates periodically. I’ll write about that at some point, but for now see: How to rotate to a new certificate authority

Set up a lighthouse

You’ll need to pick an IP address range for your overlay network. It should be in one of the reserved ranges for private networks, and it’s important that it doesn’t overlap with the address ranges of the networks your devices are already on. So, for example, if your local network is 192.168.1.0/24, you don’t want to use that range for your overlay network. For the purpose of this guide I’ll use 10.100.100.0/24.
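
If you’re not sure which ranges your machines are already on, the ip tool (standard on modern Linux) will show you:

# list this machine's IPv4 addresses in brief form
ip -4 -br addr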

You’ll also need your lighthouse to be reachable over the internet. Ideally it has a static public IP address (which you can refer to by IP or by DNS name), but a dynamic IP with some dynamic DNS facility might be good enough (it’s good enough for me).
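
If you don’t know your lighthouse’s current public IP, one quick way to check is to ask an external service (this uses the third-party ifconfig.me, so substitute whatever you trust):

# print the public IPv4 address this machine appears as
curl -4 ifconfig.me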

We’ll give the lighthouse the address 10.100.100.1/24.

Create the certificate for the lighthouse using the CA you created in the previous step (do this in the directory with your ca.crt and ca.key):

nebula-cert sign -name "mylighthouse" -ip "10.100.100.1/24"

You now have mylighthouse.crt and mylighthouse.key.

Create a config.yml with the following contents:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/mylighthouse.crt
  key: /etc/nebula/mylighthouse.key
# a lighthouse doesn't need any static host mappings of its own,
# so this section is left empty
static_host_map:
lighthouse:
  am_lighthouse: true
  # a lighthouse doesn't query other lighthouses, so hosts also stays empty
  hosts:
listen:
  host: 0.0.0.0
  port: 4242
firewall:
  outbound_action: drop
  inbound_action: drop
  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any
  inbound:
    # Allow icmp between any nebula hosts
    - port: any
      proto: icmp
      host: any

The general consensus seems to be that the right place to put these files is in /etc/nebula (and that’s where the official howto tells us to put them). This makes me a little uncomfortable because of the off chance that, at some point in the future, I will install, via the system package manager, some package that will want that directory and there will be some kind of conflict or clobbering or whatever. An alternative location might be /usr/local/etc/nebula. In any case, the arguments for /etc/nebula are compelling and the chance for a package-manager related conflict remote, so I’m going with /etc/nebula. You should follow your own sensibilities.

On that note, you should move mylighthouse.crt, mylighthouse.key, and config.yml to /etc/nebula (on your lighthouse machine). Also, put a copy of ca.crt (but not ca.key) in there:

sudo mkdir /etc/nebula
sudo mv mylighthouse.crt /etc/nebula
sudo mv mylighthouse.key /etc/nebula
sudo mv config.yml /etc/nebula
sudo cp ca.crt /etc/nebula

We have specified port 4242 as the port the lighthouse will listen on (you can pick a different port if you want). Nebula traffic is UDP, and since the lighthouse needs to be accessible from the Internet, you’ll need to arrange for your various firewalls to let UDP port 4242 through to your lighthouse machine.
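
For example, if the lighthouse machine itself runs ufw, opening the port might look like this (adapt to whatever firewall you actually use):

sudo ufw allow 4242/udp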

Now start up the lighthouse:

sudo nebula -config /etc/nebula/config.yml

This will take over your terminal for now. Once we know everything is working, we’ll set it up to start as a service.

Set up a non-lighthouse node

We’re going to give the non-lighthouse device the IP address 10.100.100.2/24.

Create another certificate using the same ca:

nebula-cert sign -name "myotherhost" -ip "10.100.100.2/24"

Copy myotherhost.crt, myotherhost.key, and ca.crt (but not ca.key) to /etc/nebula on your non-lighthouse device.
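
If you created the certificates on a different machine, scp will get them over. A sketch, assuming SSH access and a hypothetical hostname:

# copy the cert, key, and CA cert to the other machine's home directory
scp myotherhost.crt myotherhost.key ca.crt user@myotherhost.example.com:~

# then, on the other machine:
sudo mkdir -p /etc/nebula
sudo mv ~/myotherhost.crt ~/myotherhost.key ~/ca.crt /etc/nebula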

Download the nebula software on your other device and put it in /usr/local/bin as before (you’ll only need nebula, not nebula-cert).

Create config.yml in /etc/nebula (on the non-lighthouse machine) with the following contents:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/myotherhost.crt
  key: /etc/nebula/myotherhost.key
# this maps the nebula address of your lighthouse to the internet address
# of your lighthouse. Replace "internet.address.of.your.lighthouse"
# with the public ip or dns name of your lighthouse on the internet
static_host_map:
  "10.100.100.1": ["internet.address.of.your.lighthouse:4242"]
lighthouse:
  am_lighthouse: false
  interval: 60
  # under hosts we need the nebula ip of the lighthouse
  hosts:
    - "10.100.100.1"
listen:
  host: 0.0.0.0
  port: 0
firewall:
  outbound_action: drop
  inbound_action: drop
  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any
  inbound:
    # Allow icmp between any nebula hosts
    - port: any
      proto: icmp
      host: any

Make sure to replace “internet.address.of.your.lighthouse:4242” with the actual ip address or dns name and port of your lighthouse on the Internet.

Run nebula on the non-lighthouse device:

sudo nebula -config /etc/nebula/config.yml

You should see some log output in your lighthouse terminal as this node registers itself with the lighthouse.

Test that the overlay network works by pinging the nebula IP address of the lighthouse (10.100.100.1) from the non-lighthouse node.

From the lighthouse node, you should likewise be able to ping the other node at its nebula IP address (10.100.100.2).
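
For example:

# on the non-lighthouse node
ping 10.100.100.1

# on the lighthouse
ping 10.100.100.2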

Run nebula as a systemd service

If you’re not on a Linux that uses systemd, go check out the nebula service scripts on Github and find the thing that matches your situation.

If you’re on systemd, use this configuration (taken directly from that GitHub repo). You should download or copy the current one from GitHub in case there have been improvements since I wrote this, but I’m pasting the text here in case something happens to the GitHub copy (this should go in a file called nebula.service):

[Unit]
Description=Nebula overlay networking tool
Wants=basic.target network-online.target nss-lookup.target time-sync.target
After=basic.target network.target network-online.target
Before=sshd.service

[Service]
Type=notify
NotifyAccess=main
SyslogIdentifier=nebula
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target

If you put your binaries or config in a different place than I did, you’ll have to update the ExecStart line in the service file. Put the nebula.service file in /etc/systemd/system:

wget https://raw.githubusercontent.com/slackhq/nebula/master/examples/service_scripts/nebula.service
sudo mv nebula.service /etc/systemd/system

Install that service file on both of your nebula nodes.

You can kill the nebula processes running in your terminals and start the nebula services like so:

sudo service nebula start
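
You can check that the service came up cleanly, and follow its logs, like so:

sudo systemctl status nebula.service
sudo journalctl -u nebula -f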

If you want the nebula service to start each time you start up the machine, enable it:

sudo systemctl enable nebula.service
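
If you’d rather start and enable the service in one step, systemctl can do both at once:

sudo systemctl enable --now nebula.service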

Add another node to the existing nebula network

To add another (Linux) node, simply follow the steps for adding a non-lighthouse node:

  1. Download the binaries and put them in /usr/local/bin as before.
  2. Create a new certificate and key (nebula-cert sign -name "mysecondhost" -ip "10.100.100.3/24") with a new IP address. You’ll do this on whichever machine holds your ca.key, and then copy them over.
  3. Make a copy of the non-lighthouse config file and update the file names in the pki section if necessary. You shouldn’t need to change anything else. You also don’t need to tell any of the other nodes (including the lighthouse) about this new node.
  4. Put your config.yml, your new certificate, your new key, and a copy of ca.crt into /etc/nebula.
  5. Install the service script.
  6. Start the service.
  7. Ping the new node from the other nodes to make sure everything works.

Next Steps

All we can do with the above setup is ping from one node to another, which isn’t really that useful. The next obvious step is to add some more inbound firewall rules. For example, to allow https and ssh in, you might change the firewall section to something like this:

firewall:
  outbound_action: drop
  inbound_action: drop
  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any
  inbound:
    # Allow icmp between any nebula hosts
    - port: any
      proto: icmp
      host: any

    # Allow https in from any nebula host
    - port: 443
      proto: tcp
      host: any

    # Allow ssh in from any nebula host
    - port: 22
      proto: tcp
      host: any
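
After editing config.yml, restart the service so the new rules take effect (the service file above also wires up ExecReload, so a reload may be enough for changes nebula can apply on the fly):

sudo systemctl restart nebula.service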

I hope you found this to be helpful.