Running an Amazon Linux 2 virtual machine on your own hardware
by Morgan Shorter on Fri, 21 Jan 2022

I have a prod test-bed with privileged ports, FQDNs, and production configs running in my home lab. With it, I can test all our apps, proxies, databases, and unattended install scripts without having to deploy a new instance. In this article, I'll explain how I set up an Amazon Linux 2 virtual machine on my headless home server, and how I used WireGuard to elevate that VM from a handy sandbox to a first-class, low-maintenance development tool.
First, how did I get here?
Recently, we switched all of our new infrastructure over from CentOS 7 to Amazon Linux 2. Thankfully, from a maintenance standpoint, this has been almost a non-event. I say almost, because one of the things that we lost when leaving CentOS was the tremendous amount of third-party documentation available for it. Want to run your own VM? You can have your pick of tutorials. Not so for Amazon Linux 2. And, even though Amazon Linux 2 is based on CentOS 7, it's just different enough that following guides and documentation for CentOS will sometimes lead you astray in not-so-obvious ways.
At the outset of the migration, it wasn't readily apparent if images for other hypervisors were even available. Let's not forget that Amazon wants you to rent their VM rather than run your own, so they don't exactly advertise this capability.
The show must go on, and my work soon necessitated brute-forcing a solution.
As part of a project to migrate our internal issue-tracking and CI to a new set of tools, I found myself needing to test an install process of rapidly increasing complexity. To make it work, everything--the new CI tools, Postgres, nginx, cleanup hooks, configuration--would require new install scripts to be committed into our unattended deployment system, Drop. The scripts themselves, which orchestrate dozens of platform-specific dependencies elsewhere in the stack: socket permissions, groups, file ownership, database passwords, and so on, also needed to be well tested. How do you test non-portable prod install scripts without making a mess in prod?
You don't. Not unless you have a near-identical test machine.
Replicating one of our EC2 instances to test on was technically an option, but 1) that wouldn't have been an exactly clean testing environment, and 2) it would have cost us money. Conveniently, I have a small home server that I was already using to test some parts of the migration individually, but it's far from a clean system. It doesn't even run a Red Hat distro, so divination and proofs-of-concept were about as close as I could get to real testing on that machine; hardly confidence-inspiring. And so, I was forced to tackle the VM issue.
Normally, accessing such a VM from a separate development machine involves creating and maintaining a Rube Goldberg contraption of socks proxies, aliases, and the like. Rather than kid myself that I was going to remember how something like that worked, I leveraged WireGuard to sidestep the issue entirely. The addition of a WireGuard VPN to the VM enabled me to connect to it from elsewhere on the network without brittle ad-hoc workarounds, without constantly creating or destroying proxies, and without needing to interrupt my workflow to remember how everything is wired together.
Without a suitable VPN, a port-forwarding rule would need to be created for every port that gets used, and a network tunnel with matching firewall rules would need to be created for every additional machine that needs to connect to the VM; not exactly the most scalable solution. This complexity increases geometrically with the underlying complexity of the networked services running on the VM. I chose WireGuard out of the available VPN tools, because I've found it to consistently be the simplest, most flexible, and most straightforward for whatever I throw at it.
Some of the methods I cover here may be unnecessarily roundabout for many users. These methods, namely editing the VM image before its first boot, are peculiar to my own requirement that the install process be headless. There are easier ways to achieve similar results using the VirtualBox GUI which I won't cover here. However, I have included some troubleshooting tips that utilize the GUI, should the need arise.
An Overview
The way packets get routed through WireGuard and VirtualBox is admittedly complex, so I've included this graph to better illustrate what my network looked like when all was said and done:
Don't worry if the graph isn't obvious right away. It exists to serve as a reference, should the reader have trouble visualizing how everything fits together.
Roughly speaking, there are three parts to making this whole thing work:
- Cooking the .vdi image to perfection
- Configuring VirtualBox to work with the network
- Installing and configuring WireGuard on the VM
Throughout this article, I will refer to the host machine, on which the hypervisor and VM are running, as "Server", and the endpoint machine, from which arbitrary connections to the VM are to be made, as "Laptop", for the sake of clarity. In practice, they can be any type of suitable hardware.
This article assumes a few things about the target systems:
- All the machines used in this exercise are running a decent GNU/Linux distro. This isn't strictly required, but you may have to modify the examples to fit your system.
- WireGuard is already set up on your host and endpoint systems. The examples also assume some familiarity with WireGuard.
- You have the mkpasswd utility or an equivalent available somewhere (I'll get to that later).
- You can get a root shell on both the host and endpoint.
- The host machine is headless, has a CPU that supports virtualization, and has at least 1 GB of RAM to spare.
- You have 50+GB of free disk space for editing the .vdi image
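On the mkpasswd point: if your distro makes it hard to come by, openssl can generate an equivalent SHA-512 crypt hash. This is my own workaround, not part of the original setup, and examplesalt is a throwaway salt for demonstration only:

```shell
# Generate a SHA-512 crypt hash, much like `mkpasswd -m sha-512 PASSWORD`.
# The -6 flag selects SHA-512; in practice, omit -salt to get a random one.
openssl passwd -6 -salt examplesalt PASSWORD
```

The output starts with the $6$ prefix that marks SHA-512 crypt hashes, and can be pasted into the cloud config exactly like mkpasswd output.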
VirtualBox
Installing VirtualBox
Here's an example command to install VirtualBox and its dependencies on an Arch-based system like the one running on my server:
$ pacman -S libvirt virt-install virt-viewer virtualbox \
    virtualbox-guest-utils virtualbox-host-dkms
You will also need WireGuard and mkpasswd. Both are fairly easy to install on most systems if they aren't already available; consult your package manager for details. On most systems, the mkpasswd utility is packaged with whois, but this isn't the case for Arch-based distros. If you need mkpasswd on an Arch-based system, you can get it from the AUR.
Once the requisite tools are installed, go to https://cdn.amazonlinux.com/os-images/latest/ and download the latest VirtualBox (.vdi) image.
Editing The VDI File
To make the magic happen without using the VirtualBox GUI, we're going to (ab)use the cloud config that comes inside the official image. In order to actually edit the relevant files, though, we'll need to deconstruct the image first.

VDI, or Virtual Disk Image (not to be confused with Microsoft's Virtual Desktop Infrastructure), is a file format that allows for de-duplication of zero-filled blocks in a raw disk image, preventing unused disk space within the image from being written back to the host filesystem. Just like a raw disk image, it includes partition tables, partitions, filesystems--the whole nine. VDI is VirtualBox's default disk image format.
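As an aside, a VDI file can be recognized by a magic number. To the best of my knowledge, the header starts with 64 bytes of descriptive text followed by the little-endian signature 0xBEDA107F at byte offset 0x40; treat that value as an assumption if you rely on it. A quick sketch against a synthetic header, not a real image:

```python
import struct

# Assumed VDI magic number at byte offset 0x40 of the header.
VDI_SIGNATURE = 0xBEDA107F

def is_vdi(header: bytes) -> bool:
    """Return True if the buffer begins like a VDI file header."""
    if len(header) < 0x44:
        return False
    (sig,) = struct.unpack_from("<I", header, 0x40)
    return sig == VDI_SIGNATURE

# Synthetic header for demonstration: 64 bytes of text, then the signature.
fake = b"<<< Oracle VM VirtualBox Disk Image >>>".ljust(0x40, b"\x00")
fake += struct.pack("<I", VDI_SIGNATURE)
print(is_vdi(fake))          # True
print(is_vdi(b"\x00" * 68))  # False
```

Handy as a sanity check that a download really is a VDI before feeding it to the tools below.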
VirtualBox gives us a handy tool that mounts these special images through a FUSE (filesystem in userspace) driver, using the following command syntax:
$ vboximg-mount -i $VM_UUID -o allow_root $MOUNTPOINT
This creates a few files in $MOUNTPOINT, each of which is a separate partition. It would seem a reasonable assumption that all one would have to do is mount one of these partitions and start editing files. But that would make too much sense: vboximg-mount uses FUSE to mount a read-only disk image, so any changes to the filesystem it contains (or any other part of the image) won't get written back to disk. For our purposes, this is pretty useless.
There exist third-party tools that claim to be able to edit VDI files in-place, but they all have suspicious-looking download pages, so I decided to avoid them. Fortunately, VirtualBox does come with a tool for dumping a raw disk image from a VDI file. This is where the 50 GB of free disk space comes in: raw disk images aren't de-duplicated, so that petite 2 GB file you downloaded will balloon to its actual size of 25 GB. You may want to double check that your disk(s) have sufficient free space with df -h before proceeding.
To dump the raw image, use the following command and syntax:
$ vboxmanage clonemedium --format RAW $VDI_FILE.vdi $RAW_IMAGE.img
In my case, this was:
$ vboxmanage clonemedium --format RAW \
    amzn2-virtualbox-2.0.20211005.0-x86_64.xfs.gpt.vdi /tmp/aml2.img
That command will error if there is insufficient disk space.
If you didn't run out of disk or have some other error, you should get some output similar to this:
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone medium created in format 'RAW'. UUID: e60d412f-f0aa-47be-a7be-fc0ba841baf3
Running fdisk on the raw image that was just created will give us some necessary info.
$ fdisk -l /tmp/aml2.img
Disk /tmp/aml2.img: 25 GiB, 26843545600 bytes, 52428800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A25D5C3B-C384-4FD8-AC66-8E0AE36EF9DF

Device             Start      End  Sectors Size Type
/tmp/aml2.img1      4096 52428766 52424671  25G Linux filesystem
/tmp/aml2.img128    2048     4095     2048   1M BIOS boot

Partition table entries are not in disk order.
We can see that it has a valid GPT with two partitions. Raw block devices aren't mountable on their own, so we'll need to extract the partition we want with disk dump (dd). We're going for the 25G root partition. We'll use dd's skip option to skip over everything before that partition's first sector of input; the starting sector of each partition can be found in the Start column of the previous fdisk output.

GPT doesn't allow partitions to extend all the way to the end of the block device, because it stores its backup partition table there. Consequently, we need to tell disk dump the exact number of sectors we want to snarf from input. The Sectors column of the fdisk output shows how many sectors are in each partition, so we'll tell disk dump to count 52424671 sectors of input before stopping.
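To double-check those numbers, the sector arithmetic works out like this (sector values copied from the fdisk output above):

```python
SECTOR_SIZE = 512           # logical sector size reported by fdisk
start_sector = 4096         # "Start" column of the root partition
sector_count = 52424671     # "Sectors" column of the root partition

# dd's skip/count options are in sectors; these are the equivalent bytes.
byte_offset = start_sector * SECTOR_SIZE
byte_length = sector_count * SECTOR_SIZE
print(byte_offset)   # 2097152: where the partition begins in the image
print(byte_length)   # 26841431552: total bytes dd should copy
```

The second number should match the byte count dd reports when the copy finishes.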
The original image filename also gives us a useful hint: the x86_64.xfs.gpt suffix tells us we're dealing with an x86-64 image carrying an XFS root filesystem on a GPT disk, which we'll need to know when mounting the extracted partition.
$ dd if=/tmp/aml2.img of=/tmp/aml2.xfs skip=4096 count=52424671 status=progress
26840399872 bytes (27 GB, 25 GiB) copied, 441 s, 60.9 MB/s
52424671+0 records in
52424671+0 records out
26841431552 bytes (27 GB, 25 GiB) copied, 440.714 s, 60.9 MB/s
We finally have a writable filesystem image, so let's mount it and get to work.
I used a root shell and a temporary directory, tmpmnt/, as the mount point:

$ sudo -Es
$ mount -t xfs -o rw /tmp/aml2.xfs tmpmnt/
$ cd tmpmnt
Let's add authentication data to the cloud config so we can use SSH and sudo. We'll need a password hash, an SSH public key from the Laptop, and an SSH public key from the Server.
# On the Server
$ cat .ssh/vmhost_ed25519.pub
ssh-ed25519 SERVER-PUBLIC-KEY VM Host

# On the Laptop
$ cat .ssh/aml2-endpoint_ed25519.pub
ssh-ed25519 LAPTOP-PUBLIC-KEY Endpoint PC

# note: PASSWORD is the cleartext password here.
$ mkpasswd -m sha-512 PASSWORD
PASSWORD_HASH
Open the cloud config file in your text editor to add the keys and password hash.
Edit this section of the cloud config:
system_info:
  default_user:
    name: ec2-user
    lock_passwd: false
    gecos: EC2 Default User
    groups: [wheel, adm, systemd-journal]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  distro: amazon
  paths:
    cloud_dir: /var/lib/cloud
    templates_dir: /etc/cloud/templates
  ssh_svcname: sshd
so that it looks like this:
system_info:
  default_user:
    name: ec2-user
    lock_passwd: false
    gecos: EC2 Default User
    groups: [wheel, adm, systemd-journal]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
    passwd: PASSWORD_HASH
    ssh_authorized_keys:
      - ssh-ed25519 SERVER-PUBLIC-KEY VM Host
      - ssh-ed25519 LAPTOP-PUBLIC-KEY Endpoint PC
  distro: amazon
  paths:
    cloud_dir: /var/lib/cloud
    templates_dir: /etc/cloud/templates
  ssh_svcname: sshd
Adding those SSH public keys will allow inbound SSH connections to authenticate as ec2-user using either of the corresponding private keys. Astute readers may have noticed that we don't actually need to set the password to use sudo because of that NOPASSWD directive, but it's a good thing to have in your back pocket in case you need to perform some other action that requires authentication.
Now we can exit that directory, unmount the filesystem, and exit the root shell.
$ cd
$ umount tmpmnt
$ exit
To put the modified partition back inside the disk image we'll need disk dump again. Note that we're using seek instead of skip here, since we're skipping sectors on the output file rather than the input. Don't forget to use conv=notrunc here so dd doesn't truncate the output file and destroy the backup partition table.
$ dd of=/tmp/aml2.img if=/tmp/aml2.xfs seek=4096 status=progress conv=notrunc
26841010688 bytes (27 GB, 25 GiB) copied, 1093 s, 24.6 MB/s
52424671+0 records in
52424671+0 records out
26841431552 bytes (27 GB, 25 GiB) copied, 1093.03 s, 24.6 MB/s
This is a good opportunity to double check our partition table to ensure we didn't make any errors.
$ fdisk -l /tmp/aml2.img
Disk /tmp/aml2.img: 25 GiB, 26843545600 bytes, 52428800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A25D5C3B-C384-4FD8-AC66-8E0AE36EF9DF

Device             Start      End  Sectors Size Type
/tmp/aml2.img1      4096 52428766 52424671  25G Linux filesystem
/tmp/aml2.img128    2048     4095     2048   1M BIOS boot

Partition table entries are not in disk order.
If that wasn't the output you got, you might be in trouble...
Time to put the genie back in the bottle. Squashing that disk image back into a VDI file is pretty straightforward, but takes a while:
$ vboxmanage convertfromraw /tmp/aml2.img ./aml2.vdi
Converting from raw image file="/tmp/aml2.img" to file="./aml2.vdi"...
Creating dynamic image with size 26843545600 bytes (25600MB)...
That's it for modifying the disk image. It's now usable for a headless install.
Configuring The VM
The first time I set up a VirtualBox VM, I used sudo and stored my VMs in a root-owned directory under $XDG_CONFIG_HOME, which is not really what we want. Running everything as root isn't necessary here: we're not binding to any low ports on the Server. Additionally, it's safer to run VMs as a less privileged user.
To install the VM in the home of a regular user, login as that user and:
$ mkdir vbox
$ vboxmanage setproperty machinefolder $PWD/vbox
$ vboxmanage createvm --name aml2-demo --register
Virtual machine 'aml2-demo' is created and registered.
UUID: 06c3542d-47e6-46f4-ba68-f2c86a105ce9
Settings file: '/home/username/vbox/aml2-demo/aml2-demo.vbox'
This designates vbox as the directory under which subdirectories named after their corresponding VMs will be created upon VM registration. Within each of those subdirectories, VirtualBox creates an XML file containing all configuration settings for that VM.
It's time to add some configuration to the VM using the modifyvm command. We'll limit it to one CPU with the --cpus 1 flag, specify the maximum amount of RAM (in megabytes) that we'll allow the VM to use with --memory 1024, and similarly limit the video RAM to a maximum of 128 MB with --vram 128.
Setting --acpi on will allow us to suspend and power down the VM from the VirtualBox CLI. To make the virtual network devices on the VM accessible from the Server, we'll create a NIC and set it to use VirtualBox's builtin NAT and DHCP with --nic1 nat. We'll set this NIC to be activated automatically whenever the VM is powered on with --cableconnected1 on. There are myriad ways to create virtual NICs for VMs under VirtualBox, but this is the simplest for our purposes. If you have a bunch of other VMs and plan on emulating something more complex, e.g., a VPC, you'll need to consult the VirtualBox documentation for the appropriate configuration. A full list of available NIC types can be found in the VirtualBox documentation.
In order to use SSH and WireGuard, we'll need to set up some port-forwarding rules within VirtualBox. A port-forwarding rule can be set by appending the following flag:
--natpf${NIC_NUMBER} "$RULE_NAME,$PROTOCOL,$SERVER_IP,$SERVER_PORT,$VM_IP,$VM_PORT"
IP addresses that are omitted will be set to 0.0.0.0. If your Server firewall is set up conservatively, this isn't an issue, and since we can't assume we'll know what IP address VirtualBox will give the VM, leaving it out keeps things simple. An example SSH port-forwarding rule could be --natpf1 "ssh,tcp,,8222,,22", where 8222 is the port on your Server that will accept connections for forwarding to the VM. This is the rule I'll use for SSH. Be sure to use a non-privileged, unused port here. You can check whether or not the port is in use with netstat -tulp.
We'll also be forwarding a port for WireGuard with --natpf1 "wg0tun0,udp,,8242,,8242". The same rules apply for WireGuard as do for SSH: the VM port must be the same port that is used by the WireGuard device on the VM, and the Server port must be non-privileged and unused. More than one port forwarding rule may be specified at once by using multiple --natpf$N flags.
Since Amazon Linux 2 is based on CentOS, which is in turn based on Red Hat, we need to set the OS type to RedHat_64.
Finally, the VM needs a block device to boot from. We have a surgically altered VDI image that will stand in for a hard drive, so we'll add --boot1 disk to the modifyvm command. We'll add the actual image file later on.
Here's the whole command put together:
$ vboxmanage modifyvm aml2-demo --cpus 1 --memory 1024 --vram 128 --acpi on \
    --boot1 disk --nic1 nat --cableconnected1 on --natpf1 "ssh,tcp,,8222,,22" \
    --natpf1 "wg0tun0,udp,,8242,,8242" --ostype RedHat_64
Now we need to attach the edited VDI to the VM we've just configured with storagectl and storageattach. We'll name the connection satac and make it a SATA type connection.
$ vboxmanage storagectl aml2-demo --name satac --add sata
$ cp aml2.vdi ./vbox/aml2-demo/
$ vboxmanage storageattach aml2-demo --storagectl satac --port 0 \
    --device 0 --type hdd --medium vbox/aml2-demo/aml2.vdi
All of these settings are also available from the VirtualBox GUI, for readers who do not need the Server to be headless.
If you've botched the configuration of the VM and want to start over, you can delete the VM with vboxmanage unregistervm aml2-demo --delete. It is important to set the --delete flag, or you will end up with a dangling reference that will prevent you from starting over. I found this out the hard way. If you end up doing this, you need to delete the VirtualBox registry, which is stored in $XDG_CONFIG_HOME. Typically, you can just run rm -rv ~/.config/VirtualBox and start over from the beginning of this section.
Starting The VM
The VM can now be powered on with:
$ vboxmanage startvm aml2-demo --type headless
Or:
$ vboxheadless --startvm aml2-demo
Waiting for VM "aml2-demo" to power on...
VM "aml2-demo" has been successfully started.
On first boot, cloud config will set the authorized keys and password for ec2-user to the values that were added to the cloud config file.
After giving the VM a moment to boot and get itself set up, inbound SSH
connections from the Server via the new port-forwarding rule should work.
$ ssh -vv -i .ssh/vmhost_ed25519 -p 8222 ec2-user@localhost

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
No packages needed for security; 4 packages available
Run "sudo yum update" to apply all updates.
[ec2-user@localhost ~]$
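To avoid retyping those flags every time, an entry in the Server's ~/.ssh/config can capture them. This is a convenience of my own, not part of the original setup, and the alias aml2-vm is invented:

```
Host aml2-vm
    HostName localhost
    Port 8222
    User ec2-user
    IdentityFile ~/.ssh/vmhost_ed25519
```

With that in place, plain `ssh aml2-vm` from the Server does the same thing.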
Troubleshooting
If you weren't able to establish an SSH connection from the Server, double check the usual suspects: check your firewall settings, ensure you're using the port that you set in the port-forwarding rule, and check the VM log for clues: less ~/vbox/aml2-demo/Logs/VBox.log.
If you're still stuck, and can attach a display to your Server, you can use the VirtualBox GUI to sort things out manually.
To do so, power off the VM and restart it in GUI mode:
$ vboxmanage controlvm aml2-demo poweroff
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
$ vboxmanage startvm aml2-demo --type gui
This will launch the VirtualBox GUI and connect its virtual display to your VM, giving you a GRUB menu like this:
Press "e" to edit the default boot option. This will allow you to edit the kernel cmdline.
Editing the GRUB config options to add init=/bin/sh to the cmdline will allow you to boot the VM in single-user mode. Hit "Ctrl-X" to boot the edited cmdline and enter single-user mode, where you can edit passwords, configuration files, and anything else that needs fixing.
WireGuard
Installing WireGuard On The VM
The official Amazon Linux 2 package repositories are pretty sparse, so it's necessary to pull in some external repos if you want to install WireGuard. First, enable the EPEL 7 repository:
$ sudo -i    # as root on vm
$ yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Loaded plugins: langpacks, priorities, update-motd
epel-release-latest-7.noarch.rpm                   |  15 kB  00:00:00
Examining /var/tmp/yum-root-WDyQMU/epel-release-latest-7.noarch.rpm: epel-release-7-14.noarch
Marking /var/tmp/yum-root-WDyQMU/epel-release-latest-7.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-14 will be installed
...
Installed:
  epel-release.noarch 0:7-14

Complete!
Then, install the official WireGuard repo for CentOS 7 using the instructions on the WireGuard website:
$ curl -o /etc/yum.repos.d/jdoss-wireguard-epel-7.repo \
    https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   364  100   364    0     0    957      0 --:--:-- --:--:-- --:--:--   955
Finally, install the WireGuard kernel drivers and userland tools from the WireGuard repo.
$ yum install wireguard-dkms wireguard-tools
Loaded plugins: langpacks, priorities, update-motd
amzn2-core
...
--> Running transaction check
...
Since Amazon Linux 2 doesn't ship with package-signing keys for the WireGuard repo pre-installed, yum will ask some questions about importing public keys to its keyring. Answer "yes" to those questions to continue the install.
...
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7)"
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-14.noarch (installed)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
...
Installed:
  wireguard-dkms.noarch 1:1.0.20210606-2.el7
...
Complete!
With installation complete, WireGuard kernel modules and userland tools should now be available.
Configuring WireGuard
To actually use WireGuard on the VM, it needs keys, configuration, and a virtual NIC. The examples below name that NIC wg0, but you can substitute any name you want.
$ cd /etc/wireguard
$ umask 077
$ wg genkey > wg0key
$ wg pubkey < wg0key | tee wg0key.pub
VM_NEW_WIREGUARD_PUBLIC_KEY
That last command will dump the VM's new WireGuard public key to the standard output.
Save this key somewhere; you'll need it later.
Create the wg0 NIC on the VM:
ip link add dev wg0 type wireguard
wg set wg0 private-key /etc/wireguard/wg0key listen-port 8242
ip addr add 10.0.0.42/24 dev wg0
That will create the NIC, bind it to a port (8242) on your VM's main NIC through which it will forward packets, assign it the IP address 10.0.0.42, and allow it to route packets to and from addresses in the 10.0.0.0/24 range.
In order to enable WireGuard connections between the VM and the Laptop, we need to add our Laptop as a peer using the following syntax:
$ wg set $VM_WGDEV_NAME peer $LAPTOP_PUBKEY endpoint \
    $SERVER_VBOX_NAT_IP:$SERVER_WGDEV_LISTEN_PORT allowed-ips $LAPTOP_WGDEV_IP/32
We can get the Laptop's public key by running sudo wg on the Laptop, giving us output that looks something like this:
interface: wg0
  public key: WIREGUARD_PUBLIC_KEY
  private key: (hidden)
  listening port: 77777
Since VirtualBox does its own NAT and DHCP, we'll need the IP address at which the Server is reachable on VirtualBox's NAT network. We can grep the VirtualBox logs to find it.
$ grep -i NAT vbox/aml2-demo/Logs/VBox.log
00:00:00.155386   Driver <string>  = "NAT" (cb=4)
00:00:00.315620 NAT: Guest address guess set to 10.0.2.15 by initialization
00:00:00.315680 NAT: resolv.conf: nameserver ::1
00:00:00.315684 NAT: resolv.conf: nameserver 127.0.0.1
00:00:00.315693 NAT: DNS#0: 10.0.2.2
00:00:00.315720 NAT: Set redirect TCP 0.0.0.0:8222 -> 0.0.0.0:22
00:00:00.315739 NAT: Set redirect UDP 0.0.0.0:8242 -> 0.0.0.0:8242
00:00:10.227905 NAT: Old socket recv size: 128KB
00:00:10.228353 NAT: Old socket send size: 128KB
00:00:34.159829 NAT: Link up
00:00:39.199732 NAT: IPv6 not supported
00:00:40.011811 NAT: resolv.conf: nameserver ::1
00:00:40.011833 NAT: resolv.conf: nameserver 127.0.0.1
00:00:40.011853 NAT: DNS#0: 10.0.2.2
00:00:40.011860 NAT: DHCP offered IP address 10.0.2.15
The Server's VirtualBox IP address is on the lines that say DNS#0. To find the listen-port of WireGuard on the Server, we can run sudo wg on the Server just as we did on the Laptop. In this case, it's 8888.
Finally, to get the IP address assigned to the WireGuard NIC on the Laptop, we can look at the output of ifconfig on the Laptop:
...
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wg0: flags=209<UP,POINTOPOINT,RUNNING,NOARP>  mtu 1420
        inet 10.0.0.7  netmask 255.255.255.0  destination 10.0.0.7
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
...
Putting it all together, we get:
$ wg set wg0 peer WIREGUARD_PUBLIC_KEY endpoint \
    10.0.2.2:8888 allowed-ips 10.0.0.7/32
To persist these settings across boots, we'll save the current settings to a WireGuard config file that can be reloaded by an init script:
$ wg showconf wg0 > /etc/wireguard/wg0.conf
Now we can bring up the WireGuard NIC:
$ ip link set wg0 up;
Next, we'll use the public key that was created on the VM to add the VM as a peer on the Laptop using the same syntax:
$ sudo wg set $LAPTOP_WGDEV_NAME peer $VM_PUBLIC_KEY endpoint \
    $SERVER_WGDEV_IP:8242 allowed-ips $VM_WGDEV_IP/32
To add the VM as a peer and save it to the config file on the Laptop, run:
$ sudo wg set wg0 peer VM_NEW_WIREGUARD_PUBLIC_KEY endpoint \
    10.0.0.8:8242 allowed-ips 10.0.0.42/32
$ sudo wg showconf wg0 > /etc/wireguard/wg0.conf
If everything was configured correctly, you should now be able to open an SSH connection to the VM from the Laptop like so:
$ ssh -i .ssh/aml2-endpoint_ed25519 ec2-user@10.0.0.42
Last login: Mon Nov 29 23:09:02 2021 from 10.0.2.2

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
No packages needed for security; 4 packages available
Run "sudo yum update" to apply all updates.
[ec2-user@localhost ~]$
I say "if" because, by now, we've introduced a significant amount of complexity. The above SSH connection is tunneled through WireGuard over WireGuard via VirtualBox's port-forwarding rules. Errors might not be immediately obvious, and error messages are unlikely to be terribly helpful. If you have trouble here, double check the output of wg and ifconfig on all three systems. Make sure all the ports, IP addresses and ranges, and public keys are correct.
The WireGuard quick-start page
has some informative examples that can help sort out potential issues.
I recommend against using wg-quick or WireGuard init scripts that may have been provided by your distribution, as these will be based on assumptions about your WireGuard setup that do not match this use case and can lead to unintended side-effects.
Configuring Hostnames
Typing IP addresses every time we want to use SSH or test a page in the browser is a drag, so let's set some hostnames.
On the VM, add the new hostname to /etc/hosts:

$ diff -u /etc/hosts
-127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
+127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 aml2.demo
 ::1 localhost6 localhost6.localdomain6
Now add the same hostname, and the IP address of the VM's WireGuard NIC to your Laptop's hosts file:
$ diff -u /etc/hosts
+10.0.0.42 aml2.demo
We can now use that hostname to connect via SSH from the Laptop:
$ ssh -i .ssh/aml2-endpoint_ed25519 ec2-user@aml2.demo
Last login: Tue Nov 30 02:24:00 2021 from 10.0.0.2

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
No packages needed for security; 4 packages available
Run "sudo yum update" to apply all updates.
[ec2-user@localhost ~]$
Since WireGuard acts as a virtual network device, any packets that get sent to that hostname can use normal ports; no more fiddling with binding apps to unprivileged ports, or appending port numbers to URLs in the browser.
Persisting WireGuard Settings Across Boots
To avoid manually setting up the WireGuard NIC every time we power-cycle the VM, init needs to be informed how to initialize it. The following BSD-style init script uses a workaround to wait for other network devices to be initialized by systemd. A more robust solution that makes proper use of systemd services would be desirable for production setups. For now though, we'll just append this to /etc/rc.local:
# Workaround for potential parallel execution issues, because we
# need inet and netifs to be working before we can set up WireGuard.
sleep 15

echo "Initializing wg0 NIC..."

# Start the wg0 interface to enable vpn
set -x
ip link add wg0 type wireguard
ip addr add 10.0.0.42/24 dev wg0
wg syncconf wg0 /etc/wireguard/wg0.conf
ip link set wg0 up
To enable the rc-local service and make the script executable:

$ chmod -v +x /etc/rc.local
$ systemctl enable rc-local.service
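For the curious, the more robust systemd approach mentioned above might look something like the following oneshot unit. This is a sketch I have not battle-tested; the unit name wg0-init.service and the binary paths are my own assumptions:

```
# /etc/systemd/system/wg0-init.service (hypothetical)
[Unit]
Description=Initialize the wg0 WireGuard interface
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ip link add wg0 type wireguard
ExecStart=/usr/sbin/ip addr add 10.0.0.42/24 dev wg0
ExecStart=/usr/bin/wg syncconf wg0 /etc/wireguard/wg0.conf
ExecStart=/usr/sbin/ip link set wg0 up

[Install]
WantedBy=multi-user.target
```

Ordering after network-online.target replaces the sleep 15 hack, and multiple ExecStart lines are permitted for Type=oneshot units; enable it with systemctl enable wg0-init.service instead of rc-local.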
Conclusion
With the final step of adding WireGuard initialization to the VM's init scripts complete, you now have a local Amazon Linux 2 VM which, with the help of WireGuard, can be used just like a public EC2 VPS. Testing your applications on it is as simple as starting it up and connecting to it from any machine within your WireGuard VPN.
More to read
You might also like to read Jenkins, Docker build and Amazon EC2 Container Registry, or First day with Docker.
More technical posts are also available on the DjaoDjin blog, as well as business lessons we learned running a SaaS application hosting platform.