Installing FreeBSD on OVH

While OVH has a number of Linux-based options for their low-end VPS offerings, I wanted to try installing FreeBSD. As far as I can tell, OVH doesn't offer the ability to provide a bootable .iso or .img installer image for their VPS offerings (unlike their dedicated server instances). Fortunately, their VPS offers a recovery console with SSH access along with the use of dd, gzip/gunzip, and xz, which makes it possible to write a disk image directly to the virtual drive, a trick I learned from murf.

The following was all done within a fairly stock FreeBSD 11 install (ZFS-on-root).

My first problem was that the official downloadable .raw hard-drive image was something like 20.1GB, just slightly larger than my available 20.0GB of space, so I couldn't directly write the image provided on the FreeBSD site.

Time to do it the hard way and build my own image.

Preliminary reconnaissance

First, we need to gather some network information from the OVH install. I'm assuming your initial machine image was either an Ubuntu or Debian install. If not, modify accordingly, but the goal is to obtain the following from your management console and your /etc/network/interfaces or /etc/network/interfaces.d/* file:

  • your static IPv4 address (should be available in your management console, on the address line in your interface file, or by issuing ifconfig eth0 and looking for the inet address; referenced later as ${EXTERNAL_IPv4})
  • your static IPv6 address (should be available in your management console or from ifconfig eth0; referenced later as ${EXTERNAL_IPv6})
  • your default gateway (should be available in your interface file from the line that reads post-up /sbin/ip route add ${GATEWAY_IPv4} dev eth0 or from the output of route | grep default)
  • your DNS server (should be available in your interface file from the line that reads dns-nameserver ${DNS_NAMESERVER} or nameserver ${DNS_NAMESERVER})
  • your DNS search suffix (should be available in your interface file from the line that reads dns-search ${DNS_SEARCH}, however this may be optional)
With these items noted, we can use them later on when creating the corresponding configurations in FreeBSD.
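
Throughout the rest of this post, these values show up as placeholders like ${EXTERNAL_IPv4}. It can be handy to jot them down somewhere, for example as shell variables in a scratch file. The values below are hypothetical documentation addresses, so substitute your own (note that the IPv6 gateway, used later as ${GATEWAY_IPv6}, also comes from your management console):

export EXTERNAL_IPv4=192.0.2.10       # your static IPv4 address
export EXTERNAL_IPv6=2001:db8::10     # your static IPv6 address
export GATEWAY_IPv4=192.0.2.254       # your IPv4 default gateway
export GATEWAY_IPv6=2001:db8::1       # your IPv6 default gateway
export DNS_NAMESERVER=203.0.113.53    # your DNS server
export DNS_SEARCH=example.com         # your DNS search suffix (optional)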

Create a local image file

First, create a drive-image file that is the right size:

user@localhost$ truncate -s 20G $HOME/freebsd.img

This will create a 20GB image file that should fit exactly in the available space on the OVH VPS. If you're using the lowest-end VPS, change that to 10G to make a 10GB drive instead.
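
Because truncate creates a sparse file, it won't actually consume 20GB on your local disk until data gets written to it. You can confirm this by comparing the apparent size with the space actually allocated:

ls -lh $HOME/freebsd.img
du -h $HOME/freebsd.img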

Create a device for the file

In order to install to this file as a disk, FreeBSD needs to recognize it as a device. This can be done with the mdconfig command, which must be run as root, so first su - to root:

user@localhost$ su -
Password: ********

On most systems, there won't already be a md0 device, but it's good to check first:

mdconfig -l

On most systems, that will return no pre-existing devices, so you can use 0 as the device number. If other md devices exist, add 1 to the highest device number returned; for example, if it prints md0 and md1, use 2 in the next step.

Create the file-backed memory disk (in this case md0):

mdconfig -f ~user/freebsd.img -u 0

This will create an md0 device to which FreeBSD can be installed. Note that if you already have an md0 device, change the -u 0 to a higher number that doesn't already exist.

Install FreeBSD

Before installing, it's good to name the root pool something other than the default so it doesn't collide with the pools you likely already have:

export ZFSBOOT_POOL_NAME=ovhzpool
bsdinstall

With the md0 device created, we can run bsdinstall and choose md0 as our target drive.

For the Keymap Selection, continue with the default keymap unless you have reason not to.

For the hostname, this post uses "ovh".

For your mirror selection, choose something geographically close.

If you're using UFS on your local machine or installing UFS on your OVH server, feel free to investigate whether the other automated installers work for you. However, if your current/host setup uses ZFS like this guide, choosing "Guided Root-on-ZFS" or the "Manual Disk Setup (experts)" option in bsdinstall will find your existing zpools & GELI devices and try to forcibly detach them, killing your local system in the process and requiring a reboot. So for ZFS-on-existing-ZFS, use the "Shell" option. Also, make note of the warning/instructions:

Use this shell to set up partitions for the new system. When finished, mount the system at /mnt and place an fstab file for the new system at /tmp/bsdinstall_etc/fstab . Then type 'exit'. You can also enter the partition editor at any time by entering bsdinstall partedit.

In that shell, confirm that ZFSBOOT_POOL_NAME is set correctly:

export ZFSBOOT_POOL_NAME=ovhzpool

Clear any existing traces of a partition table. If it doesn't have an existing partition table, it may complain, but since the goal is to nuke it if it does exist, any such error can be safely ignored.

gpart destroy -F md0

Create a GPT partition table

gpart create -s gpt md0

Create a 512k boot partition

gpart add -t freebsd-boot -s 512k -l boot md0

Install the boot-loader code on the first (only so far) partition

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 md0

Create a ZFS data partition, filling the rest of the drive

gpart add -t freebsd-zfs -l ${ZFSBOOT_POOL_NAME} -a 1M md0

Create an encrypted disk (optionally add -T to skip passing TRIM commands to the SSD as this can leak information about how much space is actually in use)

geli init -e AES-XTS -l 128 -s 4096 -b -g gpt/${ZFSBOOT_POOL_NAME}

At this point, it will prompt you to enter and confirm your GELI password. Next, attach this GELI device:

geli attach gpt/${ZFSBOOT_POOL_NAME}

Now, create a pool on that encrypted device. With a single-but-larger disk, I'd set copies=2 on my root pool to get a little redundancy by default on everything. But since 20GB is a little tight, I set it selectively on my datasets for user data and assume I can regenerate anything in system datasets. Your own needs will determine whether you make everything redundant, just your user data, or nothing.

zpool create -R /mnt -O canmount=off -O mountpoint=/ -O atime=off -O compression=on ${ZFSBOOT_POOL_NAME} gpt/${ZFSBOOT_POOL_NAME}.eli
zfs create -o mountpoint=/ ${ZFSBOOT_POOL_NAME}/ROOT
zpool set bootfs=${ZFSBOOT_POOL_NAME}/ROOT ${ZFSBOOT_POOL_NAME}
zfs create -o copies=2 ${ZFSBOOT_POOL_NAME}/home
zfs create -o canmount=off ${ZFSBOOT_POOL_NAME}/usr
zfs create ${ZFSBOOT_POOL_NAME}/usr/local
zfs create ${ZFSBOOT_POOL_NAME}/usr/local/jails
zfs create ${ZFSBOOT_POOL_NAME}/usr/obj
zfs create ${ZFSBOOT_POOL_NAME}/usr/src
zfs create ${ZFSBOOT_POOL_NAME}/usr/ports
zfs create ${ZFSBOOT_POOL_NAME}/usr/ports/distfiles
zfs create -o canmount=off ${ZFSBOOT_POOL_NAME}/var
zfs create ${ZFSBOOT_POOL_NAME}/var/log
zfs create ${ZFSBOOT_POOL_NAME}/var/tmp
zfs create ${ZFSBOOT_POOL_NAME}/tmp

Because of a quirk in the installer, it requires /usr/freebsd-dist/MANIFEST to be present in the chroot environment. It also seems to clean this up if the installer hiccups for any reason, so I recommend putting a copy into your local directory. It may already be on your install media, or you can download a copy from the corresponding directory on the FTP site. Check your mirror, architecture, and release, but it should be something like

mkdir freebsd-dist
cd freebsd-dist
fetch ftp://ftp8.freebsd.org/pub/FreeBSD/releases/amd64/amd64/11.0-RELEASE/MANIFEST
cd ..

and then copy it into /usr/

cp -va freebsd-dist /usr/

That way, you'll have a local copy in the event something goes awry and you have to recreate /usr/freebsd-dist/MANIFEST.
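
The installer's shell instructions quoted above also ask you to place an fstab for the new system at /tmp/bsdinstall_etc/fstab. Since everything here lives on ZFS there's nothing that needs an fstab entry, so as far as I can tell an empty file is enough to keep the installer happy:

mkdir -p /tmp/bsdinstall_etc
touch /tmp/bsdinstall_etc/fstab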

Finally, exit the shell to return to the installer.

exit

The rest of the installer should be fairly straight-forward. At the end of the install, it will prompt to drop to a shell. Do that.

First, create/edit /etc/rc.conf to include the following lines, making use of the information gleaned in the reconnaissance step above.

ifconfig_vtnet0="inet $EXTERNAL_IPv4 netmask 255.255.255.255 broadcast $EXTERNAL_IPv4"
static_routes="net1 net2"
route_net1="$GATEWAY_IPv4 -interface vtnet0"
route_net2="default $GATEWAY_IPv4 "
ifconfig_vtnet0_ipv6="inet6 $EXTERNAL_IPv6 prefixlen 64"
ipv6_defaultrouter="$GATEWAY_IPv6"
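
Just to make the substitution concrete, here's what that block might look like using the hypothetical example values from the reconnaissance step (substitute your own addresses):

ifconfig_vtnet0="inet 192.0.2.10 netmask 255.255.255.255 broadcast 192.0.2.10"
static_routes="net1 net2"
route_net1="192.0.2.254 -interface vtnet0"
route_net2="default 192.0.2.254"
ifconfig_vtnet0_ipv6="inet6 2001:db8::10 prefixlen 64"
ipv6_defaultrouter="2001:db8::1"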

If you plan to use jails, it's best not to let syslogd listen on all addresses, and while you're in /etc/rc.conf it's also worthwhile to set up a cloned loopback interface for them:

syslogd_flags="-ss"
cloned_interfaces="lo1"

Next up, edit /etc/resolv.conf using the DNS information gathered during reconnaissance so that it contains the nameserver and, optionally, the search domain:

nameserver $DNS_NAMESERVER
search $DNS_SEARCH

Assuming you set up a user during the install and that they're a member of the wheel group (so you can su - and perform administrative commands), it's worth editing /etc/ssh/sshd_config to allow SSH logins on your external address and prohibit root login by adding/modifying these lines:

ListenAddress $EXTERNAL_IPv4
PermitRootLogin no
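
The installer's service-selection screen normally takes care of enabling sshd, but it's worth double-checking that /etc/rc.conf contains the following line, since without it you won't be able to SSH in after the reboot:

sshd_enable="YES"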

You may also want to add your SSH public key to your non-root user's ~/.ssh/authorized_keys so you can SSH in without a password.
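
A minimal sketch of doing that from inside the chroot, assuming your user is named "user" (adjust the name, group, and key to match your own):

mkdir -p /home/user/.ssh
echo "ssh-ed25519 AAAA...your-public-key... you@yourmachine" >> /home/user/.ssh/authorized_keys
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/authorized_keys
chown -R user:user /home/user/.ssh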

With the changes done in the chroot, we can now exit it and return to the parent shell.

exit

Freeing up the image

Before shipping the image over to the server, it first needs to be shut down cleanly. Start by unmounting all the new ZFS filesystems under /mnt. The following finds them and generates a series of umount commands that can be reviewed first:

mount | awk '/\/mnt/{print "umount " $3}' | sort -r

If they all look good, you can execute them.

mount | awk '/\/mnt/{print "umount " $3}' | sort -r | sh
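
If you want to double-check that nothing under /mnt is still mounted, the same filter should now come back empty:

mount | grep /mnt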

ZFS still has the pool active, so export it:

zpool export $ZFSBOOT_POOL_NAME

Detach the GELI device if needed.

geli detach /dev/gpt/${ZFSBOOT_POOL_NAME}.eli

With all filesystems disassociated from md0, it can now be disconnected too.

mdconfig -d -u 0

We can now leave the root shell and return to your unprivileged user.

exit

Reconnecting

In the event you want to make further modifications, it's helpful to know how to reattach md0, attach the GELI volume, and import the ZFS pool again. So, as root:

mdconfig -f ~user/freebsd.img -u 0
geli attach gpt/$ZFSBOOT_POOL_NAME
zpool import -R /mnt $ZFSBOOT_POOL_NAME

From here, you can make changes and then repeat the steps for disconnecting the disk image.

Sending the image over

Start by compressing the disk image to save bandwidth. My 20GB image compressed to ~400MB, roughly 2% of the original size. I keep the original image around in case I need to reconnect to it to make changes I forgot about. The gzip process takes a little while.

gzip --keep freebsd.img
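
The rescue environment also offers xz (as noted earlier), so you could compress with xz instead for a somewhat smaller transfer at the cost of more compression time; if you go that route, the file will be freebsd.img.xz and you'd swap gunzip for xz -dc in the dd commands below:

xz --keep freebsd.img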

Log into your VPS console and restart your VPS in rescue mode.

Once it has restarted, you should receive information via email on how to log into that console. Use this information to SSH into the rescue image.

ssh root@$EXTERNAL_IPv4

Your normal root drive will be mounted under the rescue system's /mnt/. First you have to find out its name; it might be vdb1, sdb1, or something similar. It will need to be unmounted before overwriting. Once done, disconnect from the rescue session.

mount | grep /mnt
umount /mnt/vdb1
exit

Next, upload the image to the drive in question (make sure you specify the correct root device based on the mount point, such as vdb or sdb).

ssh root@$EXTERNAL_IPv4 "gunzip | dd of=/dev/sdb bs=1M" < freebsd.img.gz

or, if you have pv installed, you can get a rough estimate of the time needed

pv freebsd.img.gz | ssh root@$EXTERNAL_IPv4 "gunzip | dd of=/dev/sdb bs=1M"

Depending on your internet speed, this may take a while. However, once the image has finished transferring, you should be able to go into your management console and reboot the server. If all has gone well, your server should start booting.

Note that, since we have an encrypted drive, we need to log into the KVM console to enter the boot password. It should then boot in that KVM console to the login prompt. If you can log in there, you can then attempt to SSH in directly.

ssh $USER@$EXTERNAL_IPv4

If you installed your public key in your user's ~/.ssh/authorized_keys it should let you right in. If you didn't, you can enter your credentials and should hopefully be up and running.