While OVH has a number of Linux-based options for their low-end VPS offerings, I wanted to try installing FreeBSD. As far as I can tell, OVH doesn't offer the ability to provide a bootable .iso or .img installer image for their VPS offerings (unlike their dedicated server instances). Fortunately, their VPS offerings include a recovery console with SSH access along with the use of dd, gzip/gunzip, and xz, which makes it possible to write a disk image directly to the virtual drive, a trick I learned from murf.
The following was all done within a fairly stock FreeBSD 11 install (ZFS-on-root).
My first problem was that the official downloadable .raw hard-drive image was something like 20.1GB, just slightly larger than my available 20.0GB of space, so I couldn't directly write the image provided on the FreeBSD site. Time to do it the hard way and build my own image.
Preliminary reconnaissance
First, we need to gather some network information from the OVH install. I'm assuming your initial machine image was either an Ubuntu or Debian install. If not, modify accordingly, but the goal is to obtain the following from your management console and your /etc/network/interfaces or /etc/network/interfaces.d/* file:
- your static IPv4 address (should be available in your management console, on the address ${EXTERNAL_IPv4} line in your interfaces file, or by issuing ifconfig eth0 and looking for the IPv4 address)
- your static IPv6 address (should be available in your management console or from ifconfig eth0; referenced later as ${EXTERNAL_IPv6})
- your default gateway (should be available in your interfaces file on the line that reads post-up /sbin/ip route add ${GATEWAY_IPv4} dev eth0, or from the output of route | grep default)
- your DNS server (should be available in your interfaces file on the line that reads dns-nameserver ${DNS_NAMESERVER}, or from the nameserver ${DNS_NAMESERVER} line in /etc/resolv.conf)
- your DNS search suffix (should be available in your interfaces file on the line that reads dns-search ${DNS_SEARCH}; this one may be optional)
With these items noted, we can use them later on when creating the
corresponding configurations in FreeBSD.
Create a local image file
First, create a drive-image file that is the right size:
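Something like the following, assuming we call the image ovh.img (the name is my choice; adjust to taste):

```sh
# Create a sparse 20GB file to serve as the virtual disk
truncate -s 20G ovh.img
```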
This will create a 20GB image file that should fit exactly in the available space on the OVH VPS. If you're using the lowest-end VPS, change that to 10G to make a 10GB drive instead.
Create a device for the file
In order to install to this file as a disk, FreeBSD needs to recognize it as a device. This can be done with the mdconfig command, which must be run as root, so first su - to root:
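```sh
su -
```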
On most systems, there won't already be an md0 device, but it's good to check first:
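```sh
# List any memory disks that already exist
mdconfig -l
```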
On most systems, that will return no pre-existing devices, so you can use 0 as the device number; but if other md devices exist, add 1 to the highest device number returned.
Create the (in this case md0) file-backed memory device:
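Assuming the ovh.img name from above:

```sh
# Attach the image file as memory disk md0
mdconfig -a -t vnode -f ovh.img -u 0
```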
This will create an md0 device to which FreeBSD can be installed. Note that if you already have an md0 device, change the -u 0 to a higher number that isn't already in use.
Install FreeBSD
Before installing, it's good to name the root pool something other than the default pools you likely already have:
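bsdinstall's ZFS scripts read the pool name from the ZFSBOOT_POOL_NAME variable (the default, zroot, may collide with your host's pool). The name ovh here is just an example:

```sh
# Pick a pool name that won't clash with the host's existing pools
export ZFSBOOT_POOL_NAME=ovh
```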
With the md0 device created, we can run bsdinstall and choose md0 as our target drive.
For the Keymap Selection, continue with the default keymap unless you have
reason not to.
For the hostname, this post uses "ovh".
For your mirror selection, choose something geographically close.
If you're using UFS on your local machine or installing UFS on your OVH
server, feel free to investigate whether the other automated installers
work for you.
However, if your current/host setup uses ZFS like this guide, choosing "Guided Root-on-ZFS" or "Manual Disk Setup (experts)" in bsdinstall will find your existing zpools & GELI devices and try to forcibly detach them, killing your local system in the process and requiring a reboot. So for ZFS-on-existing-ZFS, use the "Shell" option.
Also, make note of the warning/instructions:

Use this shell to set up partitions for the new system. When finished, mount the system at /mnt and place an fstab file for the new system at /tmp/bsdinstall_etc/fstab. Then type 'exit'. You can also enter the partition editor at any time by entering bsdinstall partedit.
First, confirm that our ZFSBOOT_POOL_NAME is set correctly:
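```sh
# Should print the pool name chosen earlier (e.g. ovh)
echo ${ZFSBOOT_POOL_NAME}
```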
Next, clear any existing traces of a partition table. If the image doesn't have an existing partition table, this may complain; but since the goal is to nuke any table that does exist, such an error can be safely ignored.
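```sh
# Destroy any partition table on md0 (errors here are harmless)
gpart destroy -F md0
```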
Create a GPT partition table:
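```sh
gpart create -s gpt md0
```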
Create a 512k boot partition:
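```sh
gpart add -t freebsd-boot -s 512k md0
```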
Install the boot-loader code on the first (and, so far, only) partition:
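Since we're booting ZFS, that means gptzfsboot:

```sh
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 md0
```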
Create a ZFS data partition, filling the rest of the drive:
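```sh
# With no -s size given, the partition takes all remaining space
gpart add -t freebsd-zfs md0
```

This becomes md0p2, which is what the remaining steps operate on.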
Create an encrypted disk (optionally add -T to skip passing TRIM commands to the SSD, as this can leak information about how much space is actually in use):
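The exact geli options are a judgment call rather than gospel; something like the following enables boot-time attachment and booting from the encrypted partition, with AES-XTS and 4k sectors:

```sh
# -b attaches the device at boot; -g lets gptzfsboot boot from it
# (add -T here to suppress TRIM passthrough, as noted above)
geli init -bg -e AES-XTS -l 256 -s 4096 md0p2
```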
At this point, it will prompt you to enter and confirm your GELI
password.
Next, attach this GELI device:
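```sh
# Creates /dev/md0p2.eli, prompting for the passphrase again
geli attach md0p2
```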
Now, create a pool on that encrypted device.
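A minimal sketch; the altroot and dataset properties here are typical choices rather than necessarily the original ones, and a full boot-environment dataset layout is omitted for brevity:

```sh
# Create the root pool on the encrypted partition, staged under /mnt
zpool create -o altroot=/mnt -O mountpoint=/ -O compress=lz4 -O atime=off \
    ${ZFSBOOT_POOL_NAME} md0p2.eli
zpool set bootfs=${ZFSBOOT_POOL_NAME} ${ZFSBOOT_POOL_NAME}

# The installer expects an fstab for the new system, even an empty one
touch /tmp/bsdinstall_etc/fstab
```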
With a single-but-larger disk, I'd set copies=2 on my root pool to get a little redundancy by default on everything. But since 20GB is a little tight, I set it selectively on my datasets for user data and assume I can regenerate anything in the system datasets. Your own needs will determine whether you make everything redundant, just your user data, or nothing.
Because of a quirk in the installer, it requires /usr/freebsd-dist/MANIFEST to be present in the chroot environment. It also seems to clean this up if the installer hiccups for any reason, so I recommend putting a copy into your local directory. It may already be on your install media, or you can download a copy from the corresponding directory on the FTP site. Check your mirror, architecture, and release, but it should be something like:
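```sh
# Example for 11.0-RELEASE on amd64; adjust mirror/arch/release to match
fetch https://download.freebsd.org/ftp/releases/amd64/11.0-RELEASE/MANIFEST
```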
and then copying it into /usr/freebsd-dist/:
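```sh
mkdir -p /usr/freebsd-dist
cp MANIFEST /usr/freebsd-dist/MANIFEST
```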
That way, you'll have a local copy in the event something goes awry and you have to recreate /usr/freebsd-dist/MANIFEST again.
Finally, exit the shell to return to the installer. The rest of the installer should be fairly straightforward. At the end of the install, it will prompt to drop into a shell in the new system. Do that.
First, create/edit /etc/rc.conf to include the following lines, making use of the information gleaned in the reconnaissance step above.
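A sketch, assuming the virtio NIC shows up as vtnet0 (check with ifconfig once booted) and substituting the values gathered earlier. OVH's gateway sits outside your address's subnet, which is what the post-up route line in the Debian config was doing, so we add an explicit host route to it first:

```sh
hostname="ovh"
zfs_enable="YES"
sshd_enable="YES"

# Netmask per your old Debian config
ifconfig_vtnet0="inet ${EXTERNAL_IPv4} netmask 255.255.255.0"
ifconfig_vtnet0_ipv6="inet6 ${EXTERNAL_IPv6} prefixlen 64"

# The gateway is off-subnet: route to it via the interface, then default
static_routes="gateway default"
route_gateway="-host ${GATEWAY_IPv4} -interface vtnet0"
route_default="default ${GATEWAY_IPv4}"
```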
If you plan to use jails, don't let syslogd listen on all addresses, and while you're in /etc/rc.conf it's worthwhile to set up a loopback interface for the jails to use:
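```sh
# Don't open any network sockets for syslogd
syslogd_flags="-ss"
# A dedicated loopback interface for jails to bind to
cloned_interfaces="lo1"
```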
Next up, edit /etc/resolv.conf using the DNS information gathered during reconnaissance to contain the nameserver and, optionally, the search entries:
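```
# omit the search line if you had no dns-search entry
search ${DNS_SEARCH}
nameserver ${DNS_NAMESERVER}
```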
Assuming you set up a user during the install and that they're a member of the wheel group (so you can su - and perform administrative commands), it's worth editing /etc/ssh/sshd_config to allow SSH login and prohibit root login by adding/modifying these lines:
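Something like the following (the exact policy is up to you):

```
PermitRootLogin no
PasswordAuthentication yes
```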
You may also want to add your SSH public key to your non-root user's ~/.ssh/authorized_keys so you can SSH in without a password.
With the changes done in the chroot, we can now exit it and return to the parent shell.
Freeing up the image
Before shipping the image over to the server, it first needs to be shut down cleanly. Start by unmounting all the new ZFS filesystems under /mnt. The following finds them and generates a series of commands that can be used to unmount them:
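One way to generate them, sorted in reverse so nested mount points come first:

```sh
# Print an umount command for everything mounted under /mnt
mount -p | awk '$2 ~ "^/mnt" { print "umount " $2 }' | sort -r
```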
If they all look good, you can execute them.
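```sh
# Same pipeline, this time actually executing the umounts
mount -p | awk '$2 ~ "^/mnt" { print "umount " $2 }' | sort -r | sh
```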
ZFS still has these pools active, so disconnect them:
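```sh
zpool export ${ZFSBOOT_POOL_NAME}
```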
Detach the GELI devices if needed:
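```sh
geli detach md0p2.eli
```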
With all filesystems disassociated from md0, it can now be disconnected too:
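```sh
mdconfig -d -u 0
```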
You can now leave the root shell and return to your unprivileged user.
Reconnecting
In the event you want to make further modifications, it's helpful to know how to reattach the md0 device, attach the GELI volume, and import the ZFS pool again. So, as root:
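```sh
# Reattach the image (assuming the ovh.img name used earlier),
# unlock it, and import the pool under /mnt
mdconfig -a -t vnode -f ovh.img -u 0
geli attach md0p2
zpool import -R /mnt ${ZFSBOOT_POOL_NAME}
```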
When you're ready to ship it, start by compressing the disk image to save bandwidth:
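```sh
# Redirection leaves the original image intact for later tinkering
gzip -9 < ovh.img > ovh.img.gz
```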
My 20GB image compressed to ~400MB, or roughly 2% of the original size. I keep the original image around in case I need to reconnect to it to make changes I forgot about. The gzip process takes a little while.
Log into your VPS console and restart your VPS in rescue mode.
Once it has restarted, you should receive information via email on how
to log into that console.
Use this information to SSH into the rescue image.
Next, upload the image to the drive in question (make sure you specify the correct target device, such as vdb or sdb, based on what the rescue system reports):
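A sketch that decompresses on the rescue host so only the compressed bytes cross the wire; the rescue login and device name come from OVH's email:

```sh
# Stream the compressed image up and unpack it onto the target disk
# (substitute the device the rescue system shows, e.g. /dev/vdb)
ssh root@${EXTERNAL_IPv4} 'gunzip -c | dd of=/dev/sdb bs=1M' < ovh.img.gz
```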
Depending on your internet speed, this may take a while.
However, once the image has finished transferring, you should be able to
go into your management console and reboot the server.
If all has gone well, your server should start booting.
Note that, since we have an encrypted drive, we need to log into the KVM console to enter the boot password. It should then boot in that KVM console to the login prompt. If you can get in there, you can then attempt to SSH in directly:
ssh $USER@$EXTERNAL_IPv4
If you installed your public key in your user's ~/.ssh/authorized_keys, it should let you right in. If you didn't, you can enter your credentials and should hopefully be up and running.