TL;DR

For a complete build from scratch of the VTP VM image:

# download packer_x.x.x_linux_amd64.zip from https://www.packer.io/downloads.html
unzip packer_x.x.x_linux_amd64.zip
./packer build -only=qemu -var headless=true vm/vtp.json

If you are inside the VTP environment and want to re-run a playbook:

cd /root/vtp/ansible
ansible-playbook -i hosts <playbook>.yml

Overview

The VTP virtual machine is built using a scripted install process. This results in an image which can be distributed to trainers and run either on their local hardware or in a third-party cloud.

The final flow is expected to be something like this:

            |
            v
       Build Ubuntu
       16.04+ZFS image
            |
            v
          Local
       configuration
       (bridges, DNS,
        dynamips etc)
            |
            v
       Pull in LXD
       master image
            |
            v
     Make master clones
            |
            v
    Customise masters
   |        |        |
   v        v        v
 Nagios Smokeping  etc...
   |        |        |
   v        v        v
 Clones   Clones   Clones
       \    |    /
        v   v   v
    Bootable raw image
            |
            v
       Compressed
          qcow2

Packer

The build-from-ISO process is driven by packer.

Download the zip into the top level of this repo and extract the binary. Then run it with:

./packer build [-force] [-only=qemu] [-var headless=true] vm/vtp.json
  • -force will delete the previous build if it still exists

  • -only=qemu is if there are multiple builders defined and you only want to run one. For example, in future we could include additional builders for EC2 and GCE.

  • -var headless=true is required if you are running on a system with no graphics (e.g. ssh to remote box without -X for X11 forwarding)

The configuration consists of a JSON file which controls packer, and a preseed file which controls the Ubuntu installer. The files here are derived from:

Viewing progress at the console

If you have an X11 environment (either you are working at a graphical console, or you used ssh -X to enable X11 forwarding) then a console window is created automatically.

If this doesn't work, then you can set -var headless=true and it will run in a disconnected VNC session. If you wish to monitor the progress, look through the output for this message:

==> qemu: Found available VNC port: 5915

(the number is dynamic), and then from a laptop run

vncviewer -Shared <hostname>:<display>

e.g. vncviewer -Shared 10.10.0.241:15. For packer 0.11+ you will need to set the option "vnc_bind_address": "[::]" for this to work.
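
For reference, that option lives alongside the other qemu builder settings in vm/vtp.json; a minimal sketch showing only the relevant keys (the rest of the builder block is unchanged):

"builders": [
  {
    "type": "qemu",
    "vnc_bind_address": "[::]"
  }
]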

Debugging

If things go wrong with packer, see the debugging guide. You get more information if you run:

PACKER_LOG=1 ./packer build foo.json
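
The log can also be captured to a file rather than the terminal by setting PACKER_LOG_PATH alongside it:

PACKER_LOG=1 PACKER_LOG_PATH=packer-debug.log ./packer build foo.json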

Disk usage

After an initial build of just the OS, the qcow2 file was around 2GB. Examining the image showed that only 1.2GB was used for the root filesystem; there may be temporary files which were deleted, or there may be data in the swap space.

The default install used 512MB of RAM and created 512MB of swap. The RAM can be increased with e.g. "qemuargs": [ [ "-m", "768M" ] ]

TODO: minimise the image size by running zerofree on all filesystems (this cannot be done while root is mounted read-write), and wiping the swap space.

Normally packer runs a qemu-img convert pass to remove empty space in the image. This can be disabled with "skip_compaction": true, which speeds up the build a bit but increases the qcow2 file size from 2GB to 2.8GB.

Adding "disk_compression": true reduces the size of the qcow2 image to about 800MB, at the cost of slowing down the build.

Ubuntu ISO installation

For an automated Ubuntu install there are two options: preseeding or kickstart.

The kickstart files are simpler and can be created using system-config-kickstart on an Ubuntu 16.04 desktop machine. However, the Ubuntu kickstart implementation offers only a subset of features and is really just a frontend to selected preseed variables. Hence we are using preseeding directly.
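
For context, the usual way packer delivers a preseed file is to serve it from its built-in HTTP server and reference it from the boot command. The following is a generic sketch of those options, not necessarily what vm/vtp.json actually contains:

"http_directory": "vm",
"boot_command": [
  "<esc><wait>",
  "install auto=true priority=critical ",
  "preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg<enter>"
]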

Preseeding

If a question pops up during the installation, answer it manually to allow the installation to continue. Then when you have booted into the system, run:

debconf-get-selections --installer >installer.log

(from package debconf-utils) to find the preseed option name; then you can update preseed.cfg to answer the question automatically.
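
For example, to look for the options relating to partitioning you might grep the dump (the pattern is only a guess at the relevant namespace):

grep -i partman installer.log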

Example: after adding the empty zfs partition, I got the following dialog:

                        [!!] Partition disks

No file system is specified for partition #3 of Virtual disk 1 (vda).

If you do not go back to the partitioning menu and assign a file
system to this partition, it won't be used at all.

Go back to the menu?    <Yes>  <No>

The desired behaviour is to enter <No>. This one actually required going through the debian-installer source code to find:

d-i partman-basicmethods/method_only boolean false

Filesystems

Installing Ubuntu with a ZFS root filesystem is not yet supported by the installer: 1 2 3

For our purposes it's sufficient to have an ext4 root filesystem and create a separate ZFS partition for LXD. We lose the ability to snapshot the root and to have compression on the root.
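
In practice that means something like the following inside the guest, typically run as a shell provisioner step. The pool name is an assumption, and the lxd init flags are those of LXD 2.x as shipped with 16.04:

zpool create lxd /dev/vda3   # the spare partition #3 left by the preseed
lxd init --auto --storage-backend zfs --storage-pool lxd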

Examining the image

QCOW2 images can be loopback-mounted with a bit of help from nbd:

modprobe nbd
qemu-nbd -c /dev/nbd0 output-qemu/vtp.qcow2
mount /dev/nbd0p1 /mnt
...
umount /mnt
qemu-nbd -d /dev/nbd0

More details at http://en.wikibooks.org/wiki/QEMU/Images

Speed

The initial packer build-from-ISO phase is slow. If you have to start from scratch it can easily take 10-15 minutes. Normally, packer would be used to make a "plain" OS image which some other tool would provision, e.g. vagrant. However, vagrant doesn't have support for qemu out-of-the-box (it can use libvirt with a plugin), so for now we are building the entire final VTP image in packer.

Provisioners

Provisioners perform post-installation changes.

Some initial steps (such as configuring ZFS and LXD) are most easily done as shell commands run within the guest.

For the remaining changes we prefer to use ansible: since ansible is idempotent, it is relatively easy to iterate development of the configuration without always having to rebuild the VM image from scratch.

We "git clone" the ansible configuration inside the VM, so that it is easy to make adjustments and commit them back.

Optimisations

Minimising the image size

  1. TODO: run zerofree on the root filesystem
  2. TODO: wipe and recreate the swap partition
  3. TODO: recompress the qcow2 image after 1 and 2 (a rough sketch of items 1-3 follows after this list)
  4. TODO: move /usr and /lib onto zfs; let zfs de-duplication share blocks between the main OS and the LXC container gold master (assuming they are both 16.04)
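
A rough sketch of items 1-3, reusing the nbd loopback approach from "Examining the image" above. The partition numbers are guesses (p1 = ext4 root, p2 = swap); check the actual layout first:

qemu-nbd -c /dev/nbd0 output-qemu/vtp.qcow2
zerofree /dev/nbd0p1                    # root fs must not be mounted read-write
dd if=/dev/zero of=/dev/nbd0p2 bs=1M    # fill old swap with zeros; dd stops at end of device
mkswap /dev/nbd0p2                      # note: generates a new UUID unless -U is given
qemu-nbd -d /dev/nbd0
qemu-img convert -c -O qcow2 output-qemu/vtp.qcow2 vtp-compressed.qcow2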

Speeding up the build

  1. TODO: optionally point to a nearby apt-cacher (would involve a different preseed.cfg? make sure the image doesn't remember it! A sketch follows after this list.)
  2. TODO: optionally point to a nearby LXC image store
  3. TODO: modify packer so that we can pass -kernel, -initrd, -append options (preferably using paths within the ISO image for the first two); this would save all the blind typing.
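
For item 1, the preseed hook for an apt proxy is mirror/http/proxy. The hostname below is illustrative, and the cleanup step assumes d-i records the proxy in /etc/apt/apt.conf (check where your release actually puts it):

# in preseed.cfg:
d-i mirror/http/proxy string http://apt-cacher.example.com:3142/

# later, in a shell provisioner, so the final image does not remember the proxy:
sed -i '/Acquire::http::Proxy/d' /etc/apt/apt.conf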