# TL;DR

For a complete build from scratch of the VTP VM image:

~~~
# download packer_x.x.x_linux_amd64.zip from https://www.packer.io/downloads.html
unzip packer_x.x.x_linux_amd64.zip
# cd /path/to/where/you/want/packer/ && unzip ~/packer_x.x.x_linux_amd64.zip

# Check the parameters in packer_files/vtp.json, especially:
#     "qemuargs": [ [ "-m", "1024M" ] ],
# ... as 1G is probably not enough to build a typical NMM platform.

# Once you're ready:
sudo /path/to/where/you/unpacked/packer build -only=qemu -var headless=true packer_files/vtp.json
~~~

If you are inside the VTP environment and want to re-run a playbook:

~~~
cd /root/vtp/ansible
ansible-playbook -i hosts <playbook>.yml
~~~

# Assumptions

It's assumed, at this stage, that the environment you're building on is a 14.04/16.04 physical host configured as per workshop-kit/ansible, including:

- br-lan is up and running
- dhcpd is up and running

Note: if you already have VMs/containers running on the physical node, be aware that the VTP VM's own containers will be attached to br-lan on the *physical* node, and thus may conflict with each other at the IP level.

# Overview

The VTP virtual machine is built using a scripted install process. This results in an image which can be distributed to trainers and either run on their local hardware or in a third-party cloud.

The final flow is expected to be something like this:

~~~~~~~~~~~~~~~~~~~~~~~~~~~
             |
             v
Build Ubuntu 16.04+ZFS image
             |
             v
Local configuration (bridges, DNS, dynamips etc)
             |
             v
  Pull in LXD master image
             |
             v
     Make master clones
             |
             v
      Customise masters
      |      |      |
      v      v      v
  Nagios Smokeping etc...
      |      |      |
      v      v      v
  Clones  Clones  Clones
       \     |     /
        v    v    v
    Bootable raw image
             |
             v
      Compressed qcow2
~~~~~~~~~~~~~~~~~~~~~~~~~~~

# Packer

The build-from-ISO process is driven by [packer](https://www.packer.io/)

[Download](https://www.packer.io/downloads.html) the binary into the top level of this repo and unzip it.
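A quick sanity check that the unpacked binary runs (the exact output format varies between packer releases):

~~~
./packer version
~~~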
Then run using:

~~~
./packer build [-force] [-only=qemu] [-var headless=true] packer_files/vtp.json
~~~

* `-force` will delete the previous build if it still exists
* `-only=qemu` is if there are multiple [builders](https://www.packer.io/docs/templates/builders.html) defined and you only want to run one. For example, in future we could include additional builders for EC2 and GCE.
* `-var headless=true` is required if you are running on a system with no graphics (e.g. ssh to a remote box without -X for X11 forwarding)

The configuration consists of a JSON file which controls packer, and a preseed file which controls the Ubuntu installer. The files here are derived from:

* The puppetlabs [ubuntu 16.04 templates](https://github.com/puppetlabs/puppetlabs-packer/tree/master/templates/ubuntu-16.04), in particular the vmware-iso one
* The packer [qemu builder](https://www.packer.io/docs/builders/qemu.html) example config
* The Ubuntu [example preseed](https://help.ubuntu.com/lts/installation-guide/example-preseed.txt) file

## Viewing progress at the console

If you have an X11 environment (either you are working at a graphical console, or you used ssh -X to enable X11 forwarding) then a console window is created automatically.

If this doesn't work, then you can set `-var headless=true` and it will run in a disconnected VNC session. If you wish to monitor the progress, look through the output for this message:

~~~
==> qemu: Found available VNC port: 5915
~~~

(the number is dynamic), and then from a laptop run

~~~
vncviewer -Shared <host>:<display>
~~~

e.g. `vncviewer -Shared 10.10.0.241:15`. For packer 0.11+ you will need to set the option `"vnc_bind_address": "[::]"` for this to work.

## Debugging

If things go wrong with packer, see the [debugging guide](https://www.packer.io/docs/other/debugging.html)

You get more information if you run:

~~~
PACKER_LOG=1 ./packer build foo.json
~~~

## Disk usage

After an initial build of just the OS, the qcow2 file was around 2GB.
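One way to check this yourself is with `qemu-img` (a sketch; the path assumes packer's default `output-qemu` directory, as used elsewhere in this repo):

~~~
# reports virtual size vs actual on-disk size of the image
qemu-img info output-qemu/vtp.qcow2
~~~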
Examining the image showed that only 1.2GB was used for the root filesystem; there may be temporary files which were deleted, or there may be data in the swap space.

The default install used 512MB of RAM and created 512MB of swap. This can be increased with e.g. `"qemuargs": [ [ "-m", "768M" ] ]`

TODO: minimise the image size by running zerofree on all filesystems (this cannot be done while root is mounted read-write), and wiping the swap space.

Normally packer runs a qemu-img convert pass to remove empty space in the image. This can be disabled by `"skip_compaction": true`: this speeds up the build a bit, but increases the qcow2 file size from 2GB to 2.8GB

Adding `"disk_compression": true` reduces the size of the qcow2 image to about 800MB, at the cost of slowing down the build.

# Ubuntu ISO installation

For an [automated Ubuntu install](https://help.ubuntu.com/lts/installation-guide/amd64/ch04s06.html) there are two options: [preseeding](https://help.ubuntu.com/lts/installation-guide/amd64/apb.html) or [kickstart](https://help.ubuntu.com/community/KickstartCompatibility)

The kickstart files are simpler and can be created using [system-config-kickstart](http://packages.ubuntu.com/xenial/system-config-kickstart) on an Ubuntu 16.04 desktop machine. However the Ubuntu kickstart implementation offers only a subset of features and is really just a frontend to selected preseed variables. Hence we are using preseeding directly.

## Preseeding

If a question pops up during the installation, answer it manually to allow the installation to continue. Then when you have booted into the system, run:

~~~
debconf-get-selections --installer >installer.log
~~~

(from package `debconf-utils`) to find the preseed option name; then you can update preseed.cfg to answer the question automatically.

Example: after adding the empty zfs partition, I got the following dialog:

~~~
[!!] Partition disks

No file system is specified for partition #3 of Virtual disk 1 (vda).
If you do not go back to the partitioning menu and assign a file system to this partition, it won't be used at all.

Go back to the menu?
~~~

The desired behaviour is to answer `No`. This one actually required going through the debian-installer [source code](https://wiki.debian.org/DebianInstaller/CheckOut) to find:

~~~
d-i partman-basicmethods/method_only boolean false
~~~

## Filesystems

Installing Ubuntu with a ZFS root filesystem is not yet supported by the installer:
[1](https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Native-ZFS-Root-Filesystem)
[2](https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS)
[3](http://dotfiles.tnetconsulting.net/articles/2016/0327/ubuntu-zfs-native-root.html)

For our purposes it's sufficient to have an ext4 root filesystem and create a separate ZFS partition for LXD. We lose the ability to snapshot the root and to have compression on the root.

## Examining the image

QCOW2 images can be loopback-mounted with a bit of help from nbd:

~~~
modprobe nbd
qemu-nbd -c /dev/nbd0 output-qemu/vtp.qcow2
mount /dev/nbd0p1 /mnt
...
umount /mnt
qemu-nbd -d /dev/nbd0
~~~

More details at

## Speed

The initial packer build-from-ISO phase is slow. If you have to start from scratch it can easily take 10-15 minutes.

Normally, packer would be used to make a "plain" OS image which some other tool would provision, e.g. [vagrant](https://www.vagrantup.com/docs/). However, vagrant doesn't have support for qemu out-of-the-box (it can use libvirt with a plugin), so for now we are building the entire final VTP image in packer.

# Provisioners

[Provisioners](https://www.packer.io/docs/templates/provisioners.html) perform post-installation changes. Some initial steps (such as configuring zfs and lxd) are most easily done as a shell command run within the guest.
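As an illustration only (the device name, pool name, and flags here are assumptions, not the actual provisioner contents), such an in-guest shell step might look roughly like:

~~~
# create a ZFS pool on the spare partition set aside by the preseed
# (device and pool names are illustrative)
zpool create lxd /dev/vda3
# point LXD at that pool, non-interactively
lxd init --auto --storage-backend zfs --storage-pool lxd
~~~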
For the remaining changes we prefer to use ansible: since ansible is idempotent, it is relatively easy to iterate development of the configuration without always having to rebuild the VM image from scratch. We "git clone" the ansible configuration inside the VM, so that it is easy to make adjustments and commit them back.

# Optimisations

## Minimising the image size

1. TODO: run zerofree on the root filesystem
2. TODO: wipe and recreate the swap partition
3. TODO: recompress the qcow2 image after 1 and 2
4. TODO: move /usr and /lib onto zfs; let zfs de-duplication share blocks between the main OS and the LXC container gold master (assuming they are both 16.04)

## Speeding up the build

1. TODO: optionally point to a nearby apt-cacher (would involve a different preseed.cfg? make sure the image doesn't remember it!)
2. TODO: optionally point to a nearby LXC image store
3. TODO: modify packer so that we can pass -kernel, -initrd, -append options (preferably using paths within the ISO image for the first two); this would save all the blind typing.
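For the apt-cacher idea, the cache could probably be wired in via the standard d-i proxy setting in preseed.cfg (host and port below are placeholders, not a real deployment):

~~~
# hypothetical preseed fragment; host/port are placeholders
d-i mirror/http/proxy string http://apt-cacher.example.net:3142/
~~~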