Launch VMs on ESXi (Ubuntu Xenial) in an as-automated-as-possible fashion.
```
ansible-playbook main.yml -i hosts --ask-pass
```

When prompted, enter:

- `up`: to spin up a new set of VMs
- `down`: to destroy an existing set of VMs

VMs are defined under `vars/vms.yml`. ESXi hosts are defined under `hosts`, in the `esxi` group. If you'd rather not store the admin/root ESXi user in the local `hosts` file, pass it as a parameter during execution:

```
ansible-playbook main.yml -i hosts --ask-pass --user=USERNAME
```
```yaml
---
vm_list:
  -
    hostname: web01
    fqdn: web01.home.local
    ip: 192.168.1.50
    clone_vm: ubuntu_xenial   # Pre-existing "clone" VM
    os: ubuntu-64             # VM OS version
    up: true                  # Whether the VM is ON/OFF
    hardware:
      cpus: 1                 # Limited by host specs
      disk: 8                 # Large enough to store pkgs + data
      memory: 2048            # Limited by host specs
```
Via Ansible:

```
ansible-playbook get_iso.yml -i hosts --ask-pass
```

Manually, run these commands on each ESXi box (with root-level privileges):

```
mkdir -p /vmfs/volumes/datastore1/images
cd !$   # i.e., cd into the directory created above
wget -q http://releases.ubuntu.com/xenial/ubuntu-16.04.1-server-amd64.iso
```
This is the one portion that can't currently be automated. It involves spinning up a very resource-limited VM. Log in to the "clone" and execute the following commands:

```
sudo apt-get update
sudo apt-get install -y cloud-init
sudo shutdown now
```
The basic premise is simple, though a bit tricky in places:

Manual Steps

- Manually create a "virgin" VM with cloud-init pre-installed (the reason will become obvious in a bit) and use it as the ongoing "clone" or "template" VM
- Ensure there's at least one "sudo"-enabled user you can use to log in to the VM, and that the default "OpenSSH Server" is installed during initial provisioning
- Keep memory, CPU and disk size to a minimum
Automated Steps
- Create a VM folder on the ESXi host
- Locally create a valid cloud-init user-data file, package it into an ISO image and upload it to the ESXi folder created in the previous step
- Use the built-in vmkfstools tool to a) clone the pre-existing VM's disk and b) resize it
- Create a new .vmx file with the new VM specs (memory, CPU), ensuring that it "attaches" a CD drive pointed to the physical location of the newly uploaded ISO image
- Register the newly created VM with the vim-cmd tool, and use the same tool to automatically start it
- On first boot, the VM will "execute" the instructions in the user-data file found on the ISO in the attached CD drive (i.e., create the ansible user, grant it "sudo" rights, assign a fixed IP address and a hostname, etc.). These steps are enough to make the VM visible on the network and let Ansible take over and manage it
- For the newly-assigned IP address to become active a reboot is in order, hence the last of the user-data instructions is to perform a reboot
- Your new VMs are now ready
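The automated steps above can be sketched roughly as follows. This is an illustrative outline, not the playbook's actual tasks: the datastore path, VM names, disk size and ISO tooling (`genisoimage`) are assumptions you'd adjust to your setup.

```shell
# Illustrative sketch of the automated flow; names and paths are assumptions.
DATASTORE=/vmfs/volumes/datastore1
CLONE=ubuntu_xenial
NEW_VM=web01

# 1. Create a folder for the new VM on the ESXi host
mkdir -p "$DATASTORE/$NEW_VM"

# 2. (Locally) write a cloud-init user-data file and package it as a
#    NoCloud seed ISO; the "cidata" volume label is what cloud-init looks for
cat > user-data <<'EOF'
#cloud-config
hostname: web01
# ...users, sudo rights, static IP and a final reboot go here...
EOF
touch meta-data
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
# ...then upload seed.iso into "$DATASTORE/$NEW_VM/" (e.g. via scp)

# 3. Clone the template's disk and grow it to the requested size
vmkfstools -i "$DATASTORE/$CLONE/$CLONE.vmdk" "$DATASTORE/$NEW_VM/$NEW_VM.vmdk"
vmkfstools -X 8G "$DATASTORE/$NEW_VM/$NEW_VM.vmdk"

# 4. A .vmx with the new specs (memory, cpus, CD drive pointed at seed.iso)
#    is written to "$DATASTORE/$NEW_VM/$NEW_VM.vmx" at this point

# 5. Register the new VM and power it on
VMID=$(vim-cmd solo/registervm "$DATASTORE/$NEW_VM/$NEW_VM.vmx")
vim-cmd vmsvc/power.on "$VMID"
```

On first boot, cloud-init finds the `cidata` ISO on the attached CD drive and applies the user-data, which is what makes the VM reachable for Ansible.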
- An ESXi host (doh!) with SSH access enabled
- An Ansible-ready local machine
- A `vars/vms.yml` file listing VM specs
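For reference, a minimal `hosts` inventory might look like this; the `esxi` group name matches the playbook, while the address is an illustrative assumption:

```ini
[esxi]
192.168.1.10
```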
I've been a fan of ESXi for the longest time, but I've never had a chance to work with it on the job. I'm also a Mac mini fan and have accumulated a few over the years: they look sleek, consume little physical space, consume even less energy and are extremely quiet -- hence, they are the ideal home server lab nodes.
A few years ago, I found out that you could run the free version of ESXi on a Mac mini, and I've managed to run pretty much everything on it (from Windows XP clients, to OS X Lion as a local DNS server, with a few Linux distros thrown in for good measure).
However, the VMware mantra --like that of all traditional IT vendors-- is still to do things in a "point-n-click" way (the "pets" vs. "cattle" argument). I've been using the AWS cloud and Vagrant for years and wanted a way to (as much as possible) drive the creation of VMs in a "cloud-like", "code-driven" fashion. Hence, this is a "pet" project (no pun intended) to get ESXi to behave as much as possible like my personal home cloud, and in the process combine the tech artifacts that I love (Mac minis, Ansible, Ubuntu, & some good old Bash).
There are a few solutions out there that do something similar, but they require a licensed copy of vCenter -- it proved to be quite a challenge to get all these pieces working together using just free ESXi, so I'm quite satisfied with the current results. There's always room for improvement, so I'll continue "polishing up" what I have so far.