KVM is something new to me, and it sounds awesome. The experience I want is a local dev VM that I can copy & paste from some base image, in case I forgot to take a snapshot at a milestone. That way I feel comfortable standing up a sandbox, trying out crazy things, discarding it entirely when done, and repeating. So this means something minimal and fast.
First thing first, get kvm and a few tools:
$ sudo apt install kvm libvirt-bin bridge-utils cloud-utils cloud-init libguestfs-tools
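A quick sanity check that the host can actually do KVM and that libvirtd is reachable (kvm-ok comes from the optional cpu-checker package, which is not in the list above):
$ kvm-ok             # reports whether hardware virtualization is usable
$ virsh list --all   # an empty table is fine at this point; may need sudo or the libvirtd group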
Cloud image
Don't bother with a full blown .iso; use cloud images, e.g. xenial 16.04, my de facto choice at the time of writing. It's < 300M and boots fast. The key thing to know at this point is the file format, which qemu-img info will tell you.
$ wget [.img url]
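For example, the xenial image whose info is shown below comes from Ubuntu's cloud image archive (URL assumed current at the time of writing):
$ wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img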
To get the file format:
$ qemu-img info [.img]
---------------------
image: xenial-server-cloudimg-amd64-disk1.img
file format: qcow2
virtual size: 2.2G (2361393152 bytes)
disk size: 277M
cluster_size: 65536
Format specific information:
compat: 0.10
refcount bits: 16
Resize disk
One caveat that caught me is that a snapshot using a backing file inherits the maximum disk space from its base image. Looking at the output above, virtual size (2.2G) is the maximum disk space this image can grow to; disk size is just the .img file's size on disk when you do ls -lh. We want to increase the base image's virtual size, and here is how.
1. Resize the original image in place. This adds 20G to its virtual size. But this is not sufficient; the disk inside the image needs to be expanded as well (see step 3 below). So think of this step as a wish for the disk size I want to have, and step 3 as the actual implementation that makes it a reality.
   $ qemu-img resize orig.img +20G
   $ qemu-img info orig.img   <-- confirm new "virtual size"
2. Make a copy:
   $ cp orig.img orig-copy.qcow2
3. Resize the disk inside the qcow2 image and save it (in this case, we save the newly expanded image back to orig.img, but you can save to any file name you want; a quick verification follows these steps):
   $ sudo virt-resize --expand /dev/sda1 orig-copy.qcow2 orig.img
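To verify the resize took, both qemu-img and virt-filesystems (part of the libguestfs-tools installed earlier) can report the new sizes; a quick check:
$ qemu-img info orig.img                              # "virtual size" should now read ~22G
$ sudo virt-filesystems --long -h --all -a orig.img   # /dev/sda1 should now be close to 22G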
Backing files
Backing files are awesome! The idea is also referred to as external snapshots. A few useful references to understand this concept — Advanced snapshots w/ libvirt by Red Hat, the QEMU snapshot doc, and a blog whose diagram I'm copying below, which explains well how these snapshots relate to each other. In a nutshell, an external snapshot keeps a pointer to its base image (the backing file). Any new writes are then applied to the snapshot image, not the backing file → this feels like git commits and branches, doesn't it?
An example diagram is shown below in which 3 guests are cloned from a base image, which is then updated, and a 4th guest is then cloned off the updated base image. With all 5 virtual machines, the storage needed is only about 4.4 GB (the size of the base image).
To create one using a backing file (resized-orig.img stands for the resized base image from the previous section):
$ qemu-img create -f qcow2 -b resized-orig.img mydev.snap
To verify:
(dev) fengxia@fengxia-lenovo:~/workspace/tmp$ qemu-img info mydev.snap
image: mydev.snap
file format: qcow2
virtual size: 22G (23836229632 bytes)
disk size: 3.4G
cluster_size: 65536
backing file: resized-orig.img <<-- here!
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
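Because writes only ever land in the snapshot, you can stamp out several sandboxes from the same backing file and throw any of them away without touching the base; a sketch:
$ qemu-img create -f qcow2 -b resized-orig.img sandbox1.snap
$ qemu-img create -f qcow2 -b resized-orig.img sandbox2.snap
$ rm sandbox1.snap   # discards that sandbox only; resized-orig.img is untouched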
Then in the KVM xml, use mydev.snap as your primary disk:
<disk type='file' device='disk'>
<driver name="qemu" type="qcow2"/>
<source file="/home/fengxia/workspace/tmp/mydev.snap"/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
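Once the domain is defined from that xml, libvirt can confirm which images it is actually using (mydev is an assumed domain name here; substitute whatever <name> your xml declares):
$ virsh domblklist mydev   # should list mydev.snap behind vda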
cloud-init
Using cloud images, however, is tricky, because they don't allow password login out of the box (SSH key only) and they expect cloud-init. Without it, the snapshots we made above will boot, but you can't log in (I tried "ubuntu, passw0rd", "ubuntu, ubuntu", "ubuntu, [no password]"; none works).
To use cloud-init, we need to create a user-data file, which is actually a cloud-config in YAML format. cloud-init can use other formats, too. Take a look. A minimal version of cloud-config is shown below, which allows the ubuntu user to log in using the password value you define here, e.g. whatever001.
#cloud-config
password: whatever001
chpasswd:
  expire: False
ssh_pwauth: True
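If you would rather log in over SSH with a key, cloud-config also accepts an ssh_authorized_keys list. A sketch that writes the file under the name used later by cloud-localds (my-user-data), with a placeholder key you would replace with your own:
$ cat > my-user-data <<'EOF'
#cloud-config
password: whatever001
chpasswd:
  expire: False
ssh_pwauth: True
ssh_authorized_keys:
  - ssh-rsa AAAA...replace-with-your-public-key
EOF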
Now, how does cloud-init work? Essentially you package user-data into a disk or ISO that can be mounted to your VM at boot. Your VM's OS image should already have cloud-init installed (cloud images do), so when it boots it will search for user-data and meta-data and run their instructions.
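To see what cloud-init actually did (or why a login still fails), its state and logs live in well-known places inside the guest:
$ sudo ls /var/lib/cloud/instance/          # datasource and scripts picked up on this boot
$ sudo tail -n 50 /var/log/cloud-init.log   # detailed log of each cloud-init stage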
cloud-init in raw
$ cloud-localds -m local my-seed.img my-user-data [my-meta-data]
When using cloud-localds, make sure to use -m local to enable the NoCloud data source (otherwise, booting will get stuck with the error url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed..., because cloud-init by default expects a server somewhere serving the user-data and meta-data files; NoCloud says they are on a local disk).
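The [my-meta-data] argument is optional. If you want one (for example, to control the hostname), a NoCloud meta-data file can be as small as this; the values are just examples:
$ cat > my-meta-data <<'EOF'
instance-id: mydev-001
local-hostname: mydev
EOF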
Example as used in the KVM xml. Make sure the slot= index and the <target dev= index are unique.
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/home/fengxia/workspace/tmp/my-seed.img'/>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</disk>
cloud-init in ISO
$ genisoimage -output my-seed.iso -volid cidata -joliet -rock my-user-data [my-meta-data]
The key here is that the -volid value must be cidata!
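You can double-check the volume id on the generated image with isoinfo (shipped in the genisoimage package):
$ isoinfo -d -i my-seed.iso | grep -i 'volume id'   # should print: Volume id: cidata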
Example KVM xml below. Again, the <target dev= index should be unique.
<disk type='file' device='cdrom'>
<source file='/home/fengxia/workspace/tmp/my-seed.iso'/>
<target dev='vdb' bus='ide'/>
<readonly/>
</disk>
Sum it up
So back to our mission — to use a cloud image as the base and external snapshots as our dev sandbox (a full command recap follows the list):
- Download a cloud image
- Resize image
- Create a snapshot with backing file
- Add .snap as a disk in kvm xml
- Create user-data
- Create seed.img from user-data
- Add seed.img as a disk in kvm xml
- virsh create [your xml]
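The same steps as one plain shell session, using the file names from the sections above (mydev.xml is assumed to be your domain definition referencing both mydev.snap and my-seed.img; the download URL is the one assumed earlier):
$ wget -O orig.img https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
$ qemu-img resize orig.img +20G
$ cp orig.img orig-copy.qcow2
$ sudo virt-resize --expand /dev/sda1 orig-copy.qcow2 orig.img
$ qemu-img create -f qcow2 -b orig.img mydev.snap
$ cloud-localds -m local my-seed.img my-user-data
$ virsh create mydev.xml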
Enjoy.
helper files
Everything you need is here. The script starts a kvm from scratch and will download a 16.04 amd64 cloud image by default.
$ python startmykvm.py --help
usage: startmykvm.py [-h] [--backing [BACKING]] [--user-data [USER_DATA]]
[--cloudimg [CLOUDIMG]]
[--download-cloudimg [DOWNLOAD_CLOUDIMG]] [--delete]
xml
Create a new KVM for me
positional arguments:
xml My XML template.
optional arguments:
-h, --help show this help message and exit
--backing [BACKING], -b [BACKING]
Backfile to use when creating a snapshot.
--user-data [USER_DATA], -u [USER_DATA]
Cloud-config user-data file.
--cloudimg [CLOUDIMG], -c [CLOUDIMG]
A cloud image file.
--download-cloudimg [DOWNLOAD_CLOUDIMG], -d [DOWNLOAD_CLOUDIMG]
URL to download cloud image. The downloaded file will
be deleted at the end of this bootstrap.
--delete, -D Set to delete VM defined in xml.
To start a kvm reusing an existing backing file:
$ python startmykvm.py -b <backing>.qcow2 mydev.xml
— by Feng Xia