
4.3. Lifecycle control

TBD

4.3.1. Provisioning

Provisioning refers to the task of creating new guest domains, typically using some form of operating system installation media. There are a wide variety of ways in which a guest can be provisioned, but the choices available will vary according to the hypervisor and type of guest domain being provisioned. It is not uncommon for an application to support several different provisioning methods.

4.3.1.1. APIs for provisioning

There are up to three APIs involved in provisioning guests. The virDomainCreateXML command will create and immediately boot a new transient guest domain; when this guest domain shuts down, all trace of it will disappear. The virDomainDefineXML command will store the configuration for a persistent guest domain, and the virDomainCreate command will boot a previously defined guest domain from its persistent configuration. One important thing to note is that the virDomainDefineXML command can be used to turn a previously booted transient guest domain into a persistent domain. This can be useful for some provisioning scenarios that will be illustrated later.
4.3.1.1.1. Booting a transient guest domain
Booting a transient guest domain simply requires a connection to libvirt and a string containing the XML document describing the required guest configuration. The following example assumes that conn is an instance of the virConnectPtr object.
	    
virDomainPtr dom;
const char *xmlconfig = "<domain>........</domain>";

dom = virDomainCreateXML(conn, xmlconfig, 0);

if (!dom) {
    fprintf(stderr, "Domain creation failed\n");
    return;
}

fprintf(stderr, "Guest %s has booted\n", virDomainGetName(dom));
virDomainFree(dom);
If the domain creation attempt succeeded, then the returned virDomainPtr will be a handle to the guest domain. This must be released later, when no longer needed, using the virDomainFree method. Although the domain was booted successfully, this does not guarantee that the domain is still running. It is entirely possible for the guest domain to crash, in which case attempts to use the returned virDomainPtr object will generate an error, since transient guests cease to exist when they shut down (whether a planned shutdown or a crash). Coping with this scenario requires the use of a persistent guest.
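Alternatively, a still-running transient guest can be promoted to a persistent one before it has the chance to disappear, by capturing its live XML and defining it. The following is a minimal sketch, assuming dom is a valid virDomainPtr for a running transient guest and conn is the connection it was created against; the function name make_persistent is illustrative, not part of libvirt.

```c
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

/* Promote a running transient guest to a persistent one.
 * Returns 0 on success, -1 on failure. */
static int
make_persistent(virConnectPtr conn, virDomainPtr dom)
{
    char *xml;
    virDomainPtr persistent;

    /* Capture the live configuration of the running guest */
    xml = virDomainGetXMLDesc(dom, 0);
    if (!xml)
        return -1;

    /* Define it: the guest keeps running, but its configuration
     * will now survive a shutdown as an inactive persistent domain */
    persistent = virDomainDefineXML(conn, xml);
    free(xml);
    if (!persistent)
        return -1;

    virDomainFree(persistent);
    return 0;
}
```

Since virDomainDefineXML matches on the name and UUID embedded in the XML, the running guest itself becomes persistent, rather than a second copy being created.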
4.3.1.1.2. Defining and booting a persistent guest domain
Before a persistent domain can be booted, it must have its configuration defined. This again requires a connection to libvirt and a string containing the XML document describing the required guest configuration. The virDomainPtr object obtained from defining the guest can then be used to boot it. The following example assumes that conn is an instance of the virConnectPtr object.
	    
virDomainPtr dom;
const char *xmlconfig = "<domain>........</domain>";

dom = virDomainDefineXML(conn, xmlconfig);

if (!dom) {
    fprintf(stderr, "Domain definition failed\n");
    return;
}

if (virDomainCreate(dom) < 0) {
    fprintf(stderr, "Cannot boot guest\n");
    virDomainFree(dom);
    return;
}

fprintf(stderr, "Guest %s has booted\n", virDomainGetName(dom));
virDomainFree(dom);

4.3.1.2. New guest provisioning techniques

This section will first illustrate two configurations that allow for a provisioning approach that is comparable to those used for physical machines. It then outlines a third option which is specific to virtualized hardware, but has some interesting benefits. For the purposes of illustration, the examples that follow will use an XML configuration that sets up a KVM fully virtualized guest, with a single disk and network interface and a video card using VNC for display.
	  
<domain type='kvm'>
  <name>demo</name>
  <uuid>c7a5fdbd-cdaf-9455-926a-d65c16db1809</uuid>
  <memory>500000</memory>
  <vcpu>1</vcpu>
  .... the <os> block will vary per approach ...
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo.img'/>
      <driver name='qemu' type='raw'/>
      <target dev='hda'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:d8:65:c9'/>
      <source bridge='br0'/>
    </interface>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' listen='127.0.0.1'/>
  </devices>
</domain>
TIP: Be careful in the choice of initial memory allocation, since too low a value may cause mysterious crashes and installation failures. Some operating systems need as much as 600 MB of memory for initial installation, though this can often be reduced post-install.
4.3.1.2.1. CDROM/ISO image provisioning
All full virtualization technologies have support for emulating a CDROM device in a guest domain, making this an obvious choice for provisioning new guest domains. It is, however, fairly rare to find a hypervisor which provides CDROM devices for paravirtualized guests.
The first obvious change required to the XML configuration to support CDROM installation is to add a CDROM device. A guest domain's CDROM device can be pointed at either a host CDROM device or an ISO image file. The next change is to determine what the BIOS boot order should be, with there being two possible options. If the hard disk is listed ahead of the CDROM device, then the CDROM media won't be booted unless the first boot sector on the hard disk is blank. If the CDROM device is listed ahead of the hard disk, then it will be necessary to alter the guest config after install to make it boot off the installed disk. While both can be made to work, the first option is easiest to implement.
The guest configuration shown earlier would have the following XML chunk inserted:
	    
<os>
  <type arch='x86_64' machine='pc'>hvm</type>
  <boot dev='hd'/>
  <boot dev='cdrom'/>
</os>
NB, this assumes the hard disk boot sector is blank initially, so that the first boot attempt falls through to the CD-ROM drive. A CD-ROM drive device will also need to be added:
	    
<disk type='file' device='cdrom'>
  <source file='/var/lib/libvirt/images/rhel5-x86_64-dvd.iso'/>
  <target dev='hdc' bus='ide'/>
</disk>
With the configuration determined, it is now possible to provision the guest. This is an easy process, simply requiring a persistent guest to be defined, and then booted.
	    
const char *xml = "<domain>....</domain>";
virDomainPtr dom;

dom = virDomainDefineXML(conn, xml);
if (!dom) {
    fprintf(stderr, "Unable to define persistent guest configuration\n");
    return;
}

if (virDomainCreate(dom) < 0) {
    fprintf(stderr, "Unable to boot guest configuration\n");
    virDomainFree(dom);
    return;
}

virDomainFree(dom);
If it was not possible to guarantee that the boot sector of the hard disk is blank, then provisioning would have been a two step process. First a transient guest would have been booted using the CD-ROM drive as the primary boot device. Once that completed, a persistent configuration for the guest would be defined to boot off the hard disk.
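That two step variant can be sketched as follows, in the style of the snippet above. The configuration strings xml_cdrom_first and xml_hd_only are hypothetical names for two versions of the guest XML that differ only in their <boot> ordering, and the mechanism for waiting on the installer is left abstract:

```c
const char *xml_cdrom_first = "<domain>....</domain>"; /* CD-ROM ahead of hd */
const char *xml_hd_only = "<domain>....</domain>";     /* boots off hard disk */
virDomainPtr dom;

/* Step 1: boot a transient guest straight off the install media */
dom = virDomainCreateXML(conn, xml_cdrom_first, 0);
if (!dom) {
    fprintf(stderr, "Unable to boot transient install guest\n");
    return;
}
virDomainFree(dom);

/* ... wait for the installer to complete; once the guest powers
 * off, the transient domain vanishes from libvirt entirely ... */

/* Step 2: define the persistent configuration booting off disk */
dom = virDomainDefineXML(conn, xml_hd_only);
if (!dom) {
    fprintf(stderr, "Unable to define persistent guest configuration\n");
    return;
}
virDomainFree(dom);
```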
4.3.1.2.2. PXE boot provisioning
Some newer full virtualization technologies provide a BIOS that is able to use the PXE boot protocol to boot off the network. If an environment already has a PXE boot provisioning server deployed, this is a desirable method to use for guest domains.
PXE booting a guest obviously requires that the guest has a network device configured. The LAN that this network card is attached to also needs a PXE / TFTP server available. The next change is to determine what the BIOS boot order should be, with there being two possible options. If the hard disk is listed ahead of the network device, then the network card won't PXE boot unless the first boot sector on the hard disk is blank. If the network device is listed ahead of the hard disk, then it will be necessary to alter the guest config after install to make it boot off the installed disk. While both can be made to work, the first option is easiest to implement.
The guest configuration shown earlier would have the following XML chunk inserted:
	    
<os>
  <type arch='x86_64' machine='pc'>hvm</type>
  <boot dev='hd'/>
  <boot dev='network'/>
</os>
NB, this assumes the hard disk boot sector is blank initially, so that the first boot attempt falls through to the NIC.
With the configuration determined, it is now possible to provision the guest. This is an easy process, simply requiring a persistent guest to be defined, and then booted.
	    
const char *xml = "<domain>....</domain>";
virDomainPtr dom;

dom = virDomainDefineXML(conn, xml);
if (!dom) {
    fprintf(stderr, "Unable to define persistent guest configuration\n");
    return;
}

if (virDomainCreate(dom) < 0) {
    fprintf(stderr, "Unable to boot guest configuration\n");
    virDomainFree(dom);
    return;
}

virDomainFree(dom);
If it was not possible to guarantee that the boot sector of the hard disk is blank, then provisioning would have been a two step process. First a transient guest would have been booted using the network as the primary boot device. Once that completed, a persistent configuration for the guest would be defined to boot off the hard disk.
4.3.1.2.3. Direct kernel boot provisioning
Paravirtualization technologies emulate a fairly restrictive set of hardware, often making it impossible to use the provisioning options just outlined. For such scenarios it is often possible to boot a new guest domain directly from a kernel and initrd image stored on the host file system. This has one interesting advantage, which is that it is possible to directly set kernel command line boot arguments, making it very easy to do fully automated installation. This advantage can be compelling enough that this technique is used even for fully virtualized guest domains with CD-ROM drive/PXE support.
The one complication with direct kernel booting is that provisioning becomes a two step process. For the first step, it is necessary to configure the guest XML configuration to point to a kernel/initrd.
	    
<os>
  <type arch='x86_64' machine='pc'>hvm</type>
  <kernel>/var/lib/libvirt/boot/f11-x86_64-vmlinuz</kernel>
  <initrd>/var/lib/libvirt/boot/f11-x86_64-initrd.img</initrd>
  <cmdline>method=http://download.fedoraproject.org/pub/fedora/linux/releases/11/x86_64/os console=ttyS0 console=tty</cmdline>
</os>
Notice how the kernel command line provides the URL of a download site containing the distro install tree matching the kernel/initrd. This allows the installer to automatically download all its resources without prompting the user for an install URL. It could also be used to provide a kickstart file for completely unattended installation. Finally, this command line also tells the kernel to activate both the first serial port and the VGA card as consoles, with the latter being the default. Having kernel messages duplicated on the serial port in this manner can be a useful debugging avenue. Of course valid command line arguments vary according to the particular kernel being booted. Consult the kernel vendor/distributor's documentation for valid options.
The last XML configuration detail before starting the guest is to change the 'on_reboot' element action to be 'destroy'. This ensures that when the guest installer finishes and requests a reboot, the guest is instead powered off. This allows the management application to change the configuration to make it boot off the just-installed hard disk. The provisioning process can now be started by creating a transient guest with the first XML configuration:
	    
const char *xml = "<domain>....</domain>";
virDomainPtr dom;

dom = virDomainCreateXML(conn, xml, 0);
if (!dom) {
    fprintf(stderr, "Unable to boot transient guest configuration\n");
    return;
}
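Because the guest is transient, it disappears from libvirt entirely once the installer powers it off, so the simplest (if crude) way to detect completion is to poll for the domain by name until the lookup fails. A sketch, assuming the guest was named demo in its XML:

```c
virDomainPtr d;

/* Poll until the transient guest has gone away, i.e. the
 * installer has finished and the domain was destroyed */
while ((d = virDomainLookupByName(conn, "demo")) != NULL) {
    virDomainFree(d);
    sleep(5);  /* needs <unistd.h> */
}
```

A production application would more likely register for domain lifecycle events rather than poll, but the loop above is enough to illustrate the phase transition.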
Once this guest shuts down, the second phase of the provisioning process can be started. For this phase, the 'os' element will have the kernel/initrd/cmdline elements removed, and replaced by either a reference to a host side bootloader, or a BIOS boot setup. The former is used for Xen paravirtualized guests, while the latter is used for fully virtualized guests.
The phase 2 configuration for a Xen paravirtualized guest would thus look like:
	    
<bootloader>/usr/bin/pygrub</bootloader>
<os>
  <type arch='x86_64' machine='pc'>xen</type>
</os>
while a fully-virtualized guest would use:
	    
<os>
  <type arch='x86_64' machine='pc'>hvm</type>
  <boot dev='hd'/>
</os>
With the second phase configuration determined, the guest can be recreated, this time using a persistent configuration:
	    
const char *xml = "<domain>....</domain>";
virDomainPtr dom;

dom = virDomainDefineXML(conn, xml);
if (!dom) {
    fprintf(stderr, "Unable to define persistent guest configuration\n");
    return;
}

if (virDomainCreate(dom) < 0) {
    fprintf(stderr, "Unable to boot persistent guest\n");
    virDomainFree(dom);
    return;
}

fprintf(stderr, "Guest provisoning complete, OS is running\n");