/etc/xen/xl.conf - XL Global/Host Configuration
The xl.conf file allows configuration of hostwide xl
toolstack options.
For details of per-domain configuration options please see xl.cfg(5).
The config file consists of a series of KEY=VALUE pairs.
A value VALUE is one of:
A string, surrounded by either single or double quotes.
A number, in either decimal, octal (using a 0 prefix) or hexadecimal (using a 0x prefix).
A NUMBER interpreted as False (0) or True (any other value).
A list of VALUES of the above types. Lists are homogeneous and are not nested.
The semantics of each KEY defines which form of VALUE is required.
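For illustration, a quoted string entry and two NUMBER-valued entries might look like the following (the values shown are examples only; the keys themselves are described below):
    lockfile="/var/lock/xl"
    claim_mode=0
    run_hotplug_scripts=1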
If set to "on" then xl
will automatically reduce the amount of memory assigned to domain 0 in order to free memory for new domains.
If set to "off" then xl
will not automatically reduce the amount of domain 0 memory.
If set to "auto" then auto-ballooning will be disabled if the dom0_mem
option was provided on the Xen command line.
You are strongly recommended to set this to "off"
(or "auto"
) if you use the dom0_mem
hypervisor command line to reduce the amount of memory given to domain 0 by default.
Default: "auto"
run_hotplug_scripts=BOOLEAN
If disabled, hotplug scripts will be called from udev, as in previous releases. With the default setting, hotplug scripts are launched by xl directly.
Default: 1
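For example, to restore the old udev-driven behaviour:
    run_hotplug_scripts=0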
lockfile="PATH"
Sets the path to the lock file used by xl to serialise certain operations (primarily domain creation).
Default: /var/lock/xl
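For example, to place the lock file elsewhere (the path shown is illustrative):
    lockfile="/run/xen/xl.lock"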
vif.default.script="PATH"
Configures the default hotplug script used by virtual network devices.
The old vifscript option is deprecated and should not be used.
Default: /etc/xen/scripts/vif-bridge
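For example, a host using routed rather than bridged networking might default to the vif-route script shipped with Xen:
    vif.default.script="/etc/xen/scripts/vif-route"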
vif.default.bridge="NAME"
Configures the default bridge to set for virtual network devices.
The old defaultbridge option is deprecated and should not be used.
Default: xenbr0
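For example, if guests should attach to a bridge named br0 (an illustrative name) by default:
    vif.default.bridge="br0"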
vif.default.backend="NAME"
Configures the default backend to set for virtual network devices.
Default: 0
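For example, to make a driver domain named netdom (a hypothetical domain name) the default network backend:
    vif.default.backend="netdom"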
vif.default.gatewaydev="NAME"
Configures the default gateway device to set for virtual network devices.
Default: None
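For example, if the host routes guest traffic out through eth0 (an illustrative interface name):
    vif.default.gatewaydev="eth0"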
remus.default.netbufscript="PATH"
Configures the default script used by Remus to set up network buffering.
Default: /etc/xen/scripts/remus-netbuf-setup
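For example, to point Remus at a site-local copy of the buffering script (the path is purely illustrative):
    remus.default.netbufscript="/usr/local/etc/xen/scripts/remus-netbuf-setup"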
output_format="json|sxp"
Configures the default output format used by xl when printing "machine readable" information. The default is to use the JSON (http://www.json.org/) syntax. However, for compatibility with the previous xm toolstack, this can be configured to use the old SXP (S-Expression-like) syntax instead.
Default: json
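For example, to retain the xm-compatible output format:
    output_format="sxp"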
blkdev_start="NAME"
Configures the name of the first block device to be used for temporary block device allocations by the toolstack. The default choice is "xvda".
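For example, to start temporary toolstack allocations at a higher device name and leave the lower names for guest configurations (the chosen name is illustrative):
    blkdev_start="xvdq"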
claim_mode=BOOLEAN
If this option is enabled then, when a guest is created, there will be a guarantee that there is memory available for the guest. This is a particularly acute problem on memory over-provisioned hosts whose guests use tmem and have self-ballooning enabled (which is the default). The self-balloon mechanism can deflate/inflate the balloon quickly, so the amount of free memory (which xl info can show) is stale the moment it is printed. When claim is enabled, a reservation for the amount of memory (see 'memory' in xl.cfg(5)) is made, which is then reduced as the domain's memory is populated and eventually reaches zero. The free memory reported by xl info is the hypervisor's free heap memory minus the outstanding claims value.
If the reservation cannot be met, guest creation fails immediately instead of taking seconds or minutes (depending on the size of the guest) while the guest is populated.
Note that to enable tmem-type guests, one needs to provide tmem on the Xen hypervisor command line as well as on the Linux kernel command line.
Note that the claim call is not attempted if the superpages option is used in the guest config (see xl.cfg(5)).
Default: 1
0
No claim is made. Memory population during guest creation will be attempted as normal and may fail due to memory exhaustion.
1
Normal memory and the freeable pool of ephemeral pages (tmem) are used when calculating whether there is enough memory free to launch a guest. This guarantees immediate feedback on whether the guest can be launched, which memory exhaustion could otherwise take a long time to reveal when launching very large guests.
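For example, to disable the claim mechanism and attempt memory population optimistically:
    claim_mode=0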
vm.cpumask="CPULIST"
vm.hvm.cpumask="CPULIST"
vm.pv.cpumask="CPULIST"
Global masks that are applied when creating guests and pinning vcpus to indicate which cpus they are allowed to run on. Specifically, vm.cpumask applies to all guest types, vm.hvm.cpumask applies to HVM guests and vm.pv.cpumask applies to PV guests.
The hard affinity of a guest's vcpus is logical-AND'ed with the respective mask. If the resulting affinity mask is empty, the operation will fail.
Use --ignore-global-affinity-masks to skip applying global masks.
The default value for these masks is all 1's, i.e. all cpus are allowed.
Due to bug(s), these options may not interact well with other options concerning CPU affinity. One example is CPU pools. Users should always double check that the required affinity has taken effect.
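For example, to keep guest vcpus off cpus 0 and 1 and further restrict PV guests (the cpu numbers are illustrative, using xl's usual cpu-list syntax):
    vm.cpumask="2-7"
    vm.pv.cpumask="2-3"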