XL - Xen management tool, based on LibXenlight
xl subcommand [args]
The xl program is the new tool for managing Xen guest domains. The program can be used to create, pause, and shutdown domains. It can also be used to list current domains, enable or pin VCPUs, and attach or detach virtual block devices.
The basic structure of every xl command is almost always:
xl subcommand [OPTIONS] domain-id
Where subcommand is one of the subcommands listed below, domain-id is the numeric domain id, or the domain name (which will be internally translated to domain id), and OPTIONS are subcommand specific options. There are a few exceptions to this rule in the cases where the subcommand in question acts on all domains, the entire machine, or directly on the Xen hypervisor. Those exceptions will be clear for each of those subcommands.
Most xl operations rely upon xenstored and xenconsoled: make sure you start the script /etc/init.d/xencommons at boot time to initialize all the daemons needed by xl.
In the most common network configuration, you need to set up a bridge in dom0 named xenbr0 in order to have a working network in the guest domains. Please refer to the documentation of your Linux distribution to learn how to set up the bridge.
If you specify the amount of memory dom0 has, by passing dom0_mem to Xen, it is highly recommended to disable autoballooning: edit /etc/xen/xl.conf and set autoballoon=0.
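For example, if the hypervisor was booted with dom0_mem=1024M on its command line (the value here is purely illustrative), /etc/xen/xl.conf should contain:

autoballoon=0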
Most xl commands require root privileges to run, due to the communication channels used to talk to the hypervisor. Running as a non-root user will return an error.
Some global options are always available:
Verbose.
Dry run: do not actually execute the command.
Force execution: xl will refuse to run some commands if it detects that xend is also running; this option will force the execution of those commands, even though it is unsafe.
Always use carriage-return-based overwriting for printing progress messages without scrolling the screen. Without -t, this is done only if stderr is a tty.
The following subcommands manipulate domains directly. As stated previously, most commands take domain-id as the first parameter.
This command is deprecated. Please use xl trigger in preference.
Indicate an ACPI button press to the domain. button may be 'power' or 'sleep'. This command is only available for HVM domains.
The create subcommand takes a config file as first argument: see xl.cfg for full details of that file format and possible options. If configfile is missing, xl creates the domain starting from the default value for every option.
configfile has to be an absolute path to a file.
Create will return as soon as the domain is started. This does not mean the guest OS in the domain has actually booted, or is available for input.
If the -F option is specified, create will start the domain and not return until its death.
OPTIONS
No console output.
Use the given configuration file.
Leave the domain paused after it is created.
Run in foreground until death of the domain.
Attach to domain's VNC server, forking a vncviewer process.
Pass VNC password to vncviewer via stdin.
Attach console to the domain as soon as it has started. This is useful for determining issues with crashing domains and just as a general convenience since you often want to watch the domain boot.
It is possible to pass key=value pairs on the command line to provide options as if they were written in the configuration file; these override whatever is in the configfile.
NB: Many config options require characters such as quotes or brackets which are interpreted by the shell (and often discarded) before being passed to xl, resulting in xl being unable to parse the value correctly. A simple work-around is to put all extra options within a single set of quotes, separated by semicolons. (See below for an example.)
EXAMPLES
xl create DebianLenny
This creates a domain with the file /etc/xen/DebianLenny, and returns as soon as it is run.
xl create hvm.cfg 'cpus="0-3"; pci=["01:05.1","01:05.2"]'
This creates a domain with the file hvm.cfg, but additionally pins it to cpus 0-3, and passes through two PCI devices.
Update the saved configuration for a running domain. This has no immediate effect but will be applied when the guest is next restarted. This command is useful to ensure that runtime modifications made to the guest will be preserved when the guest is restarted.
Since Xen 4.5 xl has improved capabilities to handle dynamic domain configuration changes and will preserve any changes made at runtime when necessary. Therefore it should not normally be necessary to use this command any more.
configfile has to be an absolute path to a file.
OPTIONS
Use the given configuration file.
It is possible to pass key=value pairs on the command line to provide options as if they were written in the configuration file; these override whatever is in the configfile. Please see the note under create on handling special characters when passing key=value pairs on the command line.
Attach to domain domain-id's console. If you've set up your domains to have a traditional login console this will look much like a normal text login screen.
Use the key combination Ctrl+] to detach the domain console.
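For example, assuming a hypothetical HVM guest named winguest that has both console types configured:

xl console -t pv winguest

attaches to its PV console, while leaving out -t would attach to the emulated serial console, the default for HVM guests.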
OPTIONS
Connect to a PV console or connect to an emulated serial console. PV consoles are the only consoles available for PV domains while HVM domains can have both. If this option is not specified it defaults to emulated serial for HVM guests and PV console for PV guests.
Connect to console number NUM. Console numbers start from 0.
Immediately terminate the domain domain-id. This doesn't give the domain OS any chance to react, and is the equivalent of ripping the power cord out on a physical machine. In most cases you will want to use the shutdown command instead.
OPTIONS
Allow domain 0 to be destroyed. Because a domain cannot destroy itself, this is only possible when using a disaggregated toolstack, and is most useful when using a hardware domain separated from domain 0.
Converts a domain name to a domain id.
Converts a domain id to a domain name.
Change the domain name of domain-id to new-name.
Dumps the virtual machine's memory for the specified domain to the filename specified, without pausing the domain. The dump file will be written to a distribution-specific directory for dump files, such as /var/lib/xen/dump.
Displays the short help message (i.e. common commands).
The --long option prints out the complete set of xl subcommands, grouped by function.
Prints information about one or more domains. If no domains are specified it prints out information about all domains.
OPTIONS
The output for xl list is not the table view shown below, but instead presents the data as a JSON data structure.
Also prints the domain UUIDs, the shutdown reason and security labels.
Also prints the cpupool the domain belongs to.
Also prints the domain NUMA node affinity.
EXAMPLE
An example format for the list is as follows:
Name ID Mem VCPUs State Time(s)
Domain-0 0 750 4 r----- 11794.3
win 1 1019 1 r----- 0.3
linux 2 2048 2 r----- 5624.2
Name is the name of the domain. ID the numeric domain id. Mem is the desired amount of memory to allocate to the domain (although it may not be the currently allocated amount). VCPUs is the number of virtual CPUs allocated to the domain. State is the run state (see below). Time is the total run time of the domain as accounted for by Xen.
STATES
The State field shows which of the 6 possible states a Xen domain is currently in.
The domain is currently running on a CPU.
The domain is blocked, and not running or runnable. This can be caused because the domain is waiting on IO (a traditional wait state) or has gone to sleep because there was nothing else for it to do.
The domain has been paused, usually occurring through the administrator running xl pause. When in a paused state the domain will still consume allocated resources like memory, but will not be eligible for scheduling by the Xen hypervisor.
The guest OS has shut down (SCHEDOP_shutdown has been called) but the domain is not dying yet.
The domain has crashed, which is always a violent ending. Usually this state can only occur if the domain has been configured not to restart on crash. See xl.cfg(5) for more info.
The domain is in process of dying, but hasn't completely shutdown or crashed.
NOTES
The Time column is deceptive. Virtual IO (network and block devices) used by domains requires coordination by Domain0, which means that Domain0 is actually charged for much of the time that a DomainU is doing IO. Use of this time value to determine relative utilizations by domains is thus very suspect, as a high IO workload may show as less utilized than a high CPU workload. Consider yourself warned.
Specify the maximum amount of memory the domain is able to use, appending 't' for terabytes, 'g' for gigabytes, 'm' for megabytes, 'k' for kilobytes and 'b' for bytes.
The mem-max value may not correspond to the actual memory used in the domain, as it may balloon down its memory to give more back to the OS.
Set the domain's used memory using the balloon driver; append 't' for terabytes, 'g' for gigabytes, 'm' for megabytes, 'k' for kilobytes and 'b' for bytes.
Because this operation requires cooperation from the domain operating system, there is no guarantee that it will succeed. This command will definitely not work unless the domain has the required paravirt driver.
Warning: There is no good way to know in advance how small of a mem-set will make a domain unstable and cause it to crash. Be very careful when using this command on running domains.
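For example, to shrink a hypothetical guest named myguest down to 512 megabytes via the balloon driver:

xl mem-set myguest 512m

The domain name and size are illustrative; be sure the new size leaves the guest OS enough memory to keep running.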
Migrate a domain to another host machine. By default xl relies on ssh as a transport mechanism between the two hosts.
OPTIONS
Use <sshcommand> instead of ssh. The string will be passed to sh. If empty, run <host> instead of ssh <host> xl migrate-receive [-d -e].
On the new host, do not wait in the background (on <host>) for the death of the domain. See the corresponding option of the create subcommand.
Send <config> instead of the config file used at creation.
Print a huge (!) amount of debug output during the migration process.
Enable Remus HA or COLO HA for domain. By default xl relies on ssh as a transport mechanism between the two hosts.
NOTES
Remus support in xl is still in an experimental (proof-of-concept) phase. Disk replication support is limited to DRBD disks.
COLO support in xl is still in an experimental (proof-of-concept) phase. All options are subject to change in the future.
COLO disk configuration looks like:
disk = ['...,colo,colo-host=xxx,colo-port=xxx,colo-export=xxx,active-disk=xxx,hidden-disk=xxx...']
The supported options are:
COLO network configuration looks like:
vif = [ '...,forwarddev=xxx,...']
The supported options are:
OPTIONS
Checkpoint domain memory every MS milliseconds (default 200ms).
Disable memory checkpoint compression.
Use <sshcommand> instead of ssh. The string will be passed to sh. If empty, run <host> instead of ssh <host> xl migrate-receive -r [-e].
On the new host, do not wait in the background (on <host>) for the death of the domain. See the corresponding option of the create subcommand.
Use <netbufscript> to setup network buffering instead of the default script (/etc/xen/scripts/remus-netbuf-setup).
Run Remus in unsafe mode. Use this option with caution as failover may not work as intended.
Replicate memory checkpoints to /dev/null (blackhole). Generally useful for debugging. Requires enabling unsafe mode.
Disable network output buffering. Requires enabling unsafe mode.
Disable disk replication. Requires enabling unsafe mode.
Enable COLO HA. This conflicts with -i and -b, and memory checkpoint compression must be disabled.
Pause a domain. When in a paused state the domain will still consume allocated resources such as memory, but will not be eligible for scheduling by the Xen hypervisor.
Reboot a domain. This acts just as if the domain had the reboot command run from the console. The command returns as soon as it has executed the reboot action, which may be significantly before the domain actually reboots.
For HVM domains this requires PV drivers to be installed in your guest OS. If PV drivers are not present but you have configured the guest OS to behave appropriately you may be able to use the -F option to trigger a reset button press.
The behavior of what happens to a domain when it reboots is set by the on_reboot parameter of the domain configuration file when the domain was created.
OPTIONS
If the guest does not support PV reboot control then fall back to sending an ACPI power event (equivalent to the reset option to trigger).
You should ensure that the guest is configured to behave as expected in response to this event.
Build a domain from an xl save state file. See save for more info.
OPTIONS
Do not unpause domain after restoring it.
Do not wait in the background for the death of the domain on the new host. See the corresponding option of the create subcommand.
Enable debug messages.
Attach to domain's VNC server, forking a vncviewer process.
Pass VNC password to vncviewer via stdin.
Saves a running domain to a state file so that it can be restored later. Once saved, the domain will no longer be running on the system, unless the -c or -p options are used. xl restore restores from this checkpoint file. Passing a config file argument allows the user to manually select the VM config file used to create the domain.
Leave domain running after creating the snapshot.
Leave domain paused after creating the snapshot.
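For example, to checkpoint a hypothetical domain named myguest to a file and bring it back later (the path is illustrative):

xl save myguest /var/lib/xen/save/myguest.chk
xl restore /var/lib/xen/save/myguest.chk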
List count of shared pages.
OPTIONS
List specifically for that domain. Otherwise, list for all domains.
Gracefully shuts down a domain. This coordinates with the domain OS to perform graceful shutdown, so there is no guarantee that it will succeed, and it may take a variable length of time depending on what services must be shut down in the domain.
For HVM domains this requires PV drivers to be installed in your guest OS. If PV drivers are not present but you have configured the guest OS to behave appropriately you may be able to use the -F option to trigger a power button press.
The command returns immediately after signalling the domain, unless the -w flag is used.
The behavior of what happens to a domain when it shuts down is set by the on_shutdown parameter of the domain configuration file when the domain was created.
OPTIONS
Shut down all guest domains. Often used when doing a complete shutdown of a Xen system.
Wait for the domain to complete shutdown before returning.
If the guest does not support PV shutdown control then fall back to sending an ACPI power event (equivalent to the power option to trigger).
You should ensure that the guest is configured to behave as expected in response to this event.
Send a <Magic System Request> to the domain; each type of request is represented by a different letter. It can be used to send SysRq requests to Linux guests: see sysrq.txt in your Linux kernel sources for more information. It requires PV drivers to be installed in your guest OS.
Send a trigger to a domain, where the trigger can be: nmi, reset, init, power or sleep. Optionally a specific vcpu number can be passed as an argument. This command is only available for HVM domains.
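For example, to indicate a power button press to a hypothetical HVM guest named winguest:

xl trigger winguest power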
Moves a domain out of the paused state. This will allow a previously paused domain to now be eligible for scheduling by the Xen hypervisor.
Enables the vcpu-count virtual CPUs for the domain in question. Like mem-set, this command can only allocate up to the maximum virtual CPU count configured at boot for the domain.
If the vcpu-count is smaller than the current number of active VCPUs, the highest-numbered VCPUs will be hotplug-removed. This may be important for pinning purposes.
Attempting to set the VCPUs to a number larger than the initially configured VCPU count is an error. Trying to set VCPUs to < 1 will be quietly ignored.
Some guests may need to actually bring the newly added CPU online after vcpu-set; see the SEE ALSO section for information.
Lists VCPU information for a specific domain. If no domain is specified, VCPU information for all domains will be provided.
Set hard and soft affinity for a vcpu of <domain-id>. Normally VCPUs can float between available CPUs whenever Xen deems a different run state is appropriate.
Hard affinity can be used to restrict this, by ensuring certain VCPUs can only run on certain physical CPUs. Soft affinity specifies a preferred set of CPUs. Soft affinity needs special support in the scheduler, which is only provided in credit1.
The keyword all can be used to apply the hard and soft affinity masks to all the VCPUs in the domain. The symbol '-' can be used to leave either hard or soft affinity alone.
For example:
xl vcpu-pin 0 3 - 6-9
will set soft affinity for vCPU 3 of domain 0 to pCPUs 6,7,8 and 9, leaving its hard affinity untouched. On the other hand:
xl vcpu-pin 0 3 3,4 6-9
will set both hard and soft affinity, the former to pCPUs 3 and 4, the latter to pCPUs 6,7,8, and 9.
Specifying -f or --force will remove a temporary pinning done by the operating system (normally this should be done by the operating system). In case a temporary pinning is active for a vcpu the affinity of this vcpu can't be changed without this option.
Prints information about guests. This list excludes information about service or auxiliary domains such as dom0 and stubdoms.
EXAMPLE
An example format for the list is as follows:
UUID ID name
59e1cf6c-6ab9-4879-90e7-adc8d1c63bf5 2 win
50bc8f75-81d0-4d53-b2e6-95cb44e2682e 3 linux
Attach to domain's VNC server, forking a vncviewer process.
OPTIONS
Pass VNC password to vncviewer via stdin.
Send debug keys to Xen. It is the same as pressing the Xen "conswitch" (Ctrl-A by default) three times and then pressing "keys".
Reads the Xen message buffer, similar to dmesg on a Linux system. The buffer contains informational, warning, and error messages created during Xen's boot process. If you are having problems with Xen, this is one of the first places to look as part of problem determination.
OPTIONS
Clears Xen's message buffer.
Print information about the Xen host in name : value format. When reporting a Xen bug, please provide this information as part of the bug report. See http://wiki.xen.org/xenwiki/ReportingBugs on how to report Xen bugs.
Sample output looks as follows:
host : scarlett
release : 3.1.0-rc4+
version : #1001 SMP Wed Oct 19 11:09:54 UTC 2011
machine : x86_64
nr_cpus : 4
nr_nodes : 1
cores_per_socket : 4
threads_per_core : 1
cpu_mhz : 2266
hw_caps : bfebfbff:28100800:00000000:00003b40:009ce3bd:00000000:00000001:00000000
virt_caps : hvm hvm_directio
total_memory : 6141
free_memory : 4274
free_cpus : 0
outstanding_claims : 0
xen_major : 4
xen_minor : 2
xen_extra : -unstable
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : Wed Nov 02 17:09:09 2011 +0000 24066:54a5e994a241
xen_commandline : com1=115200,8n1 guest_loglvl=all dom0_mem=750M console=com1
cc_compiler : gcc version 4.4.5 (Debian 4.4.5-8)
cc_compile_by : sstabellini
cc_compile_domain : uk.xensource.com
cc_compile_date : Tue Nov 8 12:03:05 UTC 2011
xend_config_format : 4
FIELDS
Not all fields will be explained here, but some of the less obvious ones deserve explanation:
A vector showing what hardware capabilities are supported by your processor. This is equivalent to, though more cryptic than, the flags field in /proc/cpuinfo on a normal Linux machine: they both derive from the feature bits returned by the cpuid command on x86 platforms.
Available memory (in MB) not allocated to Xen, or any other domains, or claimed for domains.
When a claim call is done (see xl.conf) a reservation for a specific amount of pages is set and also a global value is incremented. This global value (outstanding_claims) is then reduced as the domain's memory is populated and eventually reaches zero. Most of the time the value will be zero, but if you are launching multiple guests, and claim_mode is enabled, this value can increase/decrease. Note that the value also affects the free_memory value, as it will reflect the free memory in the hypervisor minus the outstanding pages claimed for guests. See the xl claims command for a detailed listing.
The Xen version and architecture. Architecture values can be one of: x86_32, x86_32p (i.e. PAE enabled), x86_64, ia64.
The Xen mercurial changeset id. Very useful for determining exactly what version of code your Xen system was built from.
OPTIONS
List host NUMA topology information
Executes the xentop command, which provides real-time monitoring of domains. Xentop has a curses interface and is reasonably self-explanatory.
Prints the current uptime of the domains running.
Prints information about outstanding claims by the guests. This provides the outstanding claims and currently populated memory count for the guests. These values added up reflect the global outstanding claim value, which is provided via the info argument's outstanding_claims value. The Mem column has the cumulative value of outstanding claims and the total amount of memory currently allocated to the guest.
EXAMPLE
An example format for the list is as follows:
Name ID Mem VCPUs State Time(s) Claimed
Domain-0 0 2047 4 r----- 19.7 0
OL5 2 2048 1 --p--- 0.0 847
OL6 3 1024 4 r----- 5.9 0
Windows_XP 4 2047 1 --p--- 0.0 1989
Here it can be seen that the OL5 guest still has 847MB of claimed memory (out of the total 2048MB, of which 1191MB has been allocated to the guest).
Xen ships with a number of domain schedulers, which can be set at boot time with the sched= parameter on the Xen command line. By default credit is used for scheduling.
Set or get credit scheduler parameters. The credit scheduler is a proportional fair share CPU scheduler built from the ground up to be work conserving on SMP hosts.
Each domain (including Domain0) is assigned a weight and a cap.
OPTIONS
Specify domain for which scheduler parameters are to be modified or retrieved. Mandatory for modifying scheduler parameters.
A domain with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended host. Legal weights range from 1 to 65535 and the default is 256.
The cap optionally fixes the maximum amount of CPU a domain will be able to consume, even if the host system has idle CPU cycles. The cap is expressed in percentage of one physical CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc. The default, 0, means there is no upper cap.
NB: Many systems have features that will scale down the computing power of a cpu that is not 100% utilized. This can be in the operating system, but can also sometimes be below the operating system in the BIOS. If you set a cap such that individual cores are running at less than 100%, this may have an impact on the performance of your workload over and above the impact of the cap. For example, if your processor runs at 2GHz, and you cap a vm at 50%, the power management system may also reduce the clock speed to 1GHz; the effect will be that your VM gets 25% of the available power (50% of 1GHz) rather than 50% (50% of 2GHz). If you are not getting the performance you expect, look at performance and cpufreq options in your operating system and your BIOS.
Restrict output to domains in the specified cpupool.
Specify to list or set pool-wide scheduler parameters.
Timeslice tells the scheduler how long to allow VMs to run before pre-empting. The default is 30ms. Valid ranges are 1ms to 1000ms. The length of the timeslice (in ms) must be higher than the length of the ratelimit (see below).
Ratelimit attempts to limit the number of schedules per second. It sets a minimum amount of time (in microseconds) a VM must run before we will allow a higher-priority VM to pre-empt it. The default value is 1000 microseconds (1ms). Valid range is 100 to 500000 (500ms). The ratelimit length must be lower than the timeslice length.
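For example, to give a hypothetical domain named myguest twice the default weight while capping it at half of one physical CPU (both values are illustrative):

xl sched-credit -d myguest -w 512 -c 50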
COMBINATION
The following is the effect of combining the above options:
Set or get credit2 scheduler parameters. The credit2 scheduler is a proportional fair share CPU scheduler built from the ground up to be work conserving on SMP hosts.
Each domain (including Domain0) is assigned a weight.
OPTIONS
Specify domain for which scheduler parameters are to be modified or retrieved. Mandatory for modifying scheduler parameters.
A domain with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended host. Legal weights range from 1 to 65535 and the default is 256.
Restrict output to domains in the specified cpupool.
Set or get rtds (Real Time Deferrable Server) scheduler parameters. This rt scheduler applies the Preemptive Global Earliest Deadline First real-time scheduling algorithm to schedule VCPUs in the system. Each VCPU has a dedicated period and budget. VCPUs in the same domain have the same period and budget. While scheduled, a VCPU burns its budget. A VCPU has its budget replenished at the beginning of each period; unused budget is discarded at the end of each period.
OPTIONS
Specify domain for which scheduler parameters are to be modified or retrieved. Mandatory for modifying scheduler parameters.
Specify vcpu for which scheduler parameters are to be modified or retrieved.
Period of time, in microseconds, over which to replenish the budget.
Amount of time, in microseconds, that the VCPU will be allowed to run every period.
Restrict output to domains in the specified cpupool.
EXAMPLE
1) Use -v all to see the budget and period of all the VCPUs of all the domains:
xl sched-rtds -v all
Cpupool Pool-0: sched=RTDS
Name ID VCPU Period Budget
Domain-0 0 0 10000 4000
vm1 1 0 300 150
vm1 1 1 400 200
vm1 1 2 10000 4000
vm1 1 3 1000 500
vm2 2 0 10000 4000
vm2 2 1 10000 4000
Without any arguments, it will output the default scheduling parameters for each domain:
xl sched-rtds
Cpupool Pool-0: sched=RTDS
Name ID Period Budget
Domain-0 0 10000 4000
vm1 1 10000 4000
vm2 2 10000 4000
2) Use, for instance, -d vm1 -v all to see the budget and period of all VCPUs of a specific domain (vm1):
xl sched-rtds -d vm1 -v all
Name ID VCPU Period Budget
vm1 1 0 300 150
vm1 1 1 400 200
vm1 1 2 10000 4000
vm1 1 3 1000 500
To see the parameters of a subset of the VCPUs of a domain, use:
xl sched-rtds -d vm1 -v 0 -v 3
Name ID VCPU Period Budget
vm1 1 0 300 150
vm1 1 3 1000 500
If no -v is specified, the default scheduling parameters for the domain are shown:
xl sched-rtds -d vm1
Name ID Period Budget
vm1 1 10000 4000
3) Users can set the budget and period of multiple VCPUs of a specific domain with only one command, e.g., "xl sched-rtds -d vm1 -v 0 -p 100 -b 50 -v 3 -p 300 -b 150".
To change the parameters of all the VCPUs of a domain, use -v all, e.g., "xl sched-rtds -d vm1 -v all -p 500 -b 250".
Xen can group the physical cpus of a server in cpu-pools. Each physical CPU is assigned to at most one cpu-pool. Domains are each restricted to a single cpu-pool. Scheduling does not cross cpu-pool boundaries, so each cpu-pool has its own scheduler. Physical cpus and domains can be moved from one cpu-pool to another only by an explicit command. Cpu-pools can be specified either by name or by id.
Create a cpu pool based on a config from a ConfigFile or command-line parameters. Variable settings from the ConfigFile may be altered by specifying new or additional assignments on the command line.
See the xlcpupool.cfg(5) manpage for more information.
OPTIONS
Use the given configuration file.
List CPU pools on the host. If -c is specified, xl prints a list of CPUs used by cpu-pool.
Deactivates a cpu pool. This is possible only if no domain is active in the cpu-pool.
Renames a cpu-pool to newname.
Adds one or more CPUs or NUMA nodes to cpu-pool. CPUs and NUMA nodes can be specified as single CPU/node IDs or as ranges.
For example:
(a) xl cpupool-cpu-add mypool 4
(b) xl cpupool-cpu-add mypool 1,5,10-16,^13
(c) xl cpupool-cpu-add mypool node:0,nodes:2-3,^10-12,8
means adding CPU 4 to mypool, in (a); adding CPUs 1,5,10,11,12,14,15 and 16, in (b); and adding all the CPUs of NUMA nodes 0, 2 and 3, plus CPU 8, but keeping out CPUs 10,11,12, in (c).
All the specified CPUs that can be added to the cpupool will be added to it. If some CPUs can't be (e.g., because they're already part of another cpupool), an error is reported for each of them.
Removes one or more CPUs or NUMA nodes from cpu-pool. CPUs and NUMA nodes can be specified as single CPU/node IDs or as ranges, using the exact same syntax as in cpupool-cpu-add above.
Moves a domain specified by domain-id or domain-name into a cpu-pool. Domain-0 can't be moved to another cpu-pool.
Splits up the machine into one cpu-pool per NUMA node.
Most virtual devices can be added and removed while guests are running, assuming that the necessary support exists in the guest. The effect on the guest OS is much the same as any hotplug event.
Create a new virtual block device. This will trigger a hotplug event for the guest.
Note that only PV block devices are supported by block-attach. Requests to attach emulated devices (e.g., vdev=hdc) will result in only the PV view being available to the guest.
OPTIONS
The domain id of the guest domain that the device will be attached to.
A disk specification in the same format used for the disk variable in the domain config file. See http://xenbits.xen.org/docs/unstable/misc/xl-disk-configuration.txt.
Detach a domain's virtual block device. devid may be the symbolic name or the numeric device id given to the device by domain 0. You will need to run xl block-list to determine that number.
Detaching the device requires the cooperation of the domain. If the domain fails to release the device (perhaps because the domain is hung or is still using the device), the detach will fail. The --force parameter will forcefully detach the device, but may cause IO errors in the domain.
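For example, to hot-plug a logical volume into a hypothetical guest named myguest as its xvdb disk, and detach it again later (the target path is illustrative):

xl block-attach myguest 'format=raw,vdev=xvdb,access=w,target=/dev/vg0/guest-data'
xl block-detach myguest xvdb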
List virtual block devices for a domain.
Insert a cdrom into a guest domain's existing virtual cd drive. The virtual drive must already exist but can be currently empty.
Only works with HVM domains.
OPTIONS
How the device should be presented to the guest domain; for example "hdc".
The target path in the backend domain (usually domain 0) to be exported; can be a block device, a file, etc. See target in docs/misc/xl-disk-configuration.txt.
Eject a cdrom from a guest's virtual cd drive. Only works with HVM domains.
OPTIONS
How the device should be presented to the guest domain; for example "hdc".
Creates a new network device in the domain specified by domain-id. network-device describes the device to attach, using the same format as the vif string in the domain config file. See xl.cfg and http://xenbits.xen.org/docs/unstable/misc/xl-network-configuration.html for more information.
Note that only attaching PV network interfaces is supported.
Removes the network device from the domain specified by domain-id. devid is the virtual interface device number within the domain (i.e. the 3 in vif22.3). Alternatively the mac address can be used to select the virtual interface to detach.
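For example, to hot-plug a PV network interface on the xenbr0 bridge into a hypothetical guest named myguest, and later detach it as virtual interface number 1 (the devid is illustrative):

xl network-attach myguest 'bridge=xenbr0'
xl network-detach myguest 1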
List virtual network interfaces for a domain.
List virtual channel interfaces for a domain.
Creates a new vtpm device in the domain specified by domain-id. vtpm-device describes the device to attach, using the same format as the vtpm string in the domain config file. See xl.cfg for more information.
Removes the vtpm device from the domain specified by domain-id. devid is the numeric device id given to the virtual trusted platform module device. You will need to run xl vtpm-list to determine that number. Alternatively the uuid of the vtpm can be used to select the virtual device to detach.
List virtual trusted platform modules for a domain.
List all the assignable PCI devices. These are devices in the system which are configured to be available for passthrough and are bound to a suitable PCI backend driver in domain 0 rather than a real driver.
Make the device at PCI Bus/Device/Function BDF assignable to guests. This will bind the device to the pciback driver. If it is already bound to a driver, it will first be unbound, and the original driver stored so that it can be re-bound to the same driver later if desired. If the device is already bound, it will return success.
CAUTION: This will make the device unusable by Domain 0 until it is returned with pci-assignable-remove. Care should therefore be taken not to do this on a device critical to domain 0's operation, such as storage controllers, network interfaces, or GPUs that are currently being used.
Make the device at PCI Bus/Device/Function BDF not assignable to guests. This will at least unbind the device from pciback. If the -r option is specified, it will also attempt to re-bind the device to its original driver, making it usable by Domain 0 again. If the device is not bound to pciback, it will return success.
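A sketch of the assignable-device lifecycle. The BDF below is illustrative, and a shell stub replaces the real xl binary, which would need root privileges on a Xen host:

```shell
# Stub in place of the real xl binary so this sketch runs anywhere.
xl() { echo "xl $*"; }

# Bind the device at 0000:03:10.0 to pciback, making it assignable.
xl pci-assignable-add 0000:03:10.0

# Confirm it now shows up as assignable.
xl pci-assignable-list

# Later, return it to dom0, re-binding its original driver with -r.
xl pci-assignable-remove -r 0000:03:10.0
```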
Hot-plug a new pass-through pci device to the specified domain. BDF is the PCI Bus/Device/Function of the physical device to pass-through.
Hot-unplug a previously assigned pci device from a domain. BDF is the PCI Bus/Device/Function of the physical device to be removed from the guest domain.
If -f is specified, xl will forcefully remove the device even without the guest domain's cooperation.
List pass-through pci devices for a domain.
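A sketch of hot-plugging a device into a guest and removing it again. The domain name and BDF are illustrative, and a shell stub stands in for the real xl binary:

```shell
# Stub for the real xl binary (the real commands need root + Xen).
xl() { echo "xl $*"; }

# Hot-plug the physical device at 0000:03:10.0 into guest1.
xl pci-attach guest1 0000:03:10.0

# List what the guest has, then hot-unplug; -f forces removal
# even without the guest's cooperation.
xl pci-list guest1
xl pci-detach -f guest1 0000:03:10.0
```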
Create a new USB controller in the domain specified by domain-id, usbctrl-device describes the device to attach, using form KEY=VALUE KEY=VALUE ...
where KEY=VALUE has the same meaning as the usbctrl description in the domain config file. See xl.cfg for more information.
Destroy a USB controller in the specified domain. devid is the device id of the USB controller.
Hot-plug a new pass-through USB device to the domain specified by domain-id. usbdev-device describes the device to attach, using the form KEY=VALUE KEY=VALUE ...
where KEY=VALUE has the same meaning as the usbdev description in the domain config file. See xl.cfg for more information.
Hot-unplug a previously assigned USB device from a domain. controller=devid and port=number identify the USB controller:port in the guest to which the USB device is attached.
List pass-through usb devices for a domain.
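A sketch of USB pass-through. The domain name, controller parameters, and host bus/address are illustrative, and a shell stub replaces the real xl binary:

```shell
# Stub for the real xl binary so this sketch runs anywhere.
xl() { echo "xl $*"; }

# Create a version-2 controller with 4 ports, then pass through the
# host device on bus 1, address 4 (KEY=VALUE pairs as in xl.cfg).
xl usbctrl-attach guest1 version=2 ports=4
xl usbdev-attach guest1 hostbus=1 hostaddr=4

# List the guest's USB devices, then unplug by controller and port.
xl usb-list guest1
xl usbdev-detach guest1 0 1
```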
List tmem pools. If -l is specified, also list tmem stats.
Freeze tmem pools.
Thaw tmem pools.
Change tmem settings.
OPTIONS
Weight (int)
Cap (int)
Compress (int)
De/authenticate shared tmem pool.
OPTIONS
Specify uuid (abcdef01-2345-6789-1234-567890abcdef)
0=auth,1=deauth
Get information about how much freeable memory (MB) is in use by tmem.
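A short sketch of the tmem subcommands above. The domain name and weight are illustrative, and a shell stub stands in for the real xl binary:

```shell
# Stub for the real xl binary so this sketch runs anywhere.
xl() { echo "xl $*"; }

# Inspect guest1's tmem pools with stats, raise its weight, and
# check how much freeable memory tmem currently holds.
xl tmem-list -l guest1
xl tmem-set guest1 weight=2
xl tmem-freeable
```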
FLASK is a security framework that defines a mandatory access control policy providing fine-grained controls over Xen domains, allowing the policy writer to define what interactions between domains, devices, and the hypervisor are permitted. Some examples of what you can do using XSM/FLASK:

- Prevent two domains from communicating via event channels or grants
- Control which domains can use device passthrough (and which devices)
- Restrict or audit operations performed by privileged domains
- Prevent a privileged domain from arbitrarily mapping pages from other domains
You can find more details on how to use FLASK and an example security policy here: http://xenbits.xen.org/docs/unstable/misc/xsm-flask.txt
Determine if the FLASK security module is loaded and enforcing its policy.
Enable or disable enforcing of the FLASK access controls. The default is permissive, but this can be changed to enforcing by specifying "flask=enforcing" or "flask=late" on the hypervisor's command line.
Load FLASK policy from the given policy file. The initial policy is provided to the hypervisor as a multiboot module; this command allows runtime updates to the policy. Loading new security policy will reset runtime changes to device labels.
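A sketch of the FLASK subcommands in sequence. The policy path is illustrative, and a shell stub replaces the real xl binary, which would need root on a FLASK-enabled host:

```shell
# Stub for the real xl binary so this sketch runs anywhere.
xl() { echo "xl $*"; }

# Check the current mode, switch FLASK to enforcing, and load an
# updated policy (the policy file path here is an assumption).
xl getenforce
xl setenforce 1
xl loadpolicy /boot/xenpolicy.example
```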
Intel Haswell and later server platforms offer shared resource monitoring and control technologies. The availability of these technologies and the hardware capabilities can be shown with psr-hwinfo.
See http://xenbits.xen.org/docs/unstable/misc/xl-psr.html for more information.
Show Platform Shared Resource (PSR) hardware information.
OPTIONS
Show Cache Monitoring Technology (CMT) hardware information.
Show Cache Allocation Technology (CAT) hardware information.
Intel Haswell and later server platforms offer monitoring capability in each logical processor to measure specific platform shared resource metrics, for example, L3 cache occupancy. In the Xen implementation, the monitoring granularity is domain level. To monitor a specific domain, simply attach the domain id to the monitoring service. When the domain no longer needs to be monitored, detach the domain id from the monitoring service.
Intel Broadwell and later server platforms also offer total/local memory bandwidth monitoring. Xen supports per-domain monitoring for these two additional monitoring types. Both memory bandwidth monitoring and L3 cache occupancy monitoring share the same underlying monitoring service. Once a domain is attached to the monitoring service, monitoring data can be shown for any of these monitoring types.
attach: Attach the platform shared resource monitoring service to a domain.
detach: Detach the platform shared resource monitoring service from a domain.
Show monitoring data for a certain domain or all domains. Currently supported monitor types are:

- "cache-occupancy": show the L3 cache occupancy (KB).
- "total-mem-bandwidth": show the total memory bandwidth (KB/s).
- "local-mem-bandwidth": show the local memory bandwidth (KB/s).
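A sketch of the monitoring cycle described above. The domain name is illustrative, and a shell stub stands in for the real xl binary:

```shell
# Stub for the real xl binary so this sketch runs anywhere.
xl() { echo "xl $*"; }

# Attach the monitoring service to guest1, read its L3 cache
# occupancy, then detach once monitoring is no longer needed.
xl psr-cmt-attach guest1
xl psr-cmt-show cache-occupancy guest1
xl psr-cmt-detach guest1
```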
Intel Broadwell and later server platforms offer capabilities to configure and make use of the Cache Allocation Technology (CAT) mechanisms, which enable more cache resources (i.e. L3 cache) to be made available for high priority applications. In the Xen implementation, CAT is used to control cache allocation on a per-VM basis. To enforce cache allocation for a specific domain, simply set capacity bitmasks (CBM) for the domain.
Intel Broadwell and later server platforms also offer Code/Data Prioritization (CDP) for cache allocations, which supports specifying separate code and data caches for applications. CDP is used on a per-VM basis in the Xen implementation. To specify a code or data CBM for the domain, the CDP feature must be enabled and a CBM type option must be specified when setting the CBM; the type options (code and data) are mutually exclusive.
Set the cache capacity bitmask (CBM) for a domain. For details on how to specify the cbm, please refer to http://xenbits.xen.org/docs/unstable/misc/xl-psr.html.
OPTIONS
Specify the socket to process, otherwise all sockets are processed.
Set code CBM when CDP is enabled.
Set data CBM when CDP is enabled.
Show CAT settings for a certain domain or all domains.
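A sketch of setting and inspecting CBMs. The domain name and mask values are illustrative (the code/data masks assume CDP is enabled), and a shell stub replaces the real xl binary:

```shell
# Stub for the real xl binary so this sketch runs anywhere.
xl() { echo "xl $*"; }

# Give guest1 the low four ways of the L3 cache on socket 0,
# then set separate code/data masks, and show the result.
xl psr-cat-cbm-set -s 0 guest1 0xf
xl psr-cat-cbm-set -c guest1 0xf0
xl psr-cat-cbm-set -d guest1 0x0f
xl psr-cat-show guest1
```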
xl is mostly command-line compatible with the xm utility used with the old Python xend toolstack. For compatibility, the following options are ignored:
We need better documentation for:
Transcendent Memory.
The following man pages:
xl.cfg(5), xlcpupool.cfg(5), xentop(1)
And the following documents on the xen.org website:
http://xenbits.xen.org/docs/unstable/misc/xl-network-configuration.html
http://xenbits.xen.org/docs/unstable/misc/xl-disk-configuration.txt
http://xenbits.xen.org/docs/unstable/misc/xsm-flask.txt
http://xenbits.xen.org/docs/unstable/misc/xl-psr.html
For systems that don't automatically bring CPUs online:
http://wiki.xen.org/wiki/Paravirt_Linux_CPU_Hotplug
Send bugs to xen-devel@lists.xen.org, see http://wiki.xen.org/xenwiki/ReportingBugs on how to send bug reports.