Using an ISO image in a VM's cdrom drive is fairly easy to do, but because of the limited size of the Control Domain's (dom0) operating system partition it's difficult to download ISO images to /opt/xensource/packages/iso - and it isn't really recommended to put them there anyway. In this tutorial we'll create a CD repository using an additional hard drive on Dom0.
First we need to know the device name of the disk.
[root@cloud1 media]# cat /proc/partitions
major minor  #blocks  name
   8     0  976762584 sda
   8     1    4194304 sda1
   8     2    4194304 sda2
   8     3  968371393 sda3
   8    16  234431064 sdb
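The spare disk here is /dev/sdb. From that point the remaining steps might look like the sketch below - the filesystem type, mount point and SR name-label are my own assumptions, so adjust them to suit your setup.

```shell
# Sketch only: /dev/sdb is the spare disk found above; /mnt/iso_storage
# and the "Local ISOs" label are assumptions.
# WARNING: mkfs destroys everything on the device -- check it twice!
mkfs.ext3 /dev/sdb
mkdir -p /mnt/iso_storage
echo "/dev/sdb /mnt/iso_storage ext3 defaults 1 2" >> /etc/fstab
mount /mnt/iso_storage
# Register the directory as an ISO SR so its images show up as CDs for VMs
xe sr-create name-label="Local ISOs" type=iso content-type=iso \
    device-config:location=/mnt/iso_storage device-config:legacy_mode=true
```

Once the SR exists, any ISO copied into /mnt/iso_storage should appear in the VM's CD drive list.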
Using an ISO image in a VM's cdrom drive is fairly easy to do, but because of the limited size of the Control Domain's (dom0) operating system partition it's difficult to download ISO images to /opt/xensource/packages/iso - and it isn't really recommended to put them there anyway. In this tutorial we'll create a CD repository using a local Logical Volume.
First we need to know the name of the LVM Volume Group. This is derived from the Storage Repository's UUID. To get the UUID we'll use xe sr-list.
[root@cloud1 ~]# xe sr-list type=lvm
uuid ( RO)                : 36bf480a-5df9-4453-50f0-2bac4a86cb42
          name-label ( RW): localsr-cloud1
    name-description ( RW):
                host ( RO): cloud1.acs.edcc.edu
                type ( RO): lvm
        content-type ( RO): user
Using xe sr-list type=lvm shows only our local Storage Repository which has the UUID of 36bf480a-5df9-4453-50f0-2bac4a86cb42. We'll now use the vgs command to give us the names of all Volume Groups including VG_XenStorage-36bf480a-5df9-4453-50f0-2bac4a86cb42 which matches our SR UUID.
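Since the Volume Group name is just the SR UUID with a fixed prefix, you can also derive it directly without scanning the vgs output - a small sketch using the UUID from the sr-list output above:

```shell
# Build the Volume Group name from the SR UUID -- vgs should show a VG
# with exactly this name
SR_UUID="36bf480a-5df9-4453-50f0-2bac4a86cb42"   # from: xe sr-list type=lvm --minimal
VG_NAME="VG_XenStorage-${SR_UUID}"
echo "$VG_NAME"
```

On the host itself you could feed $VG_NAME straight to lvcreate when carving out space for the ISO repository.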
Using an ISO image in a VM's cdrom drive is fairly easy to do, but because of the limited size of the Control Domain's (dom0) operating system partition it's difficult to download ISO images to /opt/xensource/packages/iso - and it isn't really recommended to put them there anyway. In this tutorial we'll create a CD repository using an NFS share.
In our example we'll be using a share on the cloud0 host named /media/NFSISO. To set this up you'd log into cloud0 as root and add an export entry for /media/NFSISO to its /etc/exports file.
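A minimal export entry for this might look like the line below - the options are my assumption, kept deliberately loose for the tutorial:

```
/media/NFSISO *(ro,sync,no_root_squash)
```

After editing /etc/exports, run exportfs -ra (or restart the nfs service) on cloud0 so the change takes effect.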
I'd recommend that you secure your NFS share more tightly than I've done here, but for the purposes of this tutorial we'll go with it. First we need to make a directory that we can mount our NFS share on.
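A minimal sketch of those steps - the mount point name is my own invention, while cloud0:/media/NFSISO comes from the example above:

```shell
# Quick test that the share is reachable, then hand it to xe
mkdir -p /media/isomount
mount -t nfs cloud0:/media/NFSISO /media/isomount   # sanity check only
umount /media/isomount
# Create the ISO SR; xe manages the NFS mount itself from here on
xe sr-create name-label=NFSISO type=iso content-type=iso \
    device-config:location=cloud0:/media/NFSISO
```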
I'm using a Xen Virtual Server to provide my Linux students with their own machines with admin rights. This has prompted interest in Xen from a lot of people just starting out in virtualization. Following is a quick explanation of Xen and how to get a Virtual Machine up and running as fast as possible.
Xen is a hypervisor, meaning that it runs above the hardware but below any OS. Traditionally when you "virtualized" an OS you'd have a computer that you logged into, with virtualization software installed on it such as VMware Workstation or VirtualBox. With this software you'd start the Virtual Machine from its GUI and install the Guest OS via CDROM. In this case you have a Host Machine (the real physical machine) and a Guest Machine (the virtualized OS). With a hypervisor ALL operating systems are virtualized. This might seem a bit strange or impossible but is very powerful and extremely efficient. The side effect is that Xen can be very complex to set up. Let me explain the Xen boot process.
Xen Boot Process
- Machine runs code in Master Boot Record
- Bootloader loads the OS kernel
- Xen lodges itself in memory and loads the rest of the kernel in a Virtual Machine
- The user logs into the first Virtual Machine and starts, stops and restarts the other Virtual Machines from there
The name for the first Virtual Machine is Dom0 - it's the privileged Domain so it has direct access to the physical hardware. All subsequent Virtual Machines that are started are called DomU - unprivileged Domains. To manage a Xen Virtual Server you log into the Privileged Domain (Dom0) and use various commands to administer the Unprivileged Domains (DomUs).
Two modes of Virtualization
Virtualizers work in one of two modes - paravirtualization or hardware (full) virtualization. The difference is that a paravirtualized DomU OS knows it's being virtualized and has extensions to allow and assist this. Paravirtualized Operating Systems are very fast and efficient. However, there are times when you'll be virtualizing an OS that doesn't have these extensions, such as Windows. In this case you need a CPU with hardware virtualization support and must run Xen in HVM (Hardware Virtual Machine) mode.
Paravirtualization
- Runs on a lot of hardware - x86, x86-64, Itanium and PowerPC 970, with or without hardware virtualization support
- DomUs can be Linux, NetBSD and Solaris
- Very fast
Hardware (Full) Virtualization
- Requires Intel or AMD CPUs with virtualization support built in
- DomUs can be almost any unmodified OS, including Windows
- Not so fast
To get around the speed issues with Full Virtualization, paravirtualized drivers have been written for many Operating Systems, including Windows, covering disk access and network cards. This allows Full Virtualization to reach the speeds of paravirtualization in these two areas without requiring further modification to the Operating System. The Linux KVM hypervisor runs in Full Virtualization mode all the time and thus needs paravirtualized drivers.
1. First we need to add the YUM repository holding the updated Xen. You will need to be logged in as root to carry out these instructions.
wget http://www.gitco.de/linux/i386/centos/5/CentOS-GITCO.repo -O /etc/yum.repos.d/CentOS-GITCO.repo
2. Uninstall and reinstall the Virtualization group
yum groupremove Virtualization
yum clean all
yum groupinstall -y Virtualization
1. First we need to download the YUM repository file for the updated Xen. Then uninstall the old Virtualization group and reinstall it. This will upgrade the packages.
wget http://www.gitco.de/linux/x86_64/centos/5/CentOS-GITCO.repo -O /etc/yum.repos.d/gitco.repo
yum groupremove Virtualization
yum groupinstall -y Virtualization
Yum will probably want to upgrade some other packages along with the ones we've chosen.
Warning! If you get an error message from grubby this is bad!
Installing: kernel-xen ####################### [ 9/13]
grubby fatal error: unable to find a suitable template
This means that your grub.conf file couldn't be written to for whatever reason, so you won't be able to successfully reboot. If you get this message you need to edit your /boot/grub/grub.conf file and make the kernel lines match the kernel you installed.
Get your installed kernel version:
[ root@vs / ] rpm -q kernel-xen
Now edit your /boot/grub/grub.conf to match this
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/vgsys/lvroot
# initrd /initrd-version.img
title CentOS (2.6.18-128.4.1.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-128.4.1.el5
        module /vmlinuz-2.6.18-128.4.1.el5xen ro root=/dev/vgsys/lvroot rhgb quiet
        module /initrd-2.6.18-128.4.1.el5xen.img
4. Reboot - no really I mean it.
5. Try it out by using the xm dmesg command
\ \/ /___ _ __ |___ / |___ / / _ \
\ // _ \ '_ \ |_ \ |_ \| | | |
/ \ __/ | | | ___) | ___) | |_| |
/_/\_\___|_| |_| |____(_)____(_)___/
(XEN) Latest ChangeSet: unavailable
(XEN) Command line:
(XEN) Video information:
(XEN) VGA is text mode 80x25, font 8x16
(XEN) VBE/DDC methods: V2; EDID transfer time: 2 seconds
(XEN) Disc information:
(XEN) Found 1 MBR signatures
(XEN) Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
That's about all. If you have any questions drop a comment here.
Download xen-tools, install rpmstrap, then install xen-tools. If there's a newer version of xen-tools available, substitute that filename.
yum install -y rpmstrap
tar -xzvpf xen-tools-3.9.tar.gz
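The tarball is only extracted above; the install step itself is missing. Assuming the 3.x tarball layout (it unpacks into a versioned directory and ships a Makefile), the rest would look like:

```shell
# Assumption: xen-tools 3.9 unpacks to ./xen-tools-3.9 and installs via make
cd xen-tools-3.9
make install
```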
I've not used xen-tools that much but I wanted to put together a tutorial anyway. Let me know how it goes.
I've been working on ways of getting information in front of XCP/XenServer admins' eyes faster than the standard xe command-line tool provides. This tool - lshosts - is a rewrite of lshostvms.sh, which showed each host and how many running VMs were on it, something I often want to know. While rewriting it to include some of the better structure of my newer tools I started adding features. Now it displays either the host's name-label or UUID, the number of running VMs, the CPU type, CPU cores, CPU speed, total memory, free memory and network backend type.
As an added bonus I've added a -c option so the output is in CSV format. All future commands should have this option and I'll be retrofitting older commands when I get time.
Download it from the XCP Downloads section. http://grantmcwilliams.com/tech/virtualization/downloads/category/4-xen-cloud-platform
Sometimes I get a stuck Virtual Machine that just won't go down and it's usually due to a lack of memory in the VM. When I issue a shutdown command from within the VM it starts the shutdown process but hangs part way through. Executing xe vm-shutdown --force uuid=<insert UUID here> does nothing but lock up the terminal. If this happens to you follow the steps below to forcefully shut the VM down.
- xe task-list (find the pending task's UUID)
- xe task-cancel uuid=<task UUID>
- xe vm-list (note the VM's UUID)
- list_domains (find the VM's UUID and note the domain id)
- /opt/xensource/debug/destroy_domain -domid XX (where XX is the domain id from step 4)
- xe vm-shutdown --force uuid=<UUID from step 3>
The steps above in script form (if you trust me). You'll need to set the name variable to your VM's name-label by hand first; the rest can be copied and pasted.
TASK=$(xe task-list status=pending --minimal)
xe task-cancel uuid="$TASK"
VMUUID=$(xe vm-list name-label="$name" --minimal)
DOMID=$(xe vm-list uuid="$VMUUID" params=dom-id --minimal)
/opt/xensource/debug/destroy_domain -domid "$DOMID"
xe vm-shutdown --force uuid="$VMUUID"
Scenario: In the Dom0 (Host) you have a file that you export to the DomU (Guest) and it appears as an entire hard drive and you want to make it larger.
Example- Dom0: /srv/xen/diskimage.img -> DomU: /dev/xvda
If you're using diskimages for your DomU drives you may need to increase their size if any of the DomU partitions become full.
Resize the Xen Diskimage in Dom0
1. Create a backup of the diskimage from Dom0
2. Shutdown the DomU
3. Add extra space to the diskimage using dd. This will add 1GB to the DomU image. Adjust count= depending on how much you want to add. If you want a sparse file instead, use seek= to define the entire new disk size (the image will be extended to exactly that size).
dd if=/dev/zero bs=1M count=1024 >> ./diskimage.img
or if you want a sparse file
dd if=/dev/zero of=./diskimage.img bs=1 count=0 seek=1G
4. Boot the domU
Your disk should now be larger. You will need to use traditional tools inside the DomU to make the partitions and filesystems larger.
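If you want to convince yourself of what the two dd invocations do before pointing them at a real image, here's a safe demonstration on a throwaway file (GNU dd and stat assumed; it never touches a real disk image):

```shell
# Start with a 4 MiB "disk", append 2 MiB of real zeros, then
# sparse-extend the file to 8 MiB total
tmp=$(mktemp -d)
img="$tmp/disk.img"
dd if=/dev/zero of="$img" bs=1M count=4 2>/dev/null         # 4 MiB image
dd if=/dev/zero bs=1M count=2 2>/dev/null >> "$img"         # append 2 MiB
size1=$(stat -c %s "$img")                                  # 6 MiB = 6291456 bytes
dd if=/dev/zero of="$img" bs=1 count=0 seek=8M 2>/dev/null  # extend to 8 MiB total
size2=$(stat -c %s "$img")                                  # 8 MiB = 8388608 bytes
echo "$size1 $size2"
rm -rf "$tmp"
```

Note the difference: count= appends that much data, while seek= with count=0 sets the file's total size without writing any blocks.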
Following are examples for Partitions and LVM.
Expanding DomU Partitions from within DomU
In this example we're using /dev/xvda as the example DomU device name, change this depending on your setup. Note this tutorial only works for resizing the last partition on the diskimage drive.
1. Start the DomU and log in as root
2. Start fdisk /dev/xvda
3. Delete the last partition and recreate it with the start cylinder being identical as before and the ending cylinder being the default (end of disk)
4. Exit fdisk
5. You may have to reboot the DomU before going on.
6. Resize the filesystem on the partition - resize2fs /dev/xvda1
That's really it! You can only hot-resize if the filesystem is getting larger. If you need to shrink it then you'll have to take the filesystem offline first. Isn't this easier than dealing with partitions that are too small?
If the partition you want to resize is in the middle of the DomU drive you're in a bit of a pickle. For example if you want to resize / you have problems.
- /boot - /dev/xvda1
- / - /dev/xvda2
- /var - /dev/xvda3
This is the primary reason for using LVM. The solution to this problem isn't very elegant. You basically need to make another disk image and attach it to the DomU in exactly the same manner as you attached /dev/xvda. The new drive should appear as /dev/xvdb if that's the way we entered it in the DomU config. Once that's done you need to restart the DomU, then fdisk and format the new drive. Once formatted you can mount it and copy all of /var over, change /etc/fstab to map /var to /dev/xvdb1 and reboot the DomU again. Once rebooted you can delete /dev/xvda3 and resize /dev/xvda2.
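From inside the DomU, the /var migration described above might look something like this - the device names, filesystem type and mount point are all assumptions drawn from the example:

```shell
# Prepare the new disk (assumes /dev/xvdb1 was already created with fdisk)
mkfs.ext3 /dev/xvdb1
mkdir -p /mnt/newvar
mount /dev/xvdb1 /mnt/newvar
# Copy /var preserving permissions, ownership and timestamps
cp -a /var/. /mnt/newvar/
# Map /var to the new device on the next boot
echo "/dev/xvdb1 /var ext3 defaults 1 2" >> /etc/fstab
reboot
```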
This process is really no different than if you had a real server but you don't have to install a physical hard drive. I think this shows why LVM is such an improvement over physical partitions.