On occasion you may want to access a VM's Virtual Disk Image (VDI) from the Control Domain (or another VM). A common example is recovering files from a VM disk that will no longer boot due to corruption. An XCP local Storage Repository is created from an LVM Volume Group, and each Virtual Disk Image is stored as one Logical Volume. However, mounting the Logical Volume directly isn't possible, because there's a VHD (Virtual Hard Disk) container inside the Logical Volume, and inside the VHD are the real VM disk's partitions. The hierarchy goes something like this: Control Domain Volume Group -> Control Domain Logical Volume -> VHD container -> VM disk -> VM disk Physical Volume -> VM disk Volume Group -> VM disk Logical Volume -> VM disk filesystem (the last three LVM layers only exist if the guest itself uses LVM). That's quite a few layers, but mounting the VM disk's filesystem IS possible. Here's how.

Since the Control Domain is really a VM itself (a privileged VM), we can create a Virtual Block Device (VBD) that attaches the VM's Virtual Disk Image to the Control Domain. From there we use kpartx to map the disk's partitions to Control Domain device nodes.
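At a glance, the whole flow looks like the sketch below. It's a minimal outline assuming the xe CLI on an XCP host; VDI_UUID, <sr-uuid> and /media/recovery are placeholders, not values from this host. The detailed steps with real output follow.

# Minimal sketch of the whole flow -- assumes the xe CLI on an XCP host.
# VDI_UUID, <sr-uuid> and /media/recovery are placeholders.
CONTROL_UUID=$(xe vm-list is-control-domain=true --minimal)
VBD_UUID=$(xe vbd-create device=autodetect vm-uuid=$CONTROL_UUID vdi-uuid=$VDI_UUID)
xe vbd-plug uuid=$VBD_UUID                      # creates the backend device node
kpartx -a /dev/sm/backend/<sr-uuid>/$VDI_UUID   # map the VDI's partitions
mount /dev/mapper/${VDI_UUID}p1 /media/recovery # mount the filesystem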


1. Finding the VDI UUID

[root@testcloud1 ~]# xe vbd-list
uuid ( RO)             : b68a332b-4155-f1c4-b224-18fb465dc8e4
          vm-uuid ( RO): fec94868-0449-3616-39b9-08c3b27dab70
    vm-name-label ( RO): Fedora17
         vdi-uuid ( RO): 199619f0-e483-4618-bbd4-bdc9c524bde1
            empty ( RO): false
           device ( RO): 
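On a host with many VMs, the full vbd-list output gets long. The xe list commands accept parameter filters and a params selector, so something like this (assuming the VM's name label is Fedora17) narrows it to just the value we need:

xe vbd-list vm-name-label=Fedora17 params=vdi-uuid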


2. Finding the Control Domain UUID

[root@testcloud1 ~]# xe vm-list is-control-domain=true
uuid ( RO)           : 2dfc3f33-4afa-47dd-8af6-21877326f8e4
     name-label ( RW): Control domain on host: testcloud1
    power-state ( RO): running
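Since we only need the UUID itself, the --minimal flag (which makes xe print bare values) lets us capture it straight into a shell variable:

CONTROL_UUID=$(xe vm-list is-control-domain=true --minimal)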


3. Creating a Virtual Block Device for the VDI on our Control Domain. The command returns the new VBD's UUID.

[root@testcloud1 ~]# xe vbd-create device=autodetect vm-uuid=2dfc3f33-4afa-47dd-8af6-21877326f8e4   vdi-uuid=199619f0-e483-4618-bbd4-bdc9c524bde1
11740055-f8d4-3d84-1f90-fb1f4b646fd6


4. Plugging in the Virtual Block Device gives us a /dev/sm/backend/<sr-uuid>/<vdi-uuid> device, which fdisk can list.

[root@testcloud1 ~]# xe vbd-plug uuid=11740055-f8d4-3d84-1f90-fb1f4b646fd6 
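vbd-plug prints nothing on success. If you'd rather confirm which device the VBD was given than hunt through /dev, you can ask xe directly (the uuid below is the VBD UUID returned in step 3):

xe vbd-param-get uuid=11740055-f8d4-3d84-1f90-fb1f4b646fd6 param-name=device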


5. Showing that the VDI now has a VBD on the Control Domain AND on the Fedora17 VM

[root@testcloud1 ~]# xe vbd-list 
uuid ( RO)             : b68a332b-4155-f1c4-b224-18fb465dc8e4
          vm-uuid ( RO): fec94868-0449-3616-39b9-08c3b27dab70
    vm-name-label ( RO): Fedora17
         vdi-uuid ( RO): 199619f0-e483-4618-bbd4-bdc9c524bde1
            empty ( RO): false
           device ( RO): 


uuid ( RO)             : 11740055-f8d4-3d84-1f90-fb1f4b646fd6
          vm-uuid ( RO): 2dfc3f33-4afa-47dd-8af6-21877326f8e4
    vm-name-label ( RO): Control domain on host: testcloud1
         vdi-uuid ( RO): 199619f0-e483-4618-bbd4-bdc9c524bde1
            empty ( RO): false
           device ( RO): sm/backend/ec2eb4df-040c-2a75-1c9f-69b953ac9e8d/199619f0-e483-4618-bbd4-bdc9c524bde1


6. Showing the Device Node using ls and fdisk

[root@testcloud1 ~]# ls -l /dev/sm/backend/ec2eb4df-040c-2a75-1c9f-69b953ac9e8d/199619f0-e483-4618-bbd4-bdc9c524bde1
brw------- 1 root root 253, 0 Jan 12 00:52 /dev/sm/backend/ec2eb4df-040c-2a75-1c9f-69b953ac9e8d/199619f0-e483-4618-bbd4-bdc9c524bde1


[root@testcloud1 ~]# fdisk -l /dev/sm/backend/ec2eb4df-040c-2a75-1c9f-69b953ac9e8d/199619f0-e483-4618-bbd4-bdc9c524bde1

Disk /dev/sm/backend/ec2eb4df-040c-2a75-1c9f-69b953ac9e8d/199619f0-e483-4618-bbd4-bdc9c524bde1: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

                                                                                     Device Boot      Start         End      Blocks   Id  System
/dev/sm/backend/ec2eb4df-040c-2a75-1c9f-69b953ac9e8d/199619f0-e483-4618-bbd4-bd  *           1         980     7863296   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sm/backend/ec2eb4df-040c-2a75-1c9f-69b953ac9e8d/199619f0-e483-4618-bbd4-bd            980        1045      524288   82  Linux swap / Solaris


7. Using kpartx to create /dev/mapper entries for each partition

[root@testcloud1 ~]# kpartx -a /dev/sm/backend/ec2eb4df-040c-2a75-1c9f-69b953ac9e8d/199619f0-e483-4618-bbd4-bdc9c524bde1


8. kpartx creates /dev/mapper nodes

[root@testcloud1 ~]# ls /dev/mapper/
199619f0-e483-4618-bbd4-bdc9c524bde1p1
199619f0-e483-4618-bbd4-bdc9c524bde1p2
control
VG_XenStorage--ec2eb4df--040c--2a75--1c9f--69b953ac9e8d-MGT
VG_XenStorage--ec2eb4df--040c--2a75--1c9f--69b953ac9e8d-VHD--199619f0--e483--4618--bbd4--bdc9c524bde1
VG_XenStorage--ec2eb4df--040c--2a75--1c9f--69b953ac9e8d-VHD--6c663b1b--d9c4--45bd--9071--3f6b4668d164
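This Fedora17 guest keeps its root filesystem directly on partition 1, so it can be mounted as-is in step 9. If the guest's root filesystem lived on LVM instead (the extra Physical Volume/Volume Group/Logical Volume layers from the hierarchy in the introduction), you would activate the guest's Volume Group first. A sketch using the standard LVM tools, with GuestVG as a hypothetical Volume Group name:

# Only needed when the guest itself uses LVM; GuestVG is a hypothetical VG name.
vgscan               # rescan now that kpartx has exposed the guest's Physical Volume
vgchange -ay GuestVG # activate the guest's Volume Group
mount /dev/GuestVG/root /media/Fedora17-rootfs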


9. Mounting the root partition (partition 2 is swap, so partition 1 is the only one with a mountable filesystem)

[root@testcloud1 ~]# mount /dev/mapper/199619f0-e483-4618-bbd4-bdc9c524bde1p1 /media/Fedora17-rootfs/


[root@testcloud1 ~]# ls /media/Fedora17-rootfs/
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
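Since the whole point may be rescuing a corrupted disk, consider mounting read-only so nothing on the damaged filesystem gets modified while you copy data off:

mount -o ro /dev/mapper/199619f0-e483-4618-bbd4-bdc9c524bde1p1 /media/Fedora17-rootfs/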


Notes: Be careful when mounting VDIs from VMs on the Control Domain. Best practice is to make sure the VDI is never accessible from the Control Domain AND the VM at the same time: with the disk attached in two places, two different operating systems can write to it simultaneously, which is a great way to corrupt it.
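When you're finished, tear everything down in reverse order before starting the VM again. Roughly (using the device path from step 4 and the VBD UUID from step 3):

umount /media/Fedora17-rootfs
kpartx -d /dev/sm/backend/ec2eb4df-040c-2a75-1c9f-69b953ac9e8d/199619f0-e483-4618-bbd4-bdc9c524bde1
xe vbd-unplug uuid=11740055-f8d4-3d84-1f90-fb1f4b646fd6
xe vbd-destroy uuid=11740055-f8d4-3d84-1f90-fb1f4b646fd6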