Thursday, November 20, 2008

Is it possible to resize the storage for a Xen guest in RHEL5? If yes, how?

This is a million dollar question. The answer can be "yes" or "no", and it depends heavily on how the guest backend is configured and how it has been partitioned inside the guest. Most people want to resize without rebooting the guest, but many of them make the wrong choice while configuring the guest backend initially and end up having to reboot the guest, or not being able to resize the partitions inside it at all.

A lot of things need to be taken into consideration when deciding how to resize guest storage. Before getting into the resizing itself, it is worth explaining the different types of backends that can be used for a guest.

1 - Block device. A block device in Dom0 can be used as the backend for the guest. It can be a raw partition, an LVM logical volume, a RAID device, etc. It can even be a whole unpartitioned disk (like "sda", which is not recommended).

2 - File based storage. A file of a specific size, created in Dom0 by writing zeros, can be used as the backend of the guest. A file based backend can be either of the two types below.

2.1 - Sparse file. With a sparse file, disk blocks are not pre-allocated when the file is created; they are allocated only when data is written to the disk from the guest. This is not recommended for production use due to performance issues.

2.2 - Fully allocated file. All blocks are allocated when the file based image is created. This gives better performance and is recommended for production use if using block devices is not an option.

I will explore the different types of storage configurations and how resizing can be done in each scenario. I prefer to abstain from explaining the nasty methods of resizing partitions using "parted" or "fdisk" inside the guest, so I will simply say that if LVM is not used inside the guest, resizing a partition is not possible; only new partitions can be created after extending the backend. Anyone who prefers resizing with parted or fdisk is free to do so.

Different Scenarios and How to resize.
----------------------------------------------

1 - LVM is used in both the host and the guest. The backend for the guest is an LV in Dom0, and it has been repartitioned inside the guest using LVM. There are two ways to resize it.

1.1 - Create a new LV in Dom0 and attach it to the guest as a second disk. Repartition the second disk in the guest, extend the Volume Group using the new disk, then extend the LV using the additional free space in the VG. This method does not require a reboot of the guest and is preferred by most Xen users. Example commands for both sub-scenarios are shown after 1.2 below.

1.2 - Extend the LV in Dom0 which is already attached to the guest. After the LV is extended in Dom0, the guest has to be rebooted to see the new size; there is currently no way to let the guest know that the size of the backend has changed without a reboot. Once the guest is rebooted, it will show the new space as free. Create a new partition using the free space, make it a PV, extend the Volume Group using that PV, then extend the LV using the free space in the VG. Most people don't like this method since it requires a guest reboot; many resize the LV in Dom0 expecting the guest to recognize the new space without a reboot, end up rebooting the guest anyway, and then blame the vendor.
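As a rough sketch of 1.1, assume the guest is named guest1, the Dom0 VG is VolGroup00, and inside the guest the VG is GuestVG with an ext3 filesystem on LogVol00 - all of these names are only examples and have to be adapted, and a backup should be taken before resizing anything. In Dom0, create the new LV and attach it to the guest (the attach step itself is covered in more detail towards the end of this post):

# lvcreate -L 5G -n guest1_extra VolGroup00
# xm block-attach guest1 phy:/dev/VolGroup00/guest1_extra /dev/xvdb w

Inside the guest, partition the new disk, turn the partition into a PV, then grow the VG, the LV and the filesystem:

# fdisk /dev/xvdb
(create a single partition xvdb1 of type 8e)
# pvcreate /dev/xvdb1
# vgextend GuestVG /dev/xvdb1
# lvextend -L +5G /dev/GuestVG/LogVol00
# resize2fs /dev/GuestVG/LogVol00

For 1.2, the only Dom0 step is extending the existing backend LV, for example:

# lvextend -L +5G /dev/VolGroup00/guest1_root

After the guest is rebooted, create a new partition in the free space with fdisk, then run the same pvcreate, vgextend, lvextend and resize2fs steps on it inside the guest.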

2 - A raw partition - Eg, sda1 - is used in the host as the backend of the guest, and LVM is used inside the guest.

2.1 - Attach a new partition to the guest - Eg, sdb1 - as a second disk. Repartition the second disk in the guest, extend the Volume Group using the new disk, then extend the LV using the additional free space in the VG. This method does not require a reboot of the guest and is preferred.

2.2 - The other method is to extend the raw partition in the host using parted or fdisk, reboot the guest to see the new size, and then extend the LVM inside the guest. This is not preferred and may be dangerous.
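Whichever attach method is used in the x.1 scenarios, it's worth confirming that the guest actually sees the new disk before running any LVM commands. From inside the guest (xvdb is only an example device name):

# cat /proc/partitions
# fdisk -l /dev/xvdb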

3 - Fully allocated file based images are used as the guest backend.

3.1 - Create a new fully allocated file based image in Dom0 and attach it to the guest as a second disk (see 3.2 for details on how to create it). Repartition the second disk in the guest, extend the Volume Group using the new disk, then extend the LV using the additional free space in the VG. This method does not require a reboot of the guest and is preferred.

3.2 - Extend the fully allocated file image in Dom0 which is already attached to the guest. It's recommended to shut down the guest while doing this.

A fully allocated 5 GB disk image, /vm/images/guest.img, is initially created with the below command while creating the guest.

# dd if=/dev/zero of=/vm/images/guest.img bs=1M count=5120

To extend it to 10 GB without losing data, either of the below commands can be executed. Appending with shell redirection is, I think, the safest:

# dd if=/dev/zero bs=1M count=5120 >> /vm/images/guest.img

OR

# dd if=/dev/zero of=/vm/images/guest.img bs=1M count=5120 oflag=append conv=notrunc

(The conv=notrunc is required here; without it, dd truncates the image before appending and the existing data is lost.)

Then create new partitions inside the guest using the new space and extend the existing LVs, or use the new partitions individually.
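Putting 3.2 together, a rough sequence could look like the below, assuming the guest is named guest1 and its configuration file is /etc/xen/guest1 (both names are only examples):

# xm shutdown guest1
(wait until "xm list" no longer shows guest1)
# dd if=/dev/zero bs=1M count=5120 >> /vm/images/guest.img
# xm create /etc/xen/guest1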

4 - Sparse File based images are used as the guest backend.

4.1 - Create a new sparse file image in Dom0 and attach it to the guest as a second disk (see 4.2 for more details on how to create it). Repartition the second disk in the guest, extend the Volume Group using the new disk, then extend the LV using the additional free space in the VG. This method does not require a reboot of the guest and is preferred.

4.2 - Extend the sparse file image in Dom0 which is already attached to the guest. It's recommended to shut down the guest while doing this.

A 5 GB sparse file image - Eg, /vm/images/guest.img - is initially created with the below command while creating the guest.

# dd if=/dev/zero of=/vm/images/guest.img bs=1M count=0 seek=5120

(With count=0, dd writes nothing and simply sets the file size to 5 GB, so the file stays sparse.)

To extend it to 10 GB without losing data, the below command can be executed; since count=0 it writes nothing and only grows the file to the new size, which is, I think, the safest method.

# dd if=/dev/zero of=/vm/images/guest.img bs=1M count=0 seek=10240

Then create new partitions inside the guest using the new space and extend the LVs which already exist, use the new partitions individually, or create new VGs and LVs.

Note: Sparse files are not recommended for production systems due to performance reasons. Always use fully allocated file based images.
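The difference between the apparent size of a sparse image and the space actually allocated for it can be seen by comparing ls and du (the path is only an example); ls reports the full size while du reports only the blocks that have actually been written:

# ls -lh /vm/images/guest.img
# du -h /vm/images/guest.img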

- In all the x.1 scenarios above, it's ok to use any of the possible options. Eg, in 4.1, a new LV can be created in Dom0 and attached to the guest, a new raw partition can be created and attached to the guest, or a new fully allocated file based image can be attached to the guest to extend the volumes inside it. This applies to all the x.1 scenarios explained above; I used only one option in each for convenience.

- The task "attach it to the guest as a second disk" can be achieved by either of the two methods below. This is applicable only to the x.1 scenarios above, not x.2.

1 - virt-manager -> Open -> View -> Details -> Hardware -> Add -> Storage Device -> Simple File/Normal Partition -> Device Type - Virtual Disk. This is the hassle free method.

2 - Edit the guest configuration file and add the second disk details to the configuration file. See examples from the sample configuration file; a hypothetical entry is shown below. The disk can also be attached live with the xm command.
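In the guest configuration file, the second disk is simply an extra entry in the disk list. The LV names, image path and device names below are only illustrative:

disk = [ 'phy:/dev/VolGroup00/guest1_root,xvda,w', 'phy:/dev/VolGroup00/LV1,xvdb,w' ]

For a file based image, the second entry would instead be something like 'tap:aio:/vm/images/image1.img,xvdb,w'. For a live attach, the basic syntax of the xm command is: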

# xm block-attach <domain> <backend-device> <frontend-device> <mode>

Eg, to attach a new LVM block device as xvdb to a guest named "guest1" with read-write access, the below command can be used.

# xm block-attach guest1 phy:/dev/VolGroup00/LV1 /dev/xvdb w

or

# virsh attach-disk guest1 --driver phy /dev/VolGroup00/LV1 xvdb

To attach a new file based image as xvdb to a guest named guest1 with read-write access, the below command can be used.

# xm block-attach guest1 tap:aio:/vm/images/image1.img /dev/xvdb w

Or

# virsh attach-disk guest1 --driver tap --subdriver aio /vm/images/image1.img xvdb

- Resizing of guest LVs can be achieved without a reboot if x.1 is followed, but a reboot of the guest is necessary if x.2 is followed, for the guest to see the new disk size.

- The online attaching of disks may not work as expected for fully virtualized guests which don't have PV drivers installed.

Monday, November 17, 2008

How to migrate guests using virsh commands?

Usually "xm migrate -l" is used to migrate a guest from one system to other system. There is no option in virt-manager to migrate a guest from one host to another. Libvirt based virsh command can be used to do this. The syntax of "virsh migrate" is a bit confusing to a lot of beginners. Details given below would help in solving those confusions.

- There are two systems - HostA and HostB. HostA is the source machine and HostB is the destination machine.
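For comparison, the equivalent xm command, run on HostA, would be something like the below (GuestName is a placeholder for the guest's name):

# xm migrate -l GuestName HostB

The virsh examples below achieve the same thing, but can also be run from a machine other than the source host.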

1 - If you are currently logged into HostA as root, the below command can be used to migrate a guest to HostB.

# virsh migrate --live GuestName xen+ssh://HostB

Replace GuestName with the name of the guest and HostB with its IP or FQDN. You would be asked for the root password of HostB. Upon entering the right password, the migration should complete successfully.

2 - If you are currently logged into a third system in the network which has the "virsh" command available, the below command can be used to migrate a guest from HostA to HostB.

# virsh --connect xen+ssh://HostA migrate --live GuestName xen+ssh://HostB

Replace HostA and HostB with their IP addresses or FQDNs. You would be asked for the root password of HostA first, then that of HostB. Upon entering the right passwords, the migration should complete successfully.
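To confirm where the guest ended up, the domain lists of both hosts can be checked remotely with virsh, for example:

# virsh --connect xen+ssh://HostB list

The migrated guest should show up in HostB's list and disappear from HostA's.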

3 - If you are currently logged into HostB and want to migrate a guest from HostA to HostB, can this be done using virsh? Try it out yourself.

The libvirt connection over ssh (xen+ssh) is the method used in the above examples. A libvirt remote TLS connection can also be established using certificates; since that needs a bit more detail to set up, it is a topic for another doc.