Welcome to End Point’s blog


Shrink XFS partition: Almost possible with LVM

If you happen to have reached this page because you're trying to shrink an XFS filesystem, let's get one thing straight right away: sorry, that's not possible.

But before you go away, you should know there's still hope, and I'd like to show you the little workaround I used to avoid reinstalling a Red Hat Enterprise Linux 7 or CentOS 7 VM, relying on on-the-fly XFS dump/restore and on LVM, both standard choices in a regular RHEL/CentOS 7 setup.

First of all, let's clarify the situation I found myself in. For various reasons I had a CentOS 7 VM, installed just a few days earlier to test new software we're evaluating, with everything already configured and working.

The VM itself is hosted on a dedicated server we manage ourselves, so I had a certain degree of freedom in what I could do without having to pay any additional fee. You may not be in the same situation, but you can probably attempt a similar solution for little money if you're using an hourly-billed VPS provider.

The problem was that, even if everything was working and configured, the virtual hard disk device attached to the machine was too big and on the wrong storage area of the virtualization hosting server.

There was also another minor glitch: the VM was using an old, IDE-based virtualization device driver instead of the newer VIRTIO one. Since I knew that CentOS 7, the virtualized OS, is capable of using VIRTIO-based devices, I took the chance to fix this as well.

Unfortunately, XFS cannot be shrunk at the moment (and for the foreseeable future), so what I needed to do was:

  1. add a new virtual device correctly using VIRTIO and stored in the right storage area of the virtualization host server
  2. migrate all the filesystems to the new virtual device
  3. set the VM OS to be working from the newly-built partition
  4. decommission the old virtual device

In my specific case this translated to connecting to the virtualization host server, creating a new LVM logical volume to host the virtual disk device for the VM, and then adding the new virtual device to the VM configuration. Unfortunately, in order to have the VM see the new virtual device, I had to shut it down.

While connected to the virtualization host server I also downloaded the latest ISO of SysRescueCD, a Linux distribution specialized in data rescue, and attached it to the VM. I'm using this distro specifically because it's one of the few that ship the XFS dump/restore tools on the live ISO.

Now the VM was ready to be booted with the SysRescueCD Live OS and then I could start working my way through all the needed fixes. If you're doing something similar, of course please make sure you have offsite backups and have double-checked that they're readable before doing anything else.

First of all, inspect your dmesg output to find the source virtual device and the new target virtual device. In my case the source was /dev/sda and the target was /dev/vda.

dmesg | less

Then create a partition on the new device (e.g. /dev/vda1) as Linux type for the /boot partition; this should be the same size as the source /boot partition (e.g. /dev/sda1). Dedicate all the remaining space to a new LVM-type partition (e.g. /dev/vda2).

fdisk /dev/vda
# [create /boot and LVM partitions]

You could also mount and copy the /boot files, or re-create them entirely if you need to change the /boot partition size. Since I kept /boot exactly the same, I could use ddrescue (a more verbose version of the classic Unix dd).

ddrescue /dev/sda1 /dev/vda1

The next step migrates the MBR and should just work, but in my case the boot phase kept failing afterwards, so I also needed to reinstall the bootloader via the CentOS 7 rescue system (not covered in this tutorial, but briefly mentioned near the end of the article).

ddrescue -i0 -s512 /dev/sda /dev/vda
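To see what that 512-byte copy actually moves, remember that the MBR is 446 bytes of boot code, a 64-byte partition table and a 2-byte signature. Here's a self-contained sketch of the same operation performed with plain dd on throwaway image files instead of real devices (the filenames are made up for the demo):

```shell
# A fake 1 MiB "source disk" and an all-zero "target disk":
dd if=/dev/urandom of=src.img bs=1M count=1 2>/dev/null
dd if=/dev/zero    of=dst.img bs=1M count=1 2>/dev/null

# Copy only the first sector, like ddrescue -i0 -s512 above:
dd if=src.img of=dst.img bs=512 count=1 conv=notrunc 2>/dev/null

# The first 512 bytes now match, while the rest of the target
# is untouched:
head -c 512 src.img > src.mbr
head -c 512 dst.img > dst.mbr
cmp src.mbr dst.mbr && echo "first sector copied"
```

The `conv=notrunc` flag matters: without it dd would truncate the target after the first sector, which on a real device would be harmless but on an image file destroys the rest of the data.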

Then create the target LVM volumes.

pvcreate /dev/vda2
vgcreate fixed_centos_VG /dev/vda2
lvcreate -L 1G -n swap fixed_centos_VG
lvcreate -l 100%FREE -n root fixed_centos_VG
vgchange -a y fixed_centos_VG

Create the target XFS filesystem.

mkfs.xfs -L root /dev/fixed_centos_VG/root

And then initialize the swap space.

mkswap /dev/fixed_centos_VG/swap

Next create the needed mountpoints and mount the old source and the new empty filesystems.

mkdir /mnt/disk_{wrong,fixed}
mount /dev/fixed_centos_VG/root /mnt/disk_fixed
vgchange -a y centos_VG
mount /dev/centos_VG/root /mnt/disk_wrong

Now here's the real XFS magic. We'll use xfsdump and xfsrestore to copy the filesystem content (files, directories, special files) without having to care about file permissions, types, extended ACLs or anything else. Plus, since this only moves the content of the filesystem, we don't need a target partition of the same size, and it won't take as long as copying the entire block device, because the process only has to walk the space actually in use.

xfsdump -J - /mnt/disk_wrong | xfsrestore -J - /mnt/disk_fixed

If you want a more verbose output, leave out the -J option. After the process is done, be sure to carefully verify that everything is in place in the new partition.

ls -lhtra /mnt/disk_fixed/
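Beyond eyeballing the listing, you can diff the two trees recursively while both are still mounted. Here is a minimal, self-contained illustration of the technique using throwaway directories in place of /mnt/disk_wrong and /mnt/disk_fixed (all names are made up for the demo):

```shell
# Build a tiny stand-in source tree with a subdirectory,
# a regular file, and a symlink:
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/etc"
echo "example content" > "$src/etc/example.conf"
ln -s etc/example.conf "$src/example-link"

# Stand-in for the xfsdump | xfsrestore copy:
cp -a "$src/." "$dst/"

# diff -r walks both trees and prints every difference;
# no output means the copy is complete.
diff -r "$src" "$dst" && echo "trees match"
```

Against the real mountpoints, a silent `diff -r /mnt/disk_wrong /mnt/disk_fixed` gives good confidence that nothing was left behind.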

Then unmount the disks and deactivate the LVM VGs.

umount /mnt/disk_{wrong,fixed}
vgchange -a n centos_VG
vgchange -a n fixed_centos_VG

At this point, in order to avoid changing anything inside the virtualized OS (fstab, grub and so on), let's remove the old VG and rename the new one to the name the old one had.

vgremove centos_VG
pvremove /dev/sda2
vgrename {fixed_,}centos_VG
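The `{fixed_,}centos_VG` argument in the last command isn't vgrename syntax; it's ordinary bash brace expansion, which expands to the old and new VG names in exactly the order vgrename expects:

```shell
# Brace expansion with an empty second alternative produces the
# old name followed by the new one:
echo vgrename {fixed_,}centos_VG
# → vgrename fixed_centos_VG centos_VG
```

The same trick was used earlier with `mkdir /mnt/disk_{wrong,fixed}`.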

You should now be able to shut down the VM again, detach the old disk, and start the VM, which will be using the new, smaller virtual device.

If the boot phase keeps failing, boot the CentOS installation media in rescue mode and, after chroot-ing into your installation, run grub2-install /dev/vda (targeting your new main device) to reinstall GRUB.
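One more thing worth checking if boot still fails (a commenter below ran into exactly this): if /boot was re-created rather than block-copied, its filesystem UUID changed, and /etc/fstab may still reference the old one. Here's a minimal sketch of the fix, run against a throwaway copy of fstab with made-up UUIDs; on the real system you would read the new UUID with `blkid /dev/vda1` and edit /etc/fstab itself:

```shell
# Both UUIDs are fabricated for the demo:
old_uuid="11111111-2222-3333-4444-555555555555"
new_uuid="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

# Throwaway fstab with a stale /boot entry:
fstab=$(mktemp)
printf 'UUID=%s /boot xfs defaults 0 0\n' "$old_uuid" > "$fstab"

# Swap the stale UUID for the current one:
sed -i "s/$old_uuid/$new_uuid/" "$fstab"
grep "$new_uuid" "$fstab"
```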

Only after everything is working as expected should you remove the old, unneeded device from the virtualization host server.


Rob said...

I needed a few more details to get this procedure to work. I used dd to copy the boot partition. I then tried to use grub2-install to install grub2 to the new boot partition, but it said the XFS file system did not leave room for grub2 at the beginning of the disk, so I used ext4 for the boot partition.

To copy the data over to the new logical volume, I used rsync:

rsync -axvAX /mnt/old /mnt/new

I also used the xfsdump command as an exercise. On my system the command doesn't have an underscore.

I could not chroot into the mounted root file system: it kept asking for /bin/zsh, but there is no /bin/zsh in the root file system. It turns out you need to provide chroot with a command to execute. With the root file system (the XFS logical volume) mounted on /mnt, I used /bin/bash as the shell:

chroot /mnt /bin/bash

I also needed to bind-mount some directories from the SysRescueCD environment into the root file system:

mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
mount --bind /proc /mnt/proc
mount --bind /run /mnt/run

These directories are used by the grub2-mkconfig command:

grub2-mkconfig -o /boot/grub2/grub.cfg

(I had mounted the boot partition to /mnt/boot after I mounted the root file system to /mnt.)

Then I installed grub2 to the beginning of the disk:

grub2-install /dev/sda

Then it still didn't work! Finally, after reading another forum, I discovered that the /etc/fstab file needs to be edited: it was still referencing the UUID of the original boot partition, since that had simply been copied over to the new logical volume. I changed it to the new boot partition's UUID and it finally worked!

It took me about a week to finally cobble together all the additional details. Using xfsdump or rsync works, and I thank Mr. Calo for putting together this tutorial. Hopefully the few extra details I provided might be of help to someone having trouble.

Emanuele 'Lele' Calò said...

Thank you for adding all those details Rob, I'm sure they'll help some other readers to overcome their difficulties :)

Sing-Cheong Chen said...

Found typo in your commands.

The "xfs_dump" should be "xfsdump" and "xfs_restore" should be "xfsrestore"

Emanuele 'Lele' Calò said...

Hello Sing-Cheong Chen and thanks for your comment.

The command name actually depends on the Linux distro and version you're using. I was also quite surprised when I found out about this :)

Thanks anyway for your comment!