http://www.datadisk.co.uk/html_docs/redhat/rh_lvm.htm
http://www.voleg.info/linux-mirror-system-disk.html
https://www.thegeekdiary.com/centos-rhel-7-how-to-create-and-remove-the-lvm-mirrors-using-lvconvert/
For RHEL 9 and higher
After booting from a clone, the server starts in single-user mode and no PV or VG is visible.
You have to update the file /etc/lvm/devices/system.devices to match the disk WWN:
[root@rhtest ~]# multipath -ll
mpatha (36005076xxxxxxxxxxx000000000017b9) dm-0 IBM,2145
size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
[root@rhtest ~]# cat /etc/lvm/devices/system.devices
# LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 65293 at Wed May 21 10:23:35 2025
HOSTNAME=rhtest
VERSION=1.1.10
IDTYPE=mpath_uuid IDNAME=part3-mpath-36005076xxxxxxxxxxx00000000001616 DEVNAME=/dev/mapper/mpatha3 PVID=JzZ6Nf4VUZFbBcNle1TX9miPydVISnYT
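If the old entries no longer match any visible disk, one way to rebuild the file from scratch is to move it aside (LVM falls back to scanning all devices when system.devices is absent) and let vgimportdevices re-create it; a sketch:
# mv /etc/lvm/devices/system.devices /etc/lvm/devices/system.devices.bak
# vgimportdevices -a     # scans all accessible VGs and rebuilds system.devices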
Also update the multipath bindings file if you want to change the alias order:
[root@rhtest ~]# cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpatha 36005076xxxxxxxxxxx00000000001616
mpathb 36005076xxxxxxxxxxx000000000017b9
After updating these files, add the devices to the LVM devices file.
For example: lvmdevices --adddev /dev/mapper/mpathc
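A few related lvmdevices calls that can help here (a sketch; the mpatha3 entry is just the example from above):
# lvmdevices                                # list the entries currently in system.devices
# lvmdevices --deldev /dev/mapper/mpatha3   # drop a stale entry
# lvmdevices --adddev /dev/mapper/mpathc    # add the new device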
Now reboot; the PVs, VGs, and filesystems should come back normally.
Here, as the "Not using device" warnings show, the cloned disks carry duplicate PV IDs, so the VGs cannot be imported as-is:
[root@rhtest ~]# pvscan
  WARNING: Not using device /dev/sdc1 for PV GuRFbq-coDt-Bg6l-lraR-JYRZ-zbQx-8mknGe.
  WARNING: Not using device /dev/sdd3 for PV m8MZp2-r66w-lR11-ilJ7-bjPS-zopv-BFGFuc.
  WARNING: PV GuRFbq-coDt-Bg6l-lraR-JYRZ-zbQx-8mknGe prefers device /dev/sdb1 because device is used by LV.
  WARNING: PV m8MZp2-r66w-lR11-ilJ7-bjPS-zopv-BFGFuc prefers device /dev/sda3 because device is used by LV.
  PV /dev/sdb1   VG rhel_redhat-data   lvm2 [<30.00 GiB / 1020.00 MiB free]
  PV /dev/sda3   VG rhel_redhattest1   lvm2 [22.41 GiB / 0    free]
  Total: 2 [<52.41 GiB] / in use: 2 [<52.41 GiB] / in no VG: 0 [0 ]
To import the VGs, you have to rename them during the import, like this:
[root@rhtest ~]# vgimportclone -n vg01 /dev/sdc1
[root@rhtest ~]# vgimportclone -n vg02 /dev/sdd3
[root@rhtest ~]# vgs
  VG               #PV #LV #SN Attr   VSize   VFree
  rhel_redhat-data   1   1   0 wz--n- <30.00g 1020.00m
  rhel_redhattest1   1   2   0 wz--n-  22.41g        0
  vg01               1   1   0 wz--n- <30.00g 1020.00m
  vg02               1   2   0 wz--n-  22.41g        0
[root@rhtest ~]# lvs
  LV               VG               Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  rhel_redhat-data rhel_redhat-data -wi-ao----  29.00g
  root             rhel_redhattest1 -wi-ao---- <20.01g
  swap             rhel_redhattest1 -wi-ao----   2.40g
  rhel_redhat-data vg01             -wi-------  29.00g
  root             vg02             -wi------- <20.01g
  swap             vg02             -wi-------   2.40g
Now you're able to mount the filesystems; first activate the two new VGs:
[root@rhtest ~]# vgchange -ay vg01
[root@rhtest ~]# vgchange -ay vg02
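To verify the activation before mounting, something like this should do (lv_active is a standard lvs report field):
# lvs -o lv_name,vg_name,lv_active vg01 vg02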
Check the filesystem types:
[root@rhtest ~]# lsblk -o FSTYPE,MOUNTPOINT,NAME /dev/sdd
FSTYPE      MOUNTPOINT NAME
                       sdd
vfat                   ├─sdd1
xfs                    ├─sdd2
LVM2_member            └─sdd3
xfs         /            ├─rhel_redhattest1-root
swap        [SWAP]       └─rhel_redhattest1-swap
Check the filesystems, using xfs_repair for xfs and fsck for ext4:
[root@rhtest ~]# xfs_repair -v /dev/mapper/vg01-rhel_redhat--data
Phase 1 - find and verify superblock...
        - block cache size set to 369264 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 25428 tail block 25424
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
If the filesystem cannot be mounted, I've first done an: xfs_repair -L <LV> (this destroys the log, as warned above).
Mount the filesystem. A cloned XFS filesystem keeps the same UUID as its origin, so a plain mount fails and the nouuid option is needed:
[root@rhtest ~]# mount -t xfs /dev/mapper/vg02-root /mnt3
mount: /mnt3: wrong fs type, bad option, bad superblock on /dev/mapper/vg02-root, missing codepage or helper program, or other error.
[root@rhtest ~]# mount -o nouuid /dev/mapper/vg02-root /mnt3
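If you want to keep the clone mounted permanently without the nouuid workaround, you can give it a new XFS UUID; a sketch, to be run on the cleanly unmounted filesystem:
# umount /mnt3
# xfs_admin -U generate /dev/mapper/vg02-root   # writes a fresh random UUID
# mount /dev/mapper/vg02-root /mnt3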
Create a new disk for Linux, rescan, and create a partition of type 8e00 (Linux LVM) using gdisk:
[root@lnxa081 centos_75]# rescan-scsi-bus.sh -a
[root@lnxa081 centos_75]# gdisk /dev/mapper/mpathb
GPT fdisk (gdisk) version 0.8.6

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-83886046, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-83886046, default = 83886046) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Changed type of partition to 'Linux LVM'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/mapper/mpathb.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

[root@lnxa081 centos_75]# gdisk -l /dev/mapper/mpathb
GPT fdisk (gdisk) version 0.8.6

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/mapper/mpathb: 83886080 sectors, 40.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 6E544892-1B5E-40AA-8551-00D674313F21
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 83886046
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048        83886046   40.0 GiB    8E00  Linux LVM
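The same partitioning can be scripted non-interactively with sgdisk (a sketch; 0 for the start and end sectors means "use the defaults"):
# sgdisk -n 1:0:0 -t 1:8E00 /dev/mapper/mpathb   # partition 1, whole disk, type Linux LVM
# partprobe -s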
Create the PV, VG, LV, and filesystem:
[root@lnxa081 centos_75]# ls /dev/mapper/mpathb*
/dev/mapper/mpathb
[root@lnxa081 centos_75]# partprobe -s
[root@lnxa081 centos_75]# ls /dev/mapper/mpathb*
/dev/mapper/mpathb  /dev/mapper/mpathb1
[root@lnxa081 centos_75]# pvcreate /dev/mapper/mpathb1
  Physical volume "/dev/mapper/mpathb1" successfully created.
[root@lnxa081 centos_75]# pvs
  PV                  VG     Fmt  Attr PSize   PFree
  /dev/mapper/mpathb1 datavg lvm2 a--  <40.00g <40.00g
As an aside, pvs --segments shows how LV segments map onto PVs (example from another system):
[root@lnxa081 ~]# pvs --segments -o +lv_name,lv_size
  PV                   VG             Fmt  Attr PSize PFree  Start   SSize   LV              LSize
  /dev/mapper/mpathwp1 vg_container01 lvm2 a--  5.00t 5.00t       0 1310719                  0
  /dev/mapper/mpathxp1 vg_container01 lvm2 a--  5.00t 4.00t       0  262146 lv_container001 11.00t
  /dev/mapper/mpathxp1 vg_container01 lvm2 a--  5.00t 4.00t  262146 1048573                  0
  /dev/mapper/mpathyp1 vg_container01 lvm2 a--  5.00t     0       0 1310719 lv_container001 11.00t
  /dev/mapper/mpathzp1 vg_container01 lvm2 a--  5.00t     0       0 1310719 lv_container001 11.00t
[root@lnxa081 centos_75]# vgcreate datavg /dev/mapper/mpathb1
  Volume group "datavg" successfully created
[root@lnxa081 centos_75]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  datavg   1   0   0 wz--n- <40.00g <40.00g
[root@lnxa081 centos_75]# lvcreate -n postgres1lv -L20G datavg
  Logical volume "postgres1lv" created.
[root@lnxa081 centos_75]# lvs
  LV          VG     Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  postgres1lv datavg -wi-a----- 20.00g
The most commonly used filesystem types are xfs and ext4, with btrfs coming up.
[root@lnxa081 centos_75]# mkfs.xfs /dev/mapper/datavg-postgres1lv
meta-data=/dev/mapper/datavg-postgres1lv isize=512    agcount=16, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@lnxa081 centos_75]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri May 18 07:49:08 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=fe7f5254-1302-4bd5-8e3e-1a6046ce2943 /          btrfs subvol=root 0 0
UUID=39651e3e-f8a3-4e46-ab7f-6b3ea7bb5991 /boot      xfs   defaults    0 0
UUID=fe7f5254-1302-4bd5-8e3e-1a6046ce2943 /home      btrfs subvol=home 0 0
UUID=58d659e7-72ff-4434-8803-7e4a5df78b91 swap       swap  defaults    0 0
/dev/mapper/datavg-postgres1lv            /postgres1 xfs   defaults    0 0
[root@lnxa081 centos_75]# mkdir /postgres1
[root@lnxa081 centos_75]# mount /postgres1
[root@lnxa081 centos_75]# df -h | grep post
/dev/mapper/datavg-postgres1lv   20G   33M   20G   1% /postgres1
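Before relying on the new fstab entry, it may be worth validating it; on newer distributions (util-linux 2.29 and later), findmnt can check the whole file. A quick sanity-check sketch:
# findmnt --verify                         # parse /etc/fstab and flag errors
# umount /postgres1 && mount /postgres1    # re-mount via the fstab entry to prove it works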
Extend an XFS filesystem using xfs_growfs
In one command:
[root@lnxa081 centos_75]# lvextend -L +1G --resizefs /dev/mapper/datavg-postgres1lv
Or in two commands:
[root@lnxa081 centos_75]# lvextend -L +1G /dev/mapper/datavg-postgres1lv
  Size of logical volume datavg/postgres1lv changed from 20.00 GiB (5120 extents) to 21.00 GiB (5376 extents).
  Logical volume datavg/postgres1lv successfully resized.
[root@lnxa081 centos_75]# xfs_growfs -d /dev/mapper/datavg-postgres1lv
meta-data=/dev/mapper/datavg-postgres1lv isize=512    agcount=16, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 5242880 to 5505024
[root@lnxa081 centos_75]# df -h | grep post
/dev/mapper/datavg-postgres1lv   20G   33M   20G   1% /postgres1
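For an ext4 LV, the second step would use resize2fs instead of xfs_growfs; a sketch with a hypothetical LV name:
# lvextend -L +1G /dev/mapper/datavg-somedatalv   # somedatalv is a hypothetical ext4 LV
# resize2fs /dev/mapper/datavg-somedatalv         # grows ext4 online to the new LV size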
Change the default reserved blocks percentage for ext filesystems to gain usable space (the default value is 5%):
[root@rh-tsm ~]# tune2fs -l /dev/mapper/TSMDB-TSMDB02_lv
tune2fs 1.42.9 (28-Dec-2013)
Filesystem volume name:   <none>
Last mounted on:          /TSMDATABASE/DATA/db2
Filesystem UUID:          e52277be-f820-4906-b6a3-1b30e0e88934
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         unsigned_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              8126464
Block count:              32505856
Reserved block count:     1625292
Free blocks:              6970825
Free inodes:              8126353
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Reserved GDT blocks:      1024
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              8
RAID stripe width:        8
Flex block group size:    16
Filesystem created:       Mon Jan 23 14:14:45 2017
Last mount time:          Mon Jan 23 15:03:52 2017
Last write time:          Mon Jan 23 15:03:52 2017
Mount count:              2
Maximum mount count:      -1
Last checked:             Mon Jan 23 14:14:45 2017
Check interval:           0 (<none>)
Lifetime writes:          97 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      d81553db-6a9f-42de-978d-36c420a52b9f
Journal backup:           inode blocks
[root@rh-tsm ~]# tune2fs -m 3 /dev/mapper/TSMDB-TSMDB02_lv
tune2fs 1.42.9 (28-Dec-2013)
Setting reserved blocks percentage to 3% (975175 blocks)
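A quick check of the numbers above: 32505856 blocks x 3 / 100 = 975175 reserved blocks, exactly what tune2fs reports. Dropping from 5% (1625292 blocks) to 3% releases 650117 blocks of 4 KiB each, about 2.5 GiB of usable space.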
To see the number of reserved blocks on a mounted XFS file system:
# xfs_io -x -c "resblks" /root/test
We can use this command to change the reserved number of blocks on a mounted XFS file system (replace <blocks> with an integer number):
# xfs_io -x -c "resblks <blocks>" /root/test
To compute the percentage of reserved blocks, one must get the total number of blocks in the file system by multiplying the agcount and agsize numbers together. Those values are obtained via this command:
# xfs_info /root/test
meta-data=/dev/vda2              isize=256    agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The reserved block percentage would be: 100 * "reserved blocks" / (agsize * agcount)
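A worked example with the xfs_info output above: agcount=4 x agsize=6400 = 25600 total blocks. If resblks reported, say, 256 reserved blocks (a hypothetical value), the percentage would be 100 * 256 / 25600 = 1%.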
Users may be surprised to see values less than 1%, whereas older file systems typically defaulted to 5%.
lvconvert with option -m1 adds a mirror copy; -m0 removes a mirror copy.
Create a mirror LV
# lvconvert -m1 datavg/my_lv
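You can also tell lvconvert which PV should hold the new copy (a sketch reusing the device names from the output below):
# lvconvert -m1 datavg/my_lv /dev/sdb1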
Check synchronization status
# lvs -a -o name,copy_percent,devices datavg
  LV               Copy%  Devices
  my_lv            100.00 my_lv_rimage_0(0),my_lv_rimage_1(0)
  [my_lv_rimage_0]        /dev/sda1(1)
  [my_lv_rimage_1]        /dev/sdb1(1)
  [my_lv_rmeta_0]         /dev/sda1(0)
  [my_lv_rmeta_1]         /dev/sdb1(0)
# lvs --all --segments -o +devices
  LV                VG     Attr       #Str Type   SSize  Devices
  root              centos -wi-ao----    1 linear 17.47g /dev/sda2(512)
  swap              centos -wi-ao----    1 linear  2.00g /dev/sda2(0)
  testlv            datavg rwi-aor---    2 raid1   1.00g testlv_rimage_0(0),testlv_rimage_1(0)
  [testlv_rimage_0] datavg iwi-aor---    1 linear  1.00g /dev/sdb(0)
  [testlv_rimage_1] datavg iwi-aor---    1 linear  1.00g /dev/sdc(1)
  [testlv_rmeta_0]  datavg ewi-aor---    1 linear  4.00m /dev/sdb(256)
  [testlv_rmeta_1]  datavg ewi-aor---    1 linear  4.00m /dev/sdc(0)
Once synchronized, you can remove one copy; you have to specify the disk to remove:
# lvconvert -m0 datavg/my_lv /dev/sda1
# lvs -a -o name,copy_percent,devices datavg
  LV     Copy%  Devices
  my_lv         /dev/sdb1(1)
To repair a mirror after a disk failure:
# lvconvert --repair <VG>/<LV>
To merge a snapshot into its origin volume:
# lvconvert --merge <VG>/<snapshot_LV>
To convert an existing logical volume into a snapshot of another existing logical volume (its origin):
# lvconvert -s <VG>/<origin_LV> <VG>/<snapshot_LV>    (long form: --snapshot)
To split off mirror images and form a new logical volume:
# lvconvert --splitmirrors <Images> -n <new_LV_name> <VG>/<LV>
Instead of the 'lvconvert' mirroring command, we can use the 'pvmove' command with option '-n' (logical volume name) to move the data between two devices:
# pvmove -n /dev/datavg/data1lv /dev/vdb1 /dev/sda1
pvmove is one of the simplest ways to move data between two devices, but in real environments mirroring (lvconvert) is used more often than pvmove.
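pvmove can also run in the background, and its progress shows up in the lvs report (a sketch reusing the names from the example above):
# pvmove -b -n /dev/datavg/data1lv /dev/vdb1 /dev/sda1   # -b = run in background
# lvs -a -o +move_pv                                     # the Move column shows the source PV while it runs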
List filesystem types:
manu-opensuse:~ # df -Th
Filesystem                      Type      Size  Used Avail Use% Mounted on
devtmpfs                        devtmpfs  7.8G  4.0K  7.8G   1% /dev
tmpfs                           tmpfs     7.8G  487M  7.3G   7% /dev/shm
tmpfs                           tmpfs     7.8G  1.9M  7.8G   1% /run
tmpfs                           tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda3                       btrfs      40G   35G  5.1G  88% /
/dev/sda1                       vfat      511M  8.4M  503M   2% /boot/efi
/dev/sda3                       btrfs      40G   35G  5.1G  88% /boot/grub2/i386-pc
/dev/sda4                       xfs        25G   18G  7.4G  71% /home
/dev/sda3                       btrfs      40G   35G  5.1G  88% /root
/dev/sda3                       btrfs      40G   35G  5.1G  88% /.snapshots
/dev/mapper/libraryvg-uncryptlv ext4      192G  175G  8.6G  96% /library/uncrypt
List only disks and partitions, with their UUIDs:
manu-opensuse:~ # lsblk -f
NAME                    FSTYPE      LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1                  vfat              F926-FC70                               502.7M     2% /boot/efi
├─sda2                  ext3              a2e132cf-d69c-448c-bc59-9e833bebb95c
├─sda3                  btrfs             a3da64e9-f198-4cb2-adcd-01ec0541cba9        5G    87% /
├─sda4                  xfs               4c91d4e3-89f7-4b17-9507-84b17c69d777     7.3G    71% /home
├─sda5                  swap              859cf5c6-a1ba-4711-98b2-387b8c2bd860                  [SWAP]
└─sda6                  LVM2_member       1z8s8i-WQL7-yutq-VKYE-cbBD-p1ZB-3dCuwo
  ├─libraryvg-vmlv      ext4              35bfc2a9-a3c0-4eee-82a1-1f62ca52aad7     9.3G    88% /virtual_vm
  ├─libraryvg-uncryptlv ext4              4feeb184-a8a2-44a2-ac49-ade56c01853a     8.6G    91% /library/uncrypt
  └─libraryvg-cryptlv   crypto_LUKS       5667183b-30ad-4bba-9c00-df9142079076
For a persistent mount, you can also use the UUID:
UUID=a3da64e9-f198-4cb2-adcd-01ec0541cba9 /           btrfs defaults             0 0
UUID=a3da64e9-f198-4cb2-adcd-01ec0541cba9 /.snapshots btrfs subvol=/@/.snapshots 0 0
UUID=a3da64e9-f198-4cb2-adcd-01ec0541cba9 /var        btrfs subvol=/@/var        0 0
UUID=4c91d4e3-89f7-4b17-9507-84b17c69d777 /home       xfs   defaults             0 0
UUID=859cf5c6-a1ba-4711-98b2-387b8c2bd860 swap        swap  defaults             0 0
/dev/libraryvg/vmlv                       /virtual_vm ext4  defaults             0 2
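The UUIDs come straight from blkid (or the lsblk -f output above); for example:
# blkid -s UUID -o value /dev/sda4
4c91d4e3-89f7-4b17-9507-84b17c69d777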
Duplicate PVID and VGID on disks
LVM commands (such as vgs, lvchange, etc.) display messages like this when trying to list VGs or LVs:
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/dm-5 not /dev/sdd
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowerb not /dev/sde
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sddlmab not /dev/sdf
With a default configuration, LVM commands will scan for devices in /dev and check every resulting device for LVM metadata. This is caused by the default filter in the /etc/lvm/lvm.conf, which is as follows:
filter = [ "a/.*/" ]
When using Device Mapper Multipath or other multipath software such as EMC PowerPath or Hitachi Dynamic Link Manager (HDLM), each path to a particular logical unit number (LUN) is registered as a different SCSI device, such as /dev/sdb or /dev/sdc. The multipath software will then create a new device that maps to those individual paths, such as /dev/mapper/mpath1 or /dev/mapper/mpatha for Device Mapper Multipath, /dev/emcpowera for EMC PowerPath, or /dev/sddlmab for Hitachi HDLM. Since each LUN has multiple device nodes in /dev that point to the same underlying data, they all contain the same LVM metadata and thus LVM commands will find the same metadata multiple times and report them as duplicates.
The filter you configure should include all devices that need to be checked for LVM metadata, such as the local hard drive with the root volume group on it and any multipathed devices. By rejecting the underlying paths to a multipath device (such as /dev/sdb, /dev/sdd, etc) you can avoid these duplicate PV warnings, since each unique metadata area will only be found once on the multipath device itself. The following examples show filters that will avoid duplicate PV warnings due to multiple storage paths being available.
This filter accepts the second partition on the first hard drive (/dev/sda2) and any device-mapper multipath devices, while rejecting everything else.
filter = [ "a|/dev/sda2$|", "a|/dev/mapper/mpath.*|", "r|.*|" ]
This filter accepts all HP SmartArray controllers and any EMC PowerPath devices.
filter = [ "a|/dev/cciss/.*|", "a|/dev/emcpower.*|", "r|.*|" ]
You can also test a filter on the fly, without modifying the /etc/lvm/lvm.conf file, by adding the --config argument to the LVM command, as in the following example.
# lvs --config 'devices{ filter = [ "a|/dev/emcpower.*|", "r|.*|" ] }'
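Once the filter works, write it to /etc/lvm/lvm.conf and, on RHEL/CentOS, rebuild the initramfs so the boot-time copy of lvm.conf matches; a sketch:
# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak   # keep a backup before editing
# dracut -f                                    # regenerate the initramfs with the new filter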
Problem with duplicate PVs: deactivate the VGs, import the clone under a new name, rescan, and reactivate:
# vgchange -an
# vgimportclone --basevgname tsmdb01vg /dev/md124vgs
# vgscan
# vgchange -ay
[root@hrstsm01 multipath]# pvcreate /dev/mapper/mpathc
  Device /dev/mapper/mpathc excluded by a filter.
[root@hrstsm01 multipath]# dd if=/dev/zero of=/dev/mapper/mpathc bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00945451 s, 54.2 kB/s
[root@hrstsm01 multipath]# pvcreate /dev/mapper/mpathc
  Device /dev/mapper/mpathc excluded by a filter.
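A dd of only the first sector is often not enough: a GPT, for instance, keeps a backup header at the end of the device, and pvcreate keeps rejecting it. Assuming the device really can be wiped, wipefs clears every known signature; a sketch:
# wipefs -a /dev/mapper/mpathc    # removes all filesystem/partition-table signatures
# pvcreate /dev/mapper/mpathc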
First:
# systemctl daemon-reload
followed by:
# systemctl restart remote-fs.target
or
# systemctl restart local-fs.target