http://www.voleg.info/linux-mirror-system-disk.html

===== Disk cloning RHEL9 =====

For RHEL9 and higher.

After booting from a cloned disk, the server will start in single-user mode. No PV or VG is visible.

You have to update the file **/etc/lvm/devices/system.devices** to match the disk WWN:
<cli prompt='#'>
[root@rhtest ~]# multipath -ll
mpatha (36005076xxxxxxxxxxx000000000017b9) dm-0 IBM,2145
size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
</cli>

Or use the command:
<cli prompt='#'>
lvmdevices --update
</cli>

<cli prompt='#'>
[root@rhtest ~]# cat /etc/lvm/devices/system.devices
# LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 65293 at Wed May 21 10:23:35 2025
HOSTNAME=rhtest
VERSION=1.1.10
IDTYPE=mpath_uuid IDNAME=part3-mpath-36005076xxxxxxxxxxx00000000001616 DEVNAME=/dev/mapper/mpatha3 PVID=JzZ6Nf4VUZFbBcNle1TX9miPydVISnYT
</cli>

Also update the multipath bindings file if you want to change the alias order:
<cli prompt='#'>
[root@rhtest ~]# cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpatha 36005076xxxxxxxxxxx00000000001616
mpathb 36005076xxxxxxxxxxx000000000017b9
</cli>

After updating all files, import the VG devices (a quick check follows below):
  * To import all PVs and VGs (this will only add the multipath map): vgimportdevices -a
  * If only some of the disks contain PVs to be used by this system, run lvmdevices --adddev [device name] for each device to be used, for example: lvmdevices --adddev /dev/mapper/mpathc
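
A quick sketch of the check afterwards: lvmdevices with no arguments lists the entries now present in the devices file, and pvs should show the imported PVs again.
<cli prompt='#'>
[root@rhtest ~]# lvmdevices
[root@rhtest ~]# pvs
</cli>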

Now reboot. Everything should now be OK.

  * By default, in a newly installed RHEL9 system, LVM2 uses the /etc/lvm/devices/system.devices file to identify which disks are PVs for this install. The /etc/lvm/lvm.conf filters are ignored. As imported PVs are not in this file, LVM will not work with them.
  * If a RHEL8 system is upgraded to RHEL9, the /etc/lvm/devices/system.devices file does not exist; LVM then runs as if use_devicesfile = 0 were set and uses the /etc/lvm/lvm.conf filters.
  * If the /etc/lvm/lvm.conf use_devicesfile parameter is set to 0, /etc/lvm/devices/system.devices is not used and the /etc/lvm/lvm.conf filters apply.
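
To check which mode a system is running in, one way (a sketch; the printed value is illustrative) is to query the effective setting:
<cli prompt='#'>
[root@rhtest ~]# lvmconfig --type full devices/use_devicesfile
use_devicesfile=1
</cli>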

===== LVM import same VGname =====
<cli prompt='#'>
[root@rhtest ~]# vgchange -ay vg01
[root@rhtest ~]# vgchange -ay vg02
</cli>

Check the filesystem type:
<cli prompt='#'>
[root@rhtest ~]# lsblk -o FSTYPE,MOUNTPOINT,NAME /dev/sdd
FSTYPE      MOUNTPOINT NAME
                       sdd
vfat                   ├─sdd1
xfs                    ├─sdd2
LVM2_member            └─sdd3
xfs         /            ├─rhel_redhattest1-root
swap        [SWAP]       └─rhel_redhattest1-swap
</cli>

Check the filesystem using xfs_repair for xfs and fsck for ext4:
<cli prompt='#'>
[root@rhtest ~]# xfs_repair -v /dev/mapper/vg01-rhel_redhat--data
Phase 1 - find and verify superblock...
        - block cache size set to 369264 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 25428 tail block 25424
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed. Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair. If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
</cli>
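
For ext4 the equivalent check would be e2fsck (a sketch; the LV name here is illustrative):
<cli prompt='#'>
[root@rhtest ~]# e2fsck -f /dev/mapper/vg01-datalv
</cli>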

If the mount fails, I first ran xfs_repair -L on the LV.
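
For example, with the vg02-root LV mounted below:
<cli prompt='#'>
[root@rhtest ~]# xfs_repair -L /dev/mapper/vg02-root
</cli>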

Mount the filesystem:
<cli prompt='#'>
[root@rhtest ~]# mount -t xfs /dev/mapper/vg02-root /mnt3
mount: /mnt3: wrong fs type, bad option, bad superblock on /dev/mapper/vg02-root, missing codepage or helper program, or other error.
[root@rhtest ~]# mount -o nouuid /dev/mapper/vg02-root /mnt3
</cli>
To see the number of reserved blocks on a mounted XFS file system:
<cli prompt='#'>
# xfs_io -x -c "resblks" /root/test
</cli>
We can use this command to change the reserved number of blocks on a mounted XFS file system (replace <blocks> with an integer number):
<cli prompt='#'>
# xfs_io -x -c "resblks <blocks>" /root/test
</cli>
To compute the percentage of reserved blocks, one must get the total number of blocks in the file system by multiplying the agcount and agsize numbers together. Those values are obtained via this command:
<cli prompt='#'>
# xfs_info /root/test
meta-data=/dev/vda2              isize=256    agcount=4, agsize=6400 blks
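</cli>

In this example the total is agcount x agsize = 4 x 6400 = 25600 blocks, so 512 reserved blocks, for instance, would be 512 / 25600 = 2% of the file system.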
Create a mirror LV:
<cli prompt='#'>
# lvconvert -m1 datavg/my_lv
</cli>
To repair a mirror after suffering a disk failure:
<cli prompt='#'>
# lvconvert --repair
</cli>
To merge a snapshot into its origin volume:
<cli prompt='#'>
# lvconvert --merge
</cli>
To create a snapshot from an existing logical volume, using another existing logical volume as its origin:
<cli prompt='#'>
# lvconvert -s
# lvconvert --snapshot
</cli>
To split off mirror images to form a new logical volume:
<cli prompt='#'>
# lvconvert --splitmirrors <Images>
</cli>
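
For example, to split one image off the datavg/my_lv mirror created above into a new LV (the new LV name is illustrative):
<cli prompt='#'>
# lvconvert --splitmirrors 1 --name my_lv_split datavg/my_lv
</cli>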
Instead of using the 'lvconvert' mirroring command, here we use the 'pvmove' command with option '-n' (logical volume name) to move data between two devices (pvmove mirrors the extents internally, then drops the source):
<cli prompt='#'>
# pvmove -n /dev/datavg/data1lv /dev/vdb1 /dev/sda1
</cli>
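
To verify the new placement afterwards (standard lvs reporting; the output depends on your layout):
<cli prompt='#'>
# lvs -a -o +devices datavg
</cli>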
<cli prompt='#'>
/dev/libraryvg/vmlv    /virtual_vm    ext4    defaults    0 2
</cli>
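
After adding the entry to fstab, a quick sanity check (mount -a mounts everything listed in fstab that is not yet mounted):
<cli prompt='#'>
# mount -a
# df -h /virtual_vm
</cli>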

===== list PVID / VGID =====

To list the PVID and VGID of a disk:
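A minimal sketch using standard pvs report fields (pv_uuid is the PVID, vg_uuid the VGID):
<cli prompt='#'>
[root@rhtest ~]# pvs -o pv_name,pv_uuid,vg_name,vg_uuid
</cli>
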
===== Errors on LVM =====
<cli prompt='#'>
# systemctl restart local-fs.target
</cli>

==== How to disable the devices file (RHEL9 and higher) ====

Disabling the devices file automatically enables the lvm.conf device filter.

LVM commands interact with the system.devices file, which lists the devices that are visible and usable to LVM. This feature is enabled by default in Red Hat Enterprise Linux 9.

You can enable LVM to access and use all devices on the system, which overrides the restrictions caused by the devices listed in system.devices.

Add the following to the devices section of **/etc/lvm/lvm.conf**:
<code>
devices {
    use_devicesfile = 0
}
</code>

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/limiting-lvm-device-visibility-and-usage_configuring-and-managing-logical-volumes