====== linux:linux_lvm ======

Last modified: 2025/07/04 16:14 by manu
  
http://www.voleg.info/linux-mirror-system-disk.html

===== Disk cloning RHEL9 =====

For RHEL9 and higher.

After booting on a clone, the server starts in single-user mode: no PV or VG is visible.

You have to update the file **/etc/lvm/devices/system.devices** to match the disk WWN
<cli prompt='#'>
[root@rhtest ~]# multipath -ll
mpatha (36005076xxxxxxxxxxx000000000017b9) dm-0 IBM,2145
size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
</cli>

Or use the command
<cli prompt='#'>
lvmdevices --update
</cli>

<cli prompt='#'>
[root@rhtest ~]# cat /etc/lvm/devices/system.devices
# LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 65293 at Wed May 21 10:23:35 2025
HOSTNAME=rhtest
VERSION=1.1.10
IDTYPE=mpath_uuid IDNAME=part3-mpath-36005076xxxxxxxxxxx00000000001616 DEVNAME=/dev/mapper/mpatha3 PVID=JzZ6Nf4VUZFbBcNle1TX9miPydVISnYT
</cli>

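As a quick sanity check before rebooting, the WWID recorded in system.devices can be compared with what multipath reports. This is not from the original page: a minimal sketch assuming the IDNAME format shown above (`<partition>-mpath-<wwid>`); it uses a sample file under /tmp, while on a real system you would read /etc/lvm/devices/system.devices.

```shell
# Sample file with the same layout as /etc/lvm/devices/system.devices
cat > /tmp/system.devices <<'EOF'
HOSTNAME=rhtest
VERSION=1.1.10
IDTYPE=mpath_uuid IDNAME=part3-mpath-36005076xxxxxxxxxxx00000000001616 DEVNAME=/dev/mapper/mpatha3 PVID=JzZ6Nf4VUZFbBcNle1TX9miPydVISnYT
EOF

# Keep only the WWID part of the IDNAME field; this should match the
# WWID shown by `multipath -ll` for the boot disk.
wwid=$(sed -n 's/.*IDNAME=[^ ]*-mpath-\([0-9a-fx]*\).*/\1/p' /tmp/system.devices)
echo "$wwid"
```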
Also update the multipath bindings if you want to change the alias order
<cli prompt='#'>
[root@rhtest ~]# cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpatha 36005076xxxxxxxxxxx00000000001616
mpathb 36005076xxxxxxxxxxx000000000017b9
</cli>

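Not part of the original page: a small sketch of looking up which WWID is bound to an alias, using sample data in the same format as /etc/multipath/bindings (comment lines start with `#`).

```shell
# Sample bindings file in the /etc/multipath/bindings format
cat > /tmp/bindings <<'EOF'
# Multipath bindings, Version : 1.0
# alias wwid
mpatha 36005076xxxxxxxxxxx00000000001616
mpathb 36005076xxxxxxxxxxx000000000017b9
EOF

# Skip comment lines and print the WWID bound to alias mpathb
wwid_b=$(awk '!/^#/ && $1 == "mpathb" {print $2}' /tmp/bindings)
echo "$wwid_b"
```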
After updating all files, import the VG devices:
  * To import all PVs and VGs (this will only add the multipath map): vgimportdevices -a
  * If only some of the disks contain PVs to be used by this system, run for each device to be used: lvmdevices --adddev <device name>
  For example: lvmdevices --adddev /dev/mapper/mpathc

Now reboot. Everything should be OK.

  * By default on a newly installed RHEL9 system, LVM2 uses the /etc/lvm/devices/system.devices file to identify which disks are PVs for this install. The /etc/lvm/lvm.conf filters are ignored. As the imported PVs are not in this file, LVM will not work with them.

  * If a RHEL8 system is upgraded to RHEL9, the /etc/lvm/devices/system.devices file does not exist; LVM then runs as if use_devicesfile = 0 were set and uses the /etc/lvm/lvm.conf filters.

  * If the /etc/lvm/lvm.conf use_devicesfile parameter is set to 0, /etc/lvm/devices/system.devices is not used and the /etc/lvm/lvm.conf filters are used.

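The three rules above boil down to a single precedence test. Not from the original page: a shell sketch of that logic; the path and the use_devicesfile value are demo assumptions, not read from a real lvm.conf.

```shell
# Demo values (assumptions for this sketch)
devices_file=/tmp/demo-system.devices
use_devicesfile=1          # as it would be read from lvm.conf
rm -f "$devices_file"      # simulate an upgraded RHEL8 host: file absent

# Filters apply when the devices file is disabled OR missing;
# otherwise system.devices restricts which PVs LVM will see.
if [ "$use_devicesfile" = 0 ] || [ ! -e "$devices_file" ]; then
  mode="lvm.conf filters apply"
else
  mode="system.devices restricts visible PVs"
fi
echo "$mode"
```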
  
===== LVM import same VGname =====
</cli>
  
Check filesystem type
<cli prompt='#'>
[root@rhtest ~]# lsblk -o FSTYPE,MOUNTPOINT,NAME /dev/sdd
FSTYPE      MOUNTPOINT NAME
                       sdd
vfat                   ├─sdd1
xfs                    ├─sdd2
LVM2_member            └─sdd3
xfs                    ├─rhel_redhattest1-root
swap        [SWAP]     └─rhel_redhattest1-swap
</cli>

Check the filesystem using xfs_repair for xfs and fsck for ext4
  
If there is an error on mount, I've first done a: xfs_repair -L <LV>

Mount the filesystem
<cli prompt='#'>
[root@rhtest ~]# mount -t xfs /dev/mapper/vg02-root /mnt3
mount: /mnt3: wrong fs type, bad option, bad superblock on /dev/mapper/vg02-root, missing codepage or helper program, or other error.
[root@rhtest ~]# mount -o nouuid /dev/mapper/vg02-root /mnt3
</cli>
  
  
To see the number of reserved blocks on a mounted XFS file system:
<cli prompt='#'>
# xfs_io -x -c "resblks" /root/test
</cli>
  
We can use this command to change the reserved number of blocks on a mounted XFS file system (replace <blocks> with an integer number):
<cli prompt='#'>
# xfs_io -x -c "resblks <blocks>" /root/test
</cli>
  
To compute the percentage of reserved blocks, one must get the total number of blocks in the file system by multiplying the agcount and agsize numbers together. Those values are obtained via this command:
<cli prompt='#'>
# xfs_info /root/test
meta-data=/dev/vda2              isize=256    agcount=4, agsize=6400 blks
  
Create a mirror LV
<cli prompt='#'>
# lvconvert -m1 datavg/my_lv
</cli>
  
To repair a mirror after suffering a disk failure:
<cli prompt='#'>
# lvconvert --repair
</cli>
  
To merge a snapshot into its origin volume:
<cli prompt='#'>
# lvconvert --merge
</cli>
  
To create a snapshot from an existing logical volume, using another existing logical volume as its origin:
<cli prompt='#'>
# lvconvert -s
# lvconvert --snapshot
  
To split off mirror images to form a new logical volume:
<cli prompt='#'>
# lvconvert --splitmirrors <Images>
</cli>
  
Instead of using the ‘lvconvert’ mirroring command, we use here the ‘pvmove’ command with option ‘-n’ (logical volume name) to mirror data between two devices.
<cli prompt='#'>
# pvmove -n /dev/datavg/data1lv /dev/vdb1 /dev/sda1
</cli>
/dev/libraryvg/vmlv                       /virtual_vm    ext4   defaults                     0  2
</cli>

===== List PVID / VGID =====

List the PVID and VGID of a disk.

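One way to list them, sketched here rather than taken from the original page: pv_uuid and vg_uuid are standard pvs report fields. The command needs root and an LVM setup, so this sketch falls back to a message on hosts without either.

```shell
# Show PV name, PV UUID (PVID), VG name and VG UUID (VGID) per disk
if command -v pvs >/dev/null 2>&1; then
  out=$(pvs --noheadings -o pv_name,pv_uuid,vg_name,vg_uuid 2>/dev/null)
fi
# Fallback so the sketch is runnable without root/LVM
[ -n "${out:-}" ] || out="pvs unavailable or no PVs visible"
echo "$out"
```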
===== Errors on LVM =====
  
  
# systemctl restart local-fs.target

==== How to disable the devices file (RHEL9 and higher) ====

Disabling the devices file automatically enables the lvm.conf device filter.

Use LVM commands to control LVM device scanning. LVM commands interact with a file called system.devices, which lists the visible and usable devices. This feature is enabled by default in Red Hat Enterprise Linux 9.

You can enable Logical Volume Manager (LVM) to access and use all devices on the system, which overrides the restrictions caused by the devices listed in system.devices.

Add the line to **/etc/lvm/lvm.conf**
<code>
use_devicesfile=0
</code>

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/limiting-lvm-device-visibility-and-usage_configuring-and-managing-logical-volumes