linux:linux_lvm [2021/02/19 00:48]
manu [Extend a FS]
linux:linux_lvm [2025/07/04 16:14] (current)
manu [Force systemd to re-read fstab and create automount units]
http://www.voleg.info/linux-mirror-system-disk.html
  
===== Disk cloning RHEL9 =====

For RHEL9 and higher.

After booting on a clone, the server starts in single-user mode: no PV or VG is visible.

You have to update the file **/etc/lvm/devices/system.devices** to match the new disk WWN:
<cli prompt='#'>
[root@rhtest ~]# multipath -ll
mpatha (36005076xxxxxxxxxxx000000000017b9) dm-0 IBM,2145
size=60G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
</cli>

Or use the command:
<cli prompt='#'>
lvmdevices --update
</cli>
 +
<cli prompt='#'>
[root@rhtest ~]# cat /etc/lvm/devices/system.devices
# LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 65293 at Wed May 21 10:23:35 2025
HOSTNAME=rhtest
VERSION=1.1.10
IDTYPE=mpath_uuid IDNAME=part3-mpath-36005076xxxxxxxxxxx00000000001616 DEVNAME=/dev/mapper/mpatha3 PVID=JzZ6Nf4VUZFbBcNle1TX9miPydVISnYT
</cli>
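The WWN swap in **system.devices** can also be scripted. Below is a minimal sketch that works on a temporary copy of the file; the two WWNs are the illustrative values from the listings above. On a real system, take the new WWN from multipath -ll and edit /etc/lvm/devices/system.devices itself.

```shell
# Sketch only: rewrite the IDNAME WWN in a copy of system.devices.
# OLD_WWN/NEW_WWN are illustrative values, not real WWNs.
OLD_WWN=36005076xxxxxxxxxxx00000000001616
NEW_WWN=36005076xxxxxxxxxxx000000000017b9

devfile=$(mktemp)
cat > "$devfile" <<EOF
HOSTNAME=rhtest
VERSION=1.1.10
IDTYPE=mpath_uuid IDNAME=part3-mpath-$OLD_WWN DEVNAME=/dev/mapper/mpatha3 PVID=JzZ6Nf4VUZFbBcNle1TX9miPydVISnYT
EOF

# Replace the old WWN everywhere it appears; keep a .bak copy of the original.
sed -i.bak "s/$OLD_WWN/$NEW_WWN/g" "$devfile"
grep "IDNAME=" "$devfile"
```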
 +
Also update the multipath bindings file if you want to change the alias order:
<cli prompt='#'>
[root@rhtest ~]# cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpatha 36005076xxxxxxxxxxx00000000001616
mpathb 36005076xxxxxxxxxxx000000000017b9
</cli>
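Swapping which disk gets the mpatha alias can be done mechanically; here is a sketch on a temporary copy of the bindings file (WWNs are the illustrative values above — on a real system edit /etc/multipath/bindings):

```shell
# Sketch only: exchange the mpatha/mpathb aliases in a copy of the
# bindings file (illustrative WWNs from the example above).
bindings=$(mktemp)
cat > "$bindings" <<'EOF'
mpatha 36005076xxxxxxxxxxx00000000001616
mpathb 36005076xxxxxxxxxxx000000000017b9
EOF

# Three-step swap via a placeholder alias so the two names trade places.
sed -i -e 's/^mpatha /mpathX /' -e 's/^mpathb /mpatha /' -e 's/^mpathX /mpathb /' "$bindings"
cat "$bindings"
```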
 +
After updating all files, import the VG devices:
  * To import all PVs and VGs (this only adds the multipath map): vgimportdevices -a
  * If only some of the disks contain PVs to be used by this system, run for each device to be used: lvmdevices --adddev [device name]
    For example: lvmdevices --adddev /dev/mapper/mpathc

Now reboot; everything should be back to normal.
 +
  * By default, on a newly installed RHEL9 system, LVM2 uses the /etc/lvm/devices/system.devices file to identify which disks are PVs for this install. The /etc/lvm/lvm.conf filters are ignored. As imported PVs are not in this file, LVM will not work with them.

  * If a RHEL8 system is upgraded to RHEL9, the /etc/lvm/devices/system.devices file does not exist; LVM runs as if use_devicesfile = 0 were set and uses the /etc/lvm/lvm.conf filters.

  * If the /etc/lvm/lvm.conf use_devicesfile parameter is set to 0, /etc/lvm/devices/system.devices is not used and the /etc/lvm/lvm.conf filters are used.
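The three rules above can be summarized as a small decision function. This is only a sketch mirroring the documented behaviour (file paths are passed as parameters), not actual LVM logic:

```shell
# Sketch: which device-selection mechanism LVM uses, per the rules above.
lvm_device_source() {
    conf=$1; devfile=$2
    if grep -q '^[[:space:]]*use_devicesfile[[:space:]]*=[[:space:]]*0' "$conf" 2>/dev/null; then
        echo "lvm.conf filters"      # explicitly disabled in lvm.conf
    elif [ -f "$devfile" ]; then
        echo "system.devices"        # default on a freshly installed RHEL9
    else
        echo "lvm.conf filters"      # e.g. RHEL8 upgraded to RHEL9, file absent
    fi
}

conf=$(mktemp); devfile=$(mktemp)
lvm_device_source "$conf" "$devfile"   # prints: system.devices
```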
 +
 +
===== LVM import same VGname =====

Here, as you can see from **Not using device**, it's a duplicate VG, so it's not possible to import the VG directly:
<cli prompt='#'>
[root@rhtest ~]# pvscan
  WARNING: Not using device /dev/sdc1 for PV GuRFbq-coDt-Bg6l-lraR-JYRZ-zbQx-8mknGe.
  WARNING: Not using device /dev/sdd3 for PV m8MZp2-r66w-lR11-ilJ7-bjPS-zopv-BFGFuc.
  WARNING: PV GuRFbq-coDt-Bg6l-lraR-JYRZ-zbQx-8mknGe prefers device /dev/sdb1 because device is used by LV.
  WARNING: PV m8MZp2-r66w-lR11-ilJ7-bjPS-zopv-BFGFuc prefers device /dev/sda3 because device is used by LV.
  PV /dev/sdb1   VG rhel_redhat-data   lvm2 [<30.00 GiB / 1020.00 MiB free]
  PV /dev/sda3   VG rhel_redhattest1   lvm2 [22.41 GiB / 0    free]
  Total: 2 [<52.41 GiB] / in use: 2 [<52.41 GiB] / in no VG: 0 [0   ]
</cli>
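The skipped devices can be extracted from that output automatically. A sketch follows; the pvscan output above is embedded as sample text, and on a live system you would pipe pvscan 2>&1 instead:

```shell
# Sketch: list the devices pvscan skipped as duplicates.
pvscan_out='  WARNING: Not using device /dev/sdc1 for PV GuRFbq-coDt-Bg6l-lraR-JYRZ-zbQx-8mknGe.
  WARNING: Not using device /dev/sdd3 for PV m8MZp2-r66w-lR11-ilJ7-bjPS-zopv-BFGFuc.'

dupes=$(printf '%s\n' "$pvscan_out" | awk '/Not using device/ {print $5}')
echo "$dupes"
```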
 +
To import the VG, you have to rename it during the import, like this:
<cli prompt='#'>
[root@rhtest ~]# vgimportclone -n vg01 /dev/sdc1
[root@rhtest ~]# vgimportclone -n vg02 /dev/sdd3
[root@rhtest ~]# vgs
  VG               #PV #LV #SN Attr   VSize   VFree
  rhel_redhat-data   1   1   0 wz--n- <30.00g 1020.00m
  rhel_redhattest1   1   2   0 wz--n-  22.41g       0
  vg01               1   1   0 wz--n- <30.00g 1020.00m
  vg02               1   2   0 wz--n-  22.41g       0
[root@rhtest ~]# lvs
  LV               VG               Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  rhel_redhat-data rhel_redhat-data -wi-ao----  29.00g
  root             rhel_redhattest1 -wi-ao---- <20.01g
  swap             rhel_redhattest1 -wi-ao----   2.40g
  rhel_redhat-data vg01             -wi-------  29.00g
  root             vg02             -wi------- <20.01g
  swap             vg02             -wi-------   2.40g
</cli>
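With several clones to import, the vgimportclone calls can be generated from a device list. A sketch (device names and the vg prefix are illustrative; review the generated commands before running them on a real system):

```shell
# Sketch: generate one vgimportclone command per duplicate PV.
cmds=$(mktemp)
i=1
for dev in /dev/sdc1 /dev/sdd3; do
    printf 'vgimportclone -n vg%02d %s\n' "$i" "$dev"
    i=$((i+1))
done > "$cmds"
cat "$cmds"    # review, then run: sh "$cmds"
```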
 +
Now you're able to mount the filesystems; just activate the 2 new VGs first:
<cli prompt='#'>
[root@rhtest ~]# vgchange -ay vg01
[root@rhtest ~]# vgchange -ay vg02
</cli>
 +
Check the filesystem types:
<cli prompt='#'>
[root@rhtest ~]# lsblk -o FSTYPE,MOUNTPOINT,NAME /dev/sdd
FSTYPE      MOUNTPOINT NAME
                       sdd
vfat                   ├─sdd1
xfs                    ├─sdd2
LVM2_member            └─sdd3
xfs         /            ├─rhel_redhattest1-root
swap        [SWAP]       └─rhel_redhattest1-swap
</cli>
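To find which partition carries the PV without eyeballing the tree, the lsblk output can be filtered. A sketch using the sample above as embedded text; on a live system you would pipe lsblk -o FSTYPE,NAME -nr /dev/sdd instead:

```shell
# Sketch: pick the LVM2_member partition out of (sample) lsblk output.
lsblk_out='vfat sdd1
xfs sdd2
LVM2_member sdd3'

pv_part=$(printf '%s\n' "$lsblk_out" | awk '$1 == "LVM2_member" {print $2}')
echo "$pv_part"
```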
 +
 +
Check the filesystem using xfs_repair for XFS and fsck for ext4:
<cli prompt='#'>
[root@rhtest ~]# xfs_repair -v /dev/mapper/vg01-rhel_redhat--data
Phase 1 - find and verify superblock...
        - block cache size set to 369264 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 25428 tail block 25424
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
</cli>
 +
If the mount fails, I first ran: xfs_repair -L <LV>

Mount the filesystem:
<cli prompt='#'>
[root@rhtest ~]# mount -t xfs /dev/mapper/vg02-root /mnt3
mount: /mnt3: wrong fs type, bad option, bad superblock on /dev/mapper/vg02-root, missing codepage or helper program, or other error.
[root@rhtest ~]# mount -o nouuid /dev/mapper/vg02-root /mnt3
</cli>
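The plain-mount-then-nouuid retry can be wrapped in a small helper. This is only a sketch (mount_xfs_clone is a hypothetical name, and it must run as root against a real device):

```shell
# Sketch: try a normal XFS mount first; on failure (typically a duplicate
# UUID on a cloned disk) retry with -o nouuid.
mount_xfs_clone() {
    dev=$1; mnt=$2
    if mount -t xfs "$dev" "$mnt" 2>/dev/null; then
        return 0
    fi
    mount -t xfs -o nouuid "$dev" "$mnt"
}

# Example (requires root and a real device):
# mount_xfs_clone /dev/mapper/vg02-root /mnt3
```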
  
===== Create a new filesystem =====
  
To see the number of reserved blocks on a mounted XFS file system:
<cli prompt='#'>
# xfs_io -x -c "resblks" /root/test
</cli>

We can use this command to change the reserved number of blocks on a mounted XFS file system (replace <blocks> with an integer number):
<cli prompt='#'>
# xfs_io -x -c "resblks <blocks>" /root/test
</cli>

To compute the percentage of reserved blocks, one must get the total number of blocks in the file system by multiplying the agcount and agsize numbers together. Those values are obtained via this command:
<cli prompt='#'>
# xfs_info /root/test
meta-data=/dev/vda2              isize=256    agcount=4, agsize=6400 blks
     * block reservation. Hence by default we cover roughly 2000 concurrent
     * allocation reservations.

===== Create mirror LV =====
  
  
Create a mirror LV:
<cli prompt='#'>
# lvconvert -m1 datavg/my_lv
</cli>
  
To repair a mirror after suffering a disk failure:
<cli prompt='#'>
# lvconvert --repair
</cli>

To merge a snapshot into its origin volume:
<cli prompt='#'>
# lvconvert --merge
</cli>

To create a snapshot from an existing logical volume, using another existing logical volume as its origin:
<cli prompt='#'>
# lvconvert -s
# lvconvert --snapshot
</cli>
  
To split off mirror images to form a new logical volume:
<cli prompt='#'>
# lvconvert --splitmirrors Images
</cli>

==== LVM pvmove Mirroring Method ====
  
Instead of using the ‘lvconvert’ mirroring command, we use here the ‘pvmove’ command with option ‘-n’ (logical volume name) to mirror data between two devices:
<cli prompt='#'>
# pvmove -n /dev/datavg/data1lv /dev/vdb1 /dev/sda1
</cli>
This command is one of the simplest ways to mirror data between two devices, but in real environments lvconvert mirroring is used more often than pvmove.

==== Filesystems list ====

List filesystem types:
<cli prompt='#'>
manu-opensuse:~ # df -Th
Filesystem                      Type      Size  Used Avail Use% Mounted on
devtmpfs                        devtmpfs  7.8G  4.0K  7.8G   1% /dev
tmpfs                           tmpfs     7.8G  487M  7.3G   7% /dev/shm
tmpfs                           tmpfs     7.8G  1.9M  7.8G   1% /run
tmpfs                           tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda3                       btrfs      40G   35G  5.1G  88% /
/dev/sda1                       vfat      511M  8.4M  503M   2% /boot/efi
/dev/sda3                       btrfs      40G   35G  5.1G  88% /boot/grub2/i386-pc
/dev/sda4                       xfs        25G   18G  7.4G  71% /home
/dev/sda3                       btrfs      40G   35G  5.1G  88% /root
/dev/sda3                       btrfs      40G   35G  5.1G  88% /.snapshots
/dev/mapper/libraryvg-uncryptlv ext4      192G  175G  8.6G  96% /library/uncrypt
</cli>
 +
List only disks and partitions, with UUID:
<cli prompt='#'>
manu-opensuse:~ # lsblk -f
NAME                    FSTYPE LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1                  vfat         F926-FC70                               502.7M     2% /boot/efi
├─sda2                  ext3         a2e132cf-d69c-448c-bc59-9e833bebb95c
├─sda3                  btrfs        a3da64e9-f198-4cb2-adcd-01ec0541cba9        5G    87% /
├─sda4                  xfs          4c91d4e3-89f7-4b17-9507-84b17c69d777      7.3G    71% /home
├─sda5                  swap         859cf5c6-a1ba-4711-98b2-387b8c2bd860                  [SWAP]
└─sda6                  LVM2_member  1z8s8i-WQL7-yutq-VKYE-cbBD-p1ZB-3dCuwo
  ├─libraryvg-vmlv      ext4         35bfc2a9-a3c0-4eee-82a1-1f62ca52aad7      9.3G    88% /virtual_vm
  ├─libraryvg-uncryptlv ext4         4feeb184-a8a2-44a2-ac49-ade56c01853a      8.6G    91% /library/uncrypt
  └─libraryvg-cryptlv   crypto_LUKS  5667183b-30ad-4bba-9c00-df9142079076
</cli>
 +
For persistent mounts, you can also use the UUID in /etc/fstab:
<cli prompt='#'>
UUID=a3da64e9-f198-4cb2-adcd-01ec0541cba9  /              btrfs  defaults              0  0
UUID=a3da64e9-f198-4cb2-adcd-01ec0541cba9  /.snapshots    btrfs  subvol=/@/.snapshots  0  0
UUID=a3da64e9-f198-4cb2-adcd-01ec0541cba9  /var           btrfs  subvol=/@/var         0  0
UUID=4c91d4e3-89f7-4b17-9507-84b17c69d777  /home          xfs    defaults              0  0
UUID=859cf5c6-a1ba-4711-98b2-387b8c2bd860  swap           swap   defaults              0  0
/dev/libraryvg/vmlv                        /virtual_vm    ext4   defaults              0  2
</cli>
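An fstab entry can be assembled from the UUID programmatically. A sketch (the UUID is hard-coded from the example above; on a live system obtain it with blkid -s UUID -o value /dev/sda4):

```shell
# Sketch: build a UUID-based fstab line for /home (illustrative values).
uuid=4c91d4e3-89f7-4b17-9507-84b17c69d777
entry=$(printf '%-42s %-14s %-6s %-9s 0  0' "UUID=$uuid" "/home" "xfs" "defaults")
echo "$entry"

# To install it (as root, after review):
# echo "$entry" >> /etc/fstab
```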
 +
===== List PVID / VGID =====

To list the PVID and VGID of a disk, use pvs with the uuid output fields, e.g. pvs -o +pv_uuid,vg_uuid.
  
===== Errors on LVM =====
  
# systemctl restart local-fs.target

==== How to disable the devices file (RHEL9 and higher) ====

Disabling the devices file automatically enables the lvm.conf device filter.

LVM commands interact with a file called system.devices, which lists the visible and usable devices. This feature is enabled by default in Red Hat Enterprise Linux 9.

You can allow Logical Volume Manager (LVM) to access and use all devices on the system, overriding the restrictions caused by the devices listed in system.devices.

Add the line to **/etc/lvm/lvm.conf**:
<code>
use_devicesfile=0
</code>

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/limiting-lvm-device-visibility-and-usage_configuring-and-managing-logical-volumes
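Rather than hand-editing, the change can be applied with sed. A sketch operating on a temporary copy (the stock lvm.conf usually ships the option as a commented example; check your file first):

```shell
# Sketch: uncomment/force use_devicesfile=0 in a copy of lvm.conf.
conf=$(mktemp)
echo '    # use_devicesfile = 1' > "$conf"   # stand-in for the stock line

sed -i 's/^[[:space:]]*#\{0,1\}[[:space:]]*use_devicesfile[[:space:]]*=.*/use_devicesfile=0/' "$conf"
cat "$conf"
```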