http://www.datadisk.co.uk/html_docs/redhat/rh_lvm.htm
http://www.voleg.info/linux-mirror-system-disk.html
https://www.thegeekdiary.com/centos-rhel-7-how-to-create-and-remove-the-lvm-mirrors-using-lvconvert/
Create a new disk for Linux, rescan the SCSI bus, and create a partition of type 8e00 (Linux LVM) using gdisk:
[root@lnxa081 centos_75]# rescan-scsi-bus.sh -a
[root@lnxa081 centos_75]# gdisk /dev/mapper/mpathb
GPT fdisk (gdisk) version 0.8.6

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-83886046, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-83886046, default = 83886046) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Changed type of partition to 'Linux LVM'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/mapper/mpathb.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

[root@lnxa081 centos_75]# gdisk -l /dev/mapper/mpathb
GPT fdisk (gdisk) version 0.8.6

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/mapper/mpathb: 83886080 sectors, 40.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 6E544892-1B5E-40AA-8551-00D674313F21
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 83886046
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048        83886046   40.0 GiB    8E00  Linux LVM
Create the PV, VG, LV, and filesystem:
[root@lnxa081 centos_75]# ls /dev/mapper/mpathb*
/dev/mapper/mpathb
[root@lnxa081 centos_75]# partprobe -s
[root@lnxa081 centos_75]# ls /dev/mapper/mpathb*
/dev/mapper/mpathb  /dev/mapper/mpathb1
[root@lnxa081 centos_75]# pvcreate /dev/mapper/mpathb1
  Physical volume "/dev/mapper/mpathb1" successfully created.
[root@lnxa081 centos_75]# pvs
  PV                  VG     Fmt  Attr PSize   PFree
  /dev/mapper/mpathb1 datavg lvm2 a--  <40.00g <40.00g
To see which LV segments are placed on which PV (example output from a larger setup):

[root@lnxa081 ~]# pvs --segments -o +lv_name,lv_size
  PV                   VG             Fmt  Attr PSize PFree Start   SSize   LV              LSize
  /dev/mapper/mpathwp1 vg_container01 lvm2 a--  5.00t 5.00t      0 1310719                  0
  /dev/mapper/mpathxp1 vg_container01 lvm2 a--  5.00t 4.00t      0  262146 lv_container001 11.00t
  /dev/mapper/mpathxp1 vg_container01 lvm2 a--  5.00t 4.00t 262146 1048573                  0
  /dev/mapper/mpathyp1 vg_container01 lvm2 a--  5.00t 0          0 1310719 lv_container001 11.00t
  /dev/mapper/mpathzp1 vg_container01 lvm2 a--  5.00t 0          0 1310719 lv_container001 11.00t
[root@lnxa081 centos_75]# vgcreate datavg /dev/mapper/mpathb1
  Volume group "datavg" successfully created
[root@lnxa081 centos_75]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  datavg   1   0   0 wz--n- <40.00g <40.00g
[root@lnxa081 centos_75]# lvcreate -n postgres1lv -L20G datavg
  Logical volume "postgres1lv" created.
[root@lnxa081 centos_75]# lvs
  LV          VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  postgres1lv datavg -wi-a----- 20.00g
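If the LV should instead use all remaining free space in the VG, extent-based sizing works too (a sketch using the names from this example):

# lvcreate -n postgres1lv -l 100%FREE datavg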
The most commonly used filesystem types are XFS, ext4, and the newer Btrfs.
[root@lnxa081 centos_75]# mkfs.xfs /dev/mapper/datavg-postgres1lv
meta-data=/dev/mapper/datavg-postgres1lv isize=512    agcount=16, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@lnxa081 centos_75]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri May 18 07:49:08 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=fe7f5254-1302-4bd5-8e3e-1a6046ce2943 /          btrfs subvol=root 0 0
UUID=39651e3e-f8a3-4e46-ab7f-6b3ea7bb5991 /boot      xfs   defaults    0 0
UUID=fe7f5254-1302-4bd5-8e3e-1a6046ce2943 /home      btrfs subvol=home 0 0
UUID=58d659e7-72ff-4434-8803-7e4a5df78b91 swap       swap  defaults    0 0
/dev/mapper/datavg-postgres1lv /postgres1 xfs   defaults    0 0
[root@lnxa081 centos_75]# mkdir /postgres1
[root@lnxa081 centos_75]# mount /postgres1
[root@lnxa081 centos_75]# df -h | grep post
/dev/mapper/datavg-postgres1lv   20G   33M   20G   1% /postgres1
Extend an XFS filesystem using lvextend and xfs_growfs:
[root@lnxa081 centos_75]# lvextend -L +1G /dev/mapper/datavg-postgres1lv
  Size of logical volume datavg/postgres1lv changed from 20.00 GiB (5120 extents) to 21.00 GiB (5376 extents).
  Logical volume datavg/postgres1lv successfully resized.
[root@lnxa081 centos_75]# xfs_growfs -d /dev/mapper/datavg-postgres1lv
meta-data=/dev/mapper/datavg-postgres1lv isize=512    agcount=16, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 5242880 to 5505024
[root@lnxa081 centos_75]# df -h | grep post
/dev/mapper/datavg-postgres1lv   20G   33M   20G   1% /postgres1
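For an ext4 filesystem the analogous grow step would be resize2fs; alternatively, lvextend -r (--resizefs) resizes the filesystem together with the LV in a single step. A minimal sketch, assuming a hypothetical ext4 LV named datavg/data2lv:

# lvextend -r -L +1G /dev/datavg/data2lv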
Change the reserved blocks percentage on an ext filesystem to make more space available to users (the default value is 5%):
[root@rh-tsm ~]# tune2fs -l /dev/mapper/TSMDB-TSMDB02_lv
tune2fs 1.42.9 (28-Dec-2013)
Filesystem volume name:   <none>
Last mounted on:          /TSMDATABASE/DATA/db2
Filesystem UUID:          e52277be-f820-4906-b6a3-1b30e0e88934
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         unsigned_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              8126464
Block count:              32505856
Reserved block count:     1625292
Free blocks:              6970825
Free inodes:              8126353
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Reserved GDT blocks:      1024
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              8
RAID stripe width:        8
Flex block group size:    16
Filesystem created:       Mon Jan 23 14:14:45 2017
Last mount time:          Mon Jan 23 15:03:52 2017
Last write time:          Mon Jan 23 15:03:52 2017
Mount count:              2
Maximum mount count:      -1
Last checked:             Mon Jan 23 14:14:45 2017
Check interval:           0 (<none>)
Lifetime writes:          97 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      d81553db-6a9f-42de-978d-36c420a52b9f
Journal backup:           inode blocks
[root@rh-tsm ~]# tune2fs -m 3 /dev/mapper/TSMDB-TSMDB02_lv
tune2fs 1.42.9 (28-Dec-2013)
Setting reserved blocks percentage to 3% (975175 blocks)
To see the number of reserved blocks on a mounted XFS file system:
# xfs_io -x -c "resblks" /root/test
We can use this command to change the number of reserved blocks on a mounted XFS file system (replace <blocks> with an integer number):
# xfs_io -x -c "resblks <blocks>" /root/test
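For illustration, the output of the query form looks like this (the numbers here are made up):

# xfs_io -x -c "resblks" /root/test
reserved blocks = 466
available reserved blocks = 466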
To compute the percentage of reserved blocks, one must get the total number of blocks in the file system by multiplying the agcount and agsize numbers together. Those values are obtained via this command:
# xfs_info /root/test
meta-data=/dev/vda2              isize=256    agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The reserved block percentage would be: 100 * "reserved blocks" / (agsize * agcount)
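Worked example with the xfs_info output above: total blocks = agcount * agsize = 4 * 6400 = 25600; with the illustrative 466 reserved blocks from before, the percentage is 100 * 466 / 25600 ≈ 1.8%.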
Users may be surprised to see values less than 1%, whereas older file systems typically defaulted to 5%.
Use lvconvert with option -m1 to add a mirror copy and -m0 to remove a mirror copy.
Create a mirrored LV:
# lvconvert -m1 datavg/my_lv
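Optionally, one or more PVs can be listed after the LV to restrict where the new mirror image is allocated (a sketch using a device from the examples below):

# lvconvert -m1 datavg/my_lv /dev/sdb1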
Check the synchronization status:
# lvs -a -o name,copy_percent,devices datavg
  LV               Copy%  Devices
  my_lv            100.00 my_lv_rimage_0(0),my_lv_rimage_1(0)
  [my_lv_rimage_0]        /dev/sda1(1)
  [my_lv_rimage_1]        /dev/sdb1(1)
  [my_lv_rmeta_0]         /dev/sda1(0)
  [my_lv_rmeta_1]         /dev/sdb1(0)

# lvs --all --segments -o +devices
  LV                VG     Attr       #Str Type   SSize  Devices
  root              centos -wi-ao----    1 linear 17.47g /dev/sda2(512)
  swap              centos -wi-ao----    1 linear  2.00g /dev/sda2(0)
  testlv            datavg rwi-aor---    2 raid1   1.00g testlv_rimage_0(0),testlv_rimage_1(0)
  [testlv_rimage_0] datavg iwi-aor---    1 linear  1.00g /dev/sdb(0)
  [testlv_rimage_1] datavg iwi-aor---    1 linear  1.00g /dev/sdc(1)
  [testlv_rmeta_0]  datavg ewi-aor---    1 linear  4.00m /dev/sdb(256)
  [testlv_rmeta_1]  datavg ewi-aor---    1 linear  4.00m /dev/sdc(0)
Once synchronized, you can remove one copy; you have to specify the disk holding the copy to remove:
# lvconvert -m0 datavg/my_lv /dev/sda1
# lvs -a -o name,copy_percent,devices datavg
  LV     Copy%  Devices
  datalv        /dev/sdb1(1)
To repair a mirror after a disk failure:
# lvconvert --repair datavg/my_lv
To merge a snapshot into its origin volume:
# lvconvert --merge datavg/<snapshot_lv>
To create a snapshot from an existing logical volume, using another existing logical volume as its origin:
# lvconvert -s datavg/<origin_lv> datavg/<snapshot_lv>
# lvconvert --snapshot datavg/<origin_lv> datavg/<snapshot_lv>
To split off mirror images to form a new logical volume:
# lvconvert --splitmirrors <images> --name <new_lv_name> datavg/my_lv
Instead of the 'lvconvert' mirroring command, we can use the 'pvmove' command with the '-n' option (logical volume name) to move a logical volume's data from one device to another (internally, pvmove uses a temporary mirror).
# pvmove -n /dev/datavg/data1lv /dev/vdb1 /dev/sda1
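pvmove reports its progress while it runs; the output looks roughly like this (illustrative values):

  /dev/vdb1: Moved: 24.80%
  /dev/vdb1: Moved: 100.00%

A move still in progress can be cancelled with:

# pvmove --abort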
This command is one of the simplest ways to move data between two devices, but in real environments lvconvert mirroring is used more often than pvmove.
LVM commands (such as vgs, lvchange, etc.) may display messages like this when trying to list VGs or LVs:
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/dm-5 not /dev/sdd
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowerb not /dev/sde
Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sddlmab not /dev/sdf
With a default configuration, LVM commands scan for devices in /dev and check every resulting device for LVM metadata. This is caused by the default filter in /etc/lvm/lvm.conf, which is as follows:
filter = [ "a/.*/" ]
When using Device Mapper Multipath or other multipath software such as EMC PowerPath or Hitachi Dynamic Link Manager (HDLM), each path to a particular logical unit number (LUN) is registered as a different SCSI device, such as /dev/sdb or /dev/sdc. The multipath software will then create a new device that maps to those individual paths, such as /dev/mapper/mpath1 or /dev/mapper/mpatha for Device Mapper Multipath, /dev/emcpowera for EMC PowerPath, or /dev/sddlmab for Hitachi HDLM. Since each LUN has multiple device nodes in /dev that point to the same underlying data, they all contain the same LVM metadata and thus LVM commands will find the same metadata multiple times and report them as duplicates.
The filter you configure should include all devices that need to be checked for LVM metadata, such as the local hard drive with the root volume group on it and any multipathed devices. By rejecting the underlying paths to a multipath device (such as /dev/sdb, /dev/sdd, etc.) you can avoid these duplicate PV warnings, since each unique metadata area will only be found once on the multipath device itself. The following examples show filters that will avoid duplicate PV warnings due to multiple storage paths being available.
This filter accepts the second partition on the first hard drive (/dev/sda2) and any device-mapper multipath devices, while rejecting everything else.
filter = [ "a|/dev/sda2$|", "a|/dev/mapper/mpath.*|", "r|.*|" ]
This filter accepts all HP SmartArray controllers and any EMC PowerPath devices.
filter = [ "a|/dev/cciss/.*|", "a|/dev/emcpower.*|", "r|.*|" ]
You can also test a filter on the fly, without modifying the /etc/lvm/lvm.conf file, by adding the --config argument to the LVM command, as in the following example:
# lvs --config 'devices{ filter = [ "a|/dev/emcpower.*|", "r|.*|" ] }'
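When the filter is changed permanently in /etc/lvm/lvm.conf, it is usually recommended to rebuild the initramfs as well, so the same filter applies during early boot (RHEL/CentOS command; adjust for your distribution):

# dracut -f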
Problem with duplicate PVs (for example after presenting a cloned disk to the same host). vgimportclone changes the PV UUIDs and the VG name of the clone so it can be activated alongside the original:
# vgchange -an
# vgimportclone --basevgname tsmdb01vg /dev/md124vgs
# vgscan
# vgchange -ay
[root@hrstsm01 multipath]# pvcreate /dev/mapper/mpathc
  Device /dev/mapper/mpathc excluded by a filter.
[root@hrstsm01 multipath]# dd if=/dev/zero of=/dev/mapper/mpathc bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00945451 s, 54.2 kB/s
[root@hrstsm01 multipath]# pvcreate /dev/mapper/mpathc
  Device /dev/mapper/mpathc excluded by a filter.
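A likely reason the dd did not help here: it clears only the first 512 bytes, but a GPT keeps a backup header at the end of the device, so the device still looks partitioned and stays excluded. Assuming the device really may be wiped, wipefs removes all known signatures:

# wipefs -a /dev/mapper/mpathc
# pvcreate /dev/mapper/mpathc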
If /etc/fstab was modified, systemd needs to regenerate its mount units. First:
# systemctl daemon-reload
followed by:
# systemctl restart remote-fs.target
or
# systemctl restart local-fs.target