
RAID devices

RAID software installation

First of all, make sure the necessary tools are installed, namely:

  util-linux for the fdisk command;
  coreutils for the mknod command;
  e2fsprogs for the mke2fs command;
  mdadm for the mdadm command.

# dnf install util-linux coreutils e2fsprogs mdadm

Next, start the service that will monitor and manage the RAID:

# systemctl start mdmonitor.service
# systemctl enable mdmonitor.service
# systemctl status mdmonitor.service

2.2 Preparing the physical disks

Note: for disk capacities above 2 TB, use gdisk to handle GPT partitioning instead of MBR.
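For GPT disks, a minimal non-interactive sketch using parted (assuming the same example disk /dev/sdc; gdisk works interactively much like fdisk) could look like this:

# parted /dev/sdc mklabel gpt
# parted -a optimal /dev/sdc mkpart primary 0% 100%
# parted /dev/sdc set 1 raid on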

First, we check that the disks we want to use for the RAID array are present, using the fdisk command:

# fdisk -l

The output will look something like the following, apart of course from the disk details (sizes, blocks, cylinders, etc.):

Disk /dev/sdc: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdd: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System

Now we need to set them up by creating a partition on each. In the rest of this article we will refer to them as disks, even though they are strictly partitions, since each disk carries a single partition spanning its entire capacity.

# fdisk /dev/sdc

Since our disks are blank, we can start creating the partition right away. Otherwise, any existing partitions should be deleted first.

Once inside fdisk, the “m” command displays help for the available commands. To create a new partition, use the “n” command as shown below:

The number of cylinders for this disk is set to 30515.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n

Next, select a primary partition with the “p” command:

Command action

e   extended
p   primary partition (1-4)

We will create only one of the four available primary partitions, using the maximum capacity, as shown below:

Partition number (1-4): 1
First cylinder (1-30515, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-30515, default 30515):
Using default value 30515

We finish by setting the partition type with the “t” command, then entering the hex code “fd” (Linux raid autodetect) as shown below:

Command (m for help): t
Selected partition 1

Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

To make sure everything is correct before writing the changes, you can use the “p” command. Then simply write the new partition table with the “w” command:

Command (m for help): w

Note: this procedure must be carried out on every disk making up the RAID array.
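To avoid repeating the interactive steps on each disk, a non-interactive sketch using a one-line sfdisk script (assuming the array is built from /dev/sdc and /dev/sdd) could look like this; treat it as an illustration rather than part of the original procedure:

# for d in /dev/sdc /dev/sdd; do echo ',,fd' | sfdisk $d; done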

You can then verify the result with fdisk, in the same way as before:

# fdisk -l

Disk /dev/sdc: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            1       30515   245111706   fd  Linux raid autodetect

Disk /dev/sdd: 251.0 GB, 251000193024 bytes
255 heads, 63 sectors/track, 30515 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            1       30515   245111706   fd  Linux raid autodetect

2.3 Preparing the RAID device

This step creates the device (md0) that will host our RAID-1 array (for this example).

  To do so, we use the mknod command as follows (block device, major 9, minor 0 for md0):

# mknod /dev/md0 b 9 0

  Initializing the RAID-1 array:

# mdadm -C /dev/md0 -l 1 --raid-devices=2 /dev/sdc1 /dev/sdd1

The -C (create) option creates the array.

The -l (or --level) option sets the RAID level to create [0,1,0+1,5,10,…].

The --raid-devices option, as you will have guessed, sets the number of devices used.

If all goes well, the result will be the following:

mdadm: array /dev/md0 started.

  You can then check the result of the previous command with this one:

# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdd1[1] sdc1[0]
      245117312 blocks [2/2] [UU]
      [>....................]  resync =  3.9% (9777216/245117312) finish=85.8min speed=45680K/sec

As you can see, the array is in place and the disks are busy synchronizing on md0.

2.4 Setting up the RAID device

To do this, we create our filesystem as follows:

# mkfs.ext4 -c -j -L <disk_label> /dev/md0

mke2fs 1.43.4 (31-Jan-2017)
Filesystem label=dataraid
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
30654464 inodes, 61279328 blocks
3063966 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
1871 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
     32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
     4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
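If these periodic checks are not wanted, they can be disabled as the message suggests; a minimal sketch:

# tune2fs -c 0 -i 0 /dev/md0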

There we are: with our filesystem and its journal created, we can check the state of the device as follows:

# fdisk -l /dev/md0

Disk /dev/md0: 251.0 GB, 251000127488 bytes
2 heads, 4 sectors/track, 61279328 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

You will notice that there is no partition 1, 2, 3 or beyond, which is entirely normal: md0 is essentially an interface device (managed by mdadm) layered on top of the system partitions configured earlier on the disks.

3 Configuration

3.1 Automating the RAID device mount

Simply add an entry for the device to the filesystem table /etc/fstab with the following information:

  Device path;
  Desired mount point path;
  Filesystem type;
  Mount options;
  Dump flag and fsck (filesystem checker) pass order at boot.

/dev/md0 /mnt/RAID1 ext4 defaults 1 0
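A quick way to test the entry (assuming the /mnt/RAID1 mount point from the line above) is to create the directory and mount everything listed in fstab:

# mkdir -p /mnt/RAID1
# mount -a
# df -h /mnt/RAID1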

Afterwards, update our RAID configuration file as follows:

# mdadm --detail --scan > /etc/mdadm.conf

4 Managing an existing RAID system

In this section we will show how to manage and monitor a RAID system that is already in place. We assume that each disk holds only one partition; with multiple partitions, the commands must be applied to each of them when adding or removing a disk.

4.1 Basic administration

  RAID status information:

# cat /proc/mdstat

  List the RAID arrays present on the system:

# mdadm --detail --scan

  List detailed information about an array:

# mdadm --detail /dev/md0

4.2 Adding a disk

The new disk must have the same partition layout in order to be integrated into the RAID array. It must therefore have an equal or greater capacity (in which case the full capacity of the disk will not be used).

The partition table is dumped from one of the disks in the array and copied to the new disk using sfdisk with the -d (dump) option:

# sfdisk -d /dev/sdc > part_sdc.out
# sfdisk /dev/sdv < part_sdc.out

Then all that remains is to add the disk to the RAID array:

# mdadm /dev/md0 --add /dev/sdv1

In this tutorial the array is a two-disk RAID1, so if both disks are healthy the new disk will become a “spare”. Otherwise synchronization will start automatically, and its progress can be followed:

# cat /proc/mdstat
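To confirm how the new disk was integrated, the detailed view lists each device's role (active, spare, faulty); a minimal check:

# mdadm --detail /dev/md0 | grep -i spare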

4.3 Removing a disk

Before removing a disk, you must first tell the manager about the disk's state:

# mdadm /dev/md0 --set-faulty /dev/sdc1
# mdadm /dev/md0 --remove /dev/sdc1

4.4 Replacing a disk

Replacing a disk consists of performing the removal operation followed by the add operation, as sketched below.
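As a minimal sketch (assuming /dev/sdc has failed and /dev/sdd is still healthy), the two operations chain together like this:

# mdadm /dev/md0 --set-faulty /dev/sdc1
# mdadm /dev/md0 --remove /dev/sdc1
  (swap the physical disk, then copy the partition table onto the replacement)
# sfdisk -d /dev/sdd > part_sdd.out
# sfdisk /dev/sdc < part_sdd.out
# mdadm /dev/md0 --add /dev/sdc1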

4.5 Adding a “Spare” disk

A “spare” disk is a standby disk that allows a failed disk in the RAID array to be replaced without human intervention.

Simply add the disk as described in “Adding a disk” above, since in our case it is a third disk in a two-disk array.

We could also have added it when the RAID array was created:

# mdadm -C /dev/md0 -l 1 --raid-devices=2 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sdv1

4.6 Growing the array

It is possible to grow an array from 2 to 3 disks.

For example, to bring in the disk already present as a “spare”, simply redefine the number of active devices:

# mdadm --grow /dev/md0 --raid-devices=3

The change takes effect immediately and synchronization begins.

4.7 Retiring a disk from a RAID array

mdadm writes array metadata to a “superblock” on each disk. To examine the superblock information:

# mdadm --examine /dev/sdc1

Superblocks persist even after a disk has been removed from a RAID array. Erasing the superblock is therefore essential before reusing the disk in another RAID array:

# mdadm --zero-superblock /dev/sdc1

4.8 Getting notified by mail

You can be notified by mail of any problem with the RAID array by setting the MAILADDR option in /etc/mdadm.conf. At a minimum, however, a mail server must be running on the machine.

MAILADDR root@mydomain.tld

To test that it works without any risky manipulation:

# mdadm --monitor --scan --test --oneshot

RAID debug

Problem with mdadm or LVM after an upgrade

Change the following value in /etc/lvm/lvm.conf and reboot:

use_lvmetad = 0
[root@localhost ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg_asterix00" using metadata type lvm2
  Found volume group "fedora" using metadata type lvm2

Hardware RAID On Linux with LSI Cards (Broadcom)

storcli replaces the previous MegaRAID command-line tool (MegaCli).

List all adapters and RAID status:

# /opt/MegaRAID/storcli/storcli64 show all

Detailed info on adapter 0:

# storcli64 /c0 show all

Storcli virtual drive command examples

The Storage Command Line Tool (StorCLI) is the command-line management software for the MegaRAID® product line. Its drive commands provide information and perform actions related to physical drives, but in most cases we work with virtual/logical drives rather than physical ones. In this article, I'll show you StorCLI virtual drive command examples.
Storcli Virtual Drive Add Commands

The Storage Command Line Tool supports the following commands to add virtual drives:

storcli /cx add vd type=raid[0|1|5|6|10|50|60][Size=<VD1_Sz>,<VD2_Sz>,..|*all]
[name=<VDNAME1>,..] drives=e:s|e:s-x|e:s-x,y;e:s-x,y,z [PDperArray=x][SED]
[pdcache=on|off|*default][pi] [DimmerSwitch(ds)=default|automatic(auto)|
*none|maximum(max)|MaximumWithoutCaching(maxnocache)][cachevd]
[wt|*wb] [nora|*ra] [*direct|cached] [CachedBadBBU|*NoCachedBadBBU]
[Strip=<8|16|32|64|128|256|1024>] [AfterVd=X] [Spares = [e:]s|[e:]s-x|[e:]s-x,y]
[force]

That is a lot of options, but only a few of them matter in most cases. Here they are:

drives:

Valid enclosure number and valid slot numbers for the enclosure. In e:s|e:s-x|e:s-x,y:

e specifies the enclosure ID.
s represents the slot in the enclosure.
e:s-x is the range convention used to represent slots s to x in the enclosure e.
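For instance (the enclosure and slot IDs here are only illustrative):

drives=252:0-3      selects slots 0 through 3 in enclosure 252
drives=252:0-3,6    additionally selects slot 6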

pdperarray

0 to 15. Specifies the number of physical drives per array

pdcache

on|off|default: Enables or disables the PD (physical drive) cache.

direct|cached

cached: Cached I/O.
direct: Direct I/O.

Sets the logical drive cache policy. Direct I/O is the default.

wt|wb

wt: Write through.
wb: Write back.

nora|ra

ra: Read ahead.
nora: No read ahead.

strip

Sets the strip size for the RAID configuration, in KB: 8, 16, 32, 64, 128, 256, 512, 1024.

Example:

storcli /c0 add vd type=raid10 size=200gb,300gb,400gb names=tmp1,tmp2,tmp3
drives=252:2-4,5-7 pdperarray=2

The command above creates three RAID10 virtual drives named tmp1, tmp2 and tmp3, of 200 GB, 300 GB and 400 GB respectively, using drives 2 through 7 in enclosure 252.
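To verify the result, the controller-wide summary of virtual drives can be listed (a minimal check):

# storcli64 /c0/vall show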

You can also create a CacheCade array using the following command:

storcli /cx add VD cachecade|cc Type = raid[0,1,10] drives =
[e:]s|[e:]s-x|[e:]s-x,y [WT|WB] [assignvds = 0,1,2]

Storcli Virtual Drive Delete Commands

Deleting a Virtual drive is easier than creating one. The Storage Command Line Tool supports the following virtual drive delete commands:

NOTE If the virtual drive has user data, you must use the force option to delete the virtual drive.
A virtual drive with a valid master boot record (MBR) and a partition table is considered to contain user data.

If you delete a virtual drive with a valid MBR without erasing the data, and then create a new virtual drive using the
same set of physical drives and the same RAID level as the deleted virtual drive, the old unerased MBR still exists at block 0 of the new virtual drive, which makes it a virtual drive with valid user data. Therefore, you must provide the force option to delete this newly created virtual drive.
storcli /cx/vx|vall del

This command deletes a particular virtual drive or, when the vall option is used, all the virtual drives on the controller.
Example:

storcli64 /c0/v1 del

storcli /cx/vx|vall del cachecade

This command deletes a specific CacheCade virtual drive on a controller, or all the CacheCade configuration for
a controller.
Example:

storcli64 /c0/vall del cachecade

storcli /cx/vx|vall del force

This command deletes a virtual drive only after the cache flush is completed. With the force option, the command
deletes a virtual drive without waiting for the cache flush to complete.
Example:

storcli64 /c0/v1 del force

Storcli Virtual Drive Show Commands
storcli /cx/vx show

This command shows the summary of the virtual drive information.
Input example:

# storcli64 /c0/v0 show
Controller = 0
Status = Success
Description = None

Virtual Drives :
==============
----------------------------------------------------------------
DG/VD TYPE  State Access Consist Cache Cac sCC      Size Name   
----------------------------------------------------------------
0/0   RAID6 Optl  RW     Yes     RWTD  -   ON  65.491 TB vdisk0 
----------------------------------------------------------------
Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency

The command above shows that virtual drive 0 is a RAID6 array in optimal state:

RW=Read Write
RWTD : R=Read Ahead Always, WT=WriteThrough, D=Direct IO
sCC=Scheduled Check Consistency

storcli /cx/vx show all

This command shows all virtual drive information, including the virtual drive itself, the physical drives used by it, and the virtual drive properties. It gives more detail than the plain show command.
Example:

# storcli64 /c0/v0 show all
Controller = 0
Status = Success
Description = None

/c0/v0 :
======
----------------------------------------------------------------
DG/VD TYPE  State Access Consist Cache Cac sCC      Size Name   
----------------------------------------------------------------
0/0   RAID6 Optl  RW     Yes     RWTD  -   ON  65.491 TB vdisk0 
----------------------------------------------------------------
Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency

PDs for VD 0 :
============
---------------------------------------------------------------------------
EID:Slt DID State DG     Size Intf Med SED PI SeSz Model                Sp 
---------------------------------------------------------------------------
54:0     58 Onln   0 7.276 TB SATA HDD N   N  512B HGST HUH7280xxxxxx U  
...
54:10    57 Onln   0 7.276 TB SATA HDD N   N  512B HGST HUH7280xxxxxx U  
---------------------------------------------------------------------------

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded

VD0 Properties :
==============
Strip Size = 256 KB
Number of Blocks = 140642942976
VD has Emulated PD = Yes
Span Depth = 1
Number of Drives Per Span = 11
Write Cache(initial setting) = WriteThrough
Disk Cache Policy = Disk's Default
Encryption = None
Data Protection = Disabled
Active Operations = None
Exposed to OS = Yes
Creation Date = 04-07-2017
Creation Time = 08:54:48 PM
Emulation type = default
Cachebypass size = Cachebypass-64k
Cachebypass Mode = Cachebypass Intelligent
Is LD Ready for OS Requests = Yes
SCSI NAA Id = 600605b00ba7a6c020eebd18d7734837

Storcli Virtual Drive Set Commands

The Storage Command Line Tool supports the following commands to change virtual drive properties:
storcli /cx/vx set accesspolicy=<rw|ro|blocked|rmvblkd>

This command sets the access policy on a virtual drive to read write, read only, blocked, or rmvblkd (remove blocked).
Example:
storcli64 /c0/v0 set accesspolicy=rw
storcli /cx/vx set cachedbadbbu=<on|off>

This command enables or disables use of the write cache for the virtual drive when the BBU is bad.
Example:
storcli64 /c0/v0 set cachedbadbbu=on
storcli /cx/vx set iopolicy=<cached|direct>

This command sets the I/O policy on a virtual drive to cached I/O or direct I/O.
Example:
storcli64 /c0/v0 set iopolicy=cached
storcli /cx/vx set name=<namestring>

This command names a virtual drive. The name is restricted to 15 characters.
Example:
storcli64 /c1/v0 set name=testdrive1
storcli /cx/vx set pdcache=<on|off|default>

This command sets the current disk cache policy on a virtual drive to on, off, or default setting.
Example:
storcli64 /c0/v0 set pdcache=on
storcli /cx/vx set rdcache=<ra|nora>

This command sets the read cache policy on a virtual drive to read ahead or no read ahead.
Example:
storcli64 /c0/v0 set rdcache=nora
storcli /cx/vx set security

This command secures the virtual drive.
Example:
storcli64 /c0/v0 set security
storcli /cx/vx|vall set ssdcaching=<on|off>

This command assigns CacheCade virtual drives. If ssdcaching=off, the CacheCade virtual drive is removed.
Example:
storcli64 /c0/v0 set ssdcaching=on
storcli /cx/vx set wrcache=<wt|wb|awb>

This command sets the write cache policy on a virtual drive to write back, write through, or always write back.
Example:
storcli64 /c0/v0 set wrcache=wt
Storcli Virtual Drive Preserved cache Commands

If a virtual drive becomes offline or is deleted because of missing physical disks, the controller preserves the dirty
cache from the virtual disk. The Storage Command Line Tool supports the following commands for preserved cache:
storcli /cx/vx delete preservedcache

This command deletes the preserved cache for a particular virtual drive on the controller in missing state. Use the
force option to delete the preserved cache of a virtual drive in offline state.
Example:

storcli /c0/v1 delete preservedcache

storcli /cx show preservedCache

This command shows the virtual drive that has preserved cache and whether the virtual drive is offline or missing.
Example:

storcli /c0 show preservedCache

Storcli Virtual Drive Initialization Commands

You can use the following commands to initialize virtual drives:

Note: If the virtual drive has user data, you must use the force option to initialize the virtual drive.
A virtual drive with a valid MBR and partition table is considered to contain user data.
storcli /cx/vx show init

This command shows the initialization progress of a virtual drive in percentage.
Example:

storcli64 /c0/v1 show init

storcli /cx/vx start init [full]

This command starts the initialization of a virtual drive. The default initialization type is fast initialization. If the full
option is specified, full initialization of the virtual drive starts.
Example:

storcli64 /c0/v0 start init full

storcli /cx/vx stop init

This command stops the initialization of a virtual drive. A stopped initialization cannot be resumed.
Example:

storcli64 /c0/v0 stop init

Storcli Virtual Drive Erase Commands

Use the command below to erase virtual drives:

NOTE If the virtual drive has user data, you must use the force option to erase the virtual drive.
A virtual drive with a valid MBR and partition table is considered to contain user data.
storcli /cx/vx erase [force]

This command erases the data on the virtual drive. You can use the force option as a confirmation to erase the data on the drive and the security information.
Example:

storcli /c0/v0 erase [force]

Storcli Virtual Drive Migration Commands

NOTE The virtual drive migration commands are not supported in Embedded MegaRAID.

StorCLI also supports virtual drive migration (reconstruction) from one RAID level to another:
storcli /cx/vx show migrate

This command shows the progress of the virtual drive migrate operation in percentage.
Example:

storcli /c0/v0 show migrate

storcli /cx/vx start migrate <type=raidlevel> [option=<add|remove> disk=<e1/s1,e2/s2 ...> ]

This command starts the reconstruction on a virtual drive to the specified RAID level by adding or removing disks
from the existing virtual drive. You can use the following options with the start migrate command:

type=raid[0|1|5|6]
The RAID level to which the virtual drive must be migrated.

option=<add|remove>
add: Adds disks to the virtual drive and starts reconstruction.
remove: Removes disks from the virtual drive and starts reconstruction.

disk=<e1:s1,e2:s2, …>
The enclosure number and the slot number of the disks to be added to the virtual drive.

Example:

storcli64 /c0/v3 start migrate type=r5 option=add disk=e5:s2,e5:s3

 

Note: not all RAID levels can be migrated to each other; here is the supported matrix:

Initial RAID level    Migrated RAID level
RAID 0                RAID 1
RAID 0                RAID 5
RAID 0                RAID 6
RAID 1                RAID 0
RAID 1                RAID 5
RAID 1                RAID 6
RAID 5                RAID 0
RAID 5                RAID 6
RAID 6                RAID 0
RAID 6                RAID 5

Storcli Virtual Drive Consistency Check Commands

MegaRAID virtual drive consistency checks:

Note: If enclosures are used to connect the physical drives to the controller, specify the IDs in the command.
storcli /cx/vx pause cc

This command pauses an ongoing consistency check process. You can resume the consistency check at a later time.
You can run this command only on a virtual drive that has a consistency check operation running.
Example:

storcli64 /c0/v4 pause cc

storcli /cx/vx resume cc

This command resumes a suspended consistency check operation. You can run this command on a virtual drive that has a paused consistency check operation.

NOTE You cannot resume a stopped consistency check process.
Example:

storcli64 /c0/v4 resume cc

storcli /cx/vx show cc

This command shows the progress of the consistency check operation in percentage.
Example:

storcli64 /c0/v5 show cc

storcli /cx/vx start cc [force]

This command starts a consistency check operation for a virtual drive. Typically, a consistency check operation is run on an initialized virtual drive. Use the force option to run a consistency check on an uninitialized drive.
Example:

storcli /c0/v4 start cc

storcli /cx/vx stop cc

This command stops a consistency check operation. You can run this command only for a virtual drive that has a
consistency check operation running.
Example:

storcli64 /c0/v4 stop cc

Storcli Virtual Drive Background Initialization Commands

The Storage Command Line Tool supports the following commands for background initialization:
storcli /cx/vx resume bgi

This command resumes a suspended background initialization operation.
Example:

storcli64 /c0/v0 resume bgi

storcli /cx/vx set autobgi=<on|off>

This command sets the auto background initialization setting for a virtual drive to on or off.
Example:

storcli64 /c0/v0 set autobgi=on

storcli /cx/vx show autobgi

This command shows the background initialization setting for a virtual drive.
Example:

storcli64 /c0/v0 show autobgi

storcli /cx/vx show bgi

This command shows the background initialization progress on the specified virtual drive in percentage.
Example:

storcli64 /c0/v0 show bgi

storcli /cx/vx stop bgi

This command stops a background initialization operation. You can run this command only for a virtual drive that is
currently initialized.
Example:

storcli64 /c0/v4 stop bgi

storcli /cx/vx suspend bgi

This command suspends a background initialization operation. You can run this command only for a virtual drive that is currently initialized.
Example:

storcli64 /c0/v4 suspend bgi

Storcli Virtual Drive Expansion Commands

The Storage Command Line Tool supports the following commands for virtual drive expansion:
storcli /cx/vx expand size=<value> [expandarray]

This command expands the virtual drive within the existing array, or after you replace the drives with drives larger than the size of the existing array. The value of the expand size is in GB. If the expandarray option is specified, the existing array is expanded; otherwise, the virtual drive is expanded.

Example:

storcli64 /c0/v0 expand size=1000000

storcli /cx/vx|vall show expansion

This command shows the expansion information on the virtual drive with and without array expansion.
Example:

storcli64 /c0/v0 show expansion

 

See also:

    Storcli BBU(Backup Battery Unit) commands examples
    Storcli controller commands and examples, show and set properties
    Storcli drive group commands and examples
    Storcli Enclosure commands examples
    Storcli foreign configurations commands and examples
    Storcli Logging Commands examples
    Storcli command reference
    Storcli dimmerswitch commands examples
    Storcli PHY commands examples
    Storcli system commands and examples
    StorCli useful commands with examples
