Some tips for using the SVC and Storwize V7000 Command Line Interface

The SVC and Storwize V7000 offer a command line interface that you access via SSH. You start your favorite SSH client (such as PuTTY or MindTerm) and then log on as admin or as your own user ID. Right now you need to generate a private/public key pair to do this, although with release 6.3 (which will be available November 2011), you will be able to log on via SSH with just a user ID and password.

Having logged on, there are five categories of commands you can issue:

svcinfo:    informational commands that let you examine your configuration.
svctask:    task commands that let you change your configuration.
satask:     service commands that are only used in specific circumstances.
sainfo:     service information commands.
svcconfig:  commands to back up, clear, recover, or restore the configuration.

There are several CLI usability features that I routinely find users are not aware of, so I thought I would share some of them here:

Hidden commands

Change IP on cluster and service

SYS_2076:ifscluster-svt2:superuser>chsystemip -clusterip 9.71.16.63 -gw 9.71.16.1 -mask 255.255.255.0 -port 1

If you are moving to a different subnet, swap the old cable connection for the new one and run the lssystemip command to confirm. For example:

SYS_2076:ifscluster-svt2:superuser>lssystemip
cluster_id cluster_name location port_id IP_address subnet_mask gateway IP_address_6 prefix_6 gateway_6
00000200A9C0089E ifscluster-svt2 local 1 9.71.16.63 255.255.255.0 9.71.16.1
00000200A9C0089E ifscluster-svt2 local 2

In the management GUI, click General→Settings→Networks→Service IP, or use the satask chserviceip command to change the service IP address for node canister 1 and its associated subnet mask and default gateway. For example:

SYS_2076:ifscluster-svt2:superuser>sainfo lsservicenodes
panel_name cluster_id cluster_name node_id node_name relation node_status error_data
01-1 00000200A9C0089E ifscluster-svt2 1 node1 local Active
01-2 00000200A9C0089E ifscluster-svt2 2 node2 partner Active
satask chserviceip -serviceip 9.71.16.68 -gw 9.71.16.1 -mask 255.255.255.0

Use the satask chserviceip command to change the service IP address for node canister 2 and its associated subnet mask and default gateway. For example:

SYS_2076:ifscluster-svt2:superuser>satask chserviceip -serviceip 9.71.16.69 -gw 9.71.16.1 -mask 255.255.255.0 01-2

Stop/Reboot a node canister

satask stopnode {-poweroff | -reboot | -warmstart} [panel_name]

Adding nodes to a cluster

Now that you have defined a basic cluster, you will want to add more nodes to it. Start by listing all the unattached nodes:

IBM_2145:SVC-CL-M:admin> svcinfo lsnodecandidate

This lists all SVC nodes that have not yet been assigned to a cluster. The next task is to add one of these nodes to your cluster. You identify it by either its WWNN or panel name, which is straightforward, except that they are called id and node-cover-name in the lsnodecandidate listing.

So you add a new node with either

IBM_2145:SVC-CL-M:admin> svctask addnode -panelname node-cover-name -iogrp N

or

IBM_2145:SVC-CL-M:admin> svctask addnode -wwnodename id -iogrp N

The N in -iogrp is an integer between 0 and 3. Remember that an I/O group consists of two, and only two, nodes, so the first node you add will go to io_grp0 and be paired with the original node. Add extra nodes in pairs.
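Since nodes go into I/O groups in pairs, a small wrapper can assign candidates alternately. This is a local dry-run sketch only: the WWNNs are hypothetical sample data (in practice you would capture them from lsnodecandidate), and the script just prints the addnode commands it would run.

```shell
#!/bin/sh
# Dry run: pair up candidate nodes into successive I/O groups.
# The candidate list below is hypothetical sample data; in practice,
# capture it from: svcinfo lsnodecandidate
candidates="5005076801000001
5005076801000002
5005076801000003
5005076801000004"

i=0
for wwn in $candidates; do
    iogrp=$((i / 2))          # two nodes per I/O group
    echo "svctask addnode -wwnodename $wwn -iogrp $iogrp"
    i=$((i + 1))
done
```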

Managing mDisks and MDGs

An mDisk corresponds to a LUN as presented by the controller of the attached storage subsystem, so you don't create mDisks, you detect them. To do this, use the command

IBM_2145:SVC-CL-M:admin> svctask detectmdisk

then follow with

IBM_2145:SVC-CL-M:admin> svcinfo lsmdisk -delim| -nohdr

to see which disks have been picked up. The -delim| parameter separates the fields in the command output with a | rather than runs of spaces, so the output will hopefully not wrap across lines, and -nohdr suppresses the header. Next, if you want a data reduction pool (with deduplication), define the managed disk group like this:

IBM_2145:SVC-CL-M:admin> svctask  mkmdiskgrp -name MDGDS4FC001 -ext 1024 -datareduction yes -mdisk mdisk2

Alternatively, define a standard managed disk group with the command

IBM_2145:SVC-CL-M:admin> svctask  mkmdiskgrp -name MDGDS4FC001 -ext 64 -mdisk mdisk2

In this case, I'm creating an MDG called MDGDS4FC001 with a 64 MB extent size (the default is 16 MB) and adding an mdisk called mdisk2 to it. You can add more mdisks by listing them colon-separated in the command above, or you can add them to an existing MDG with the command

IBM_2145:SVC-CL-M:admin> svctask addmdisk -mdisk mdisk3 MDGDS4FC001
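The extent size matters because it bounds the total capacity a system can manage: capacity = extent size × maximum number of extents. The 2^22 (4,194,304) extent ceiling used below is my assumption based on IBM's sizing guidance; verify it for your code level. A quick local calculation:

```shell
#!/bin/sh
# Rule of thumb: max manageable capacity = extent_size * max_extents.
# max_extents of 2^22 is an assumption; check IBM's documentation for
# your code level before relying on it.
max_extents=$((4 * 1024 * 1024))

for ext_mb in 16 64 1024; do
    cap_tb=$((ext_mb * max_extents / 1024 / 1024))
    echo "extent ${ext_mb}MB -> max capacity ${cap_tb}TB"
done
```

So the 16 MB default caps the system at 64 TB, which is why larger extents are often chosen up front.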

To check out the status of a managed disk group use the command

IBM_2145:SVC-CL-M:admin> svcinfo lsmdiskgrp -delim| 
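The pipe-delimited output of these ls commands is convenient to post-process. A locally runnable sketch, using hypothetical lsmdisk sample lines, that picks out any mdisk that is not online:

```shell
#!/bin/sh
# Hypothetical sample of "svcinfo lsmdisk -delim| -nohdr" output
# (fields: id|name|status|mode|mdisk_grp_id|mdisk_grp_name|capacity).
sample='0|mdisk0|online|managed|0|MDGDS4FC001|100.0GB
1|mdisk1|online|unmanaged|||100.0GB
2|mdisk2|degraded|managed|0|MDGDS4FC001|200.0GB'

# Print the name of every mdisk whose status is not "online".
offline=$(printf '%s\n' "$sample" | awk -F'|' '$3 != "online" {print $2}')
echo "$offline"
```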

On the Storwize V7000, V5000, and V3700, you have to create the mdisk (array) yourself and add it to a pool (mdiskgroup).

Create a DRAID6 array (the recommended RAID level; it can be extended later):

IBM_2076:V7000:admin> mkdistributedarray -driveclass 1 -drivecount 7 -level raid6 -rebuildareas 1 -strip 256 -stripewidth 6 Pool0

Create a RAID5 array:

IBM_2076:V7000:admin> mkarray -level raid5 -drive 73:90:77:85:80:75:87 -strip 256 POOL01

Vdisk and mirroring management

Create a basic vdisk with 10GB:

IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp io_grp0 -node 1 -size 10 -unit gb -name vol1

Create a mirrored vdisk with 5120 MB:

IBM_2145:ITSO_SVC1:admin> mkvdisk -copies 2 -iogrp io_grp0 -mdiskgrp STGPool_DS3500-2:STGPool_DS5000-1 -name test_mirror -size 5120

Create 20 thin-provisioned volumes:

# Create 20 x 256GB volumes for MYLPAR in pool POOL_RAID5
IBM_2145:ITSO_SVC1:admin> for x in {1..20}
do
num=`printf "%02d" $x`
mkvdisk -autoexpand -cache readwrite -copies 1 -grainsize 256 -iogrp io_grp0 -mdiskgrp POOL_RAID5 -name MYLPAR_L$num -rsize 2% -size 256 -unit gb -vtype striped -warning 80%
done
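The zero-padded naming in the loop above can be checked locally before running anything against the cluster. This dry run only prints the commands (with most parameters elided) rather than executing them:

```shell
#!/bin/sh
# Dry run: generate the 20 zero-padded volume names the loop above
# would create, printing the mkvdisk commands instead of running them.
names=""
for x in $(seq 1 20); do
    num=$(printf "%02d" "$x")
    names="$names MYLPAR_L$num"
    echo "mkvdisk -mdiskgrp POOL_RAID5 -name MYLPAR_L$num -size 256 -unit gb"
done
```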

Expand a vdisk:

IBM_2145:ITSO_SVC1:admin> expandvdisksize -size 5120 test_vol1

Add a mirror copy of the disk in another pool (mdiskgroup); note that a vdisk can have a maximum of two copies, primary and secondary:

IBM_2145:ITSO_SVC1:admin>addvdiskcopy -mdiskgrp STGPool_DS5000-1 -vtype striped  -mirrorwritepriority redundancy -unit gb vol1
IBM_2145:ITSO_SVC1:admin>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
23 vol1 1 48 120507203918

The -mirrorwritepriority redundancy parameter destages write I/O to both copies at the same time; for better performance you can keep the default, latency.

List both copies and their state:

IBM_2145:ITSO_SVC1:admin>lsvdisk 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 1.00GB
type many
formatted no
mdisk_id many
mdisk_name many
.....
filesystem
mirror_write_priority redundancy

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
...
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1

Remove the copy with ID 1:

IBM_2145:ITSO_SVC2:superuser>rmvdiskcopy -copy 1 vol1

Split the mirror into two independent vdisks, naming the new one vol2split:

IBM_2145:ITSO_SVC1:admin>splitvdiskcopy -copy 1 -iogrp 0 -name vol2split vol1

In case of trouble, you can repair a copy:

repairvdiskcopy 

Disk Migration using image mode

If you want to take over a LUN from another storage array, you can present it transparently with external virtualization in image mode:

IBM_2145:SVC-CL-M:admin> svctask mkmdiskgrp -ext 1024 -name MigrationPool_1024
IBM_2145:SVC-CL-M:admin> svctask mkvdisk -iogrp io_grp0 -mdisk mdisk0 -mdiskgrp MigrationPool_1024 -name controller0_0000000000000000 -syncrate 80 -vtype image

Managing Hosts

Next, you need to define your hosts to the SVC, but before you can do that, you need to know the WWPNs of their HBAs. Exactly how you find these depends on the host type, so I'm going to assume that you know how to do this; the SVC Host Attachment Guide will help you here.

Start by listing out all the unattached HBA ports that are zoned to the SVC.

IBM_2145:SVC-CL-M:admin> svcinfo lshbaportcandidate

The port names are 16 hexadecimal digits, for example:

10000000DE1A34F2 10000000DF1045A2

Check that you can see the WWPNs that you are expecting, then define the host (use -force if the host has multiple WWPNs):

IBM_2145:SVC-CL-M:admin> svctask mkhost -force -name P1201234 -fcwwpn "210100E08B251EE6:210100F08C262EE7"

Here I'm creating a host definition for server P1201234. You can supply both WWPNs in the command above by separating them with a colon, as shown, or create the host with one WWPN and add the second later using

IBM_2145:SVC-CL-M:admin> svctask addhostport -fcwwpn 10000000DF1045A2 P1201234
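Since -fcwwpn takes a colon-separated list, a tiny helper can build that string from a plain whitespace list of WWPNs. This is a local sketch with the sample WWPNs from above; it only prints the command it would run:

```shell
#!/bin/sh
# Join a whitespace-separated list of WWPNs with colons, as expected
# by mkhost -fcwwpn. The WWPNs below are sample values.
wwpns="10000000DE1A34F2 10000000DF1045A2"

joined=$(echo "$wwpns" | tr ' ' ':')
echo "svctask mkhost -force -name P1201234 -fcwwpn \"$joined\""
```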

You can check the status of all hosts, or of an individual host, with these commands

IBM_2145:SVC-CL-M:admin> svcinfo lshost 
IBM_2145:SVC-CL-M:admin> svcinfo lshost P1201234

Finally, map the virtual disk to the host

IBM_2145:SVC-CL-M:admin> svctask mkvdiskhostmap -host P1201234 VDAIX000

Working with disk replication

List all consistency groups; the volumes are synchronized from site 1 to site 2:

root@nim - /root > ssh admin@svc2 lsrcconsistgrp -delim : | awk 'NR>1'
0:prod1ws1:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:3:metro::
1:prod1ws2:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:3:metro::
2:prod1as1:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:7:metro::
3:prod1as2:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:7:metro::
4:prod1ds1:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:22:metro::
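For monitoring, the colon-delimited listing above is easy to filter for any group that is not consistent_synchronized. A locally runnable sketch (the second sample line has its state deliberately altered so the filter has something to catch; the state is the eighth field, which is my reading of the output above):

```shell
#!/bin/sh
# Flag any remote-copy consistency group that is not in the
# consistent_synchronized state. Sample lines based on the
# lsrcconsistgrp -delim : output above, one state altered.
sample='0:prod1ws1:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:3:metro::
1:prod1ws2:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_stopped:3:metro::'

bad=$(printf '%s\n' "$sample" | awk -F: '$8 != "consistent_synchronized" {print $2}')
echo "$bad"
```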

Stop replication on consistency group prod1ws1 and allow write access to the disks on the remote site:

root@nim - /root > ssh admin@svc2 stoprcconsistgrp -access prod1ws1

Reverse the replication from site 2 to site 1 (failover):

root@nim - /root > ssh admin@svc1 startrcconsistgrp -force -primary aux prod1ws1

Switch back again (failback), site 1 to site 2:

root@nim - /root > ssh admin@svc1 switchrcconsistgrp -primary master prod1ws1

Advanced functions on SVC and Storwize

First, check your license (lslicense) and update it according to what you purchased. External virtualization can be used without a license for a 45-day migration period.

Quorum best practice for SVC

Use one quorum disk per site, and one or two IP quorums on a third site:

IBM_2145:SVC:report>lsquorum
quorum_index status id  name           controller_id controller_name active object_type override site_id site_name
0            online 279 v7ks1-vdisk1   17            v7ks1-node1     no     mdisk       no       1       SITE1
1            online 250 v7ks1-vdisk1   15            v7ks2-node1     no     mdisk       no       2       SITE2
3            online                                                  yes    device      no               srvquorum1.xxx.lu/10.10.10.10
4            online                                                  no     device      no               srvquorum2.xxx.lu/10.10.10.11

Change system layer (for replication or external virtualization)

The system layer is an exclusive option: a Storwize system can provide storage for external virtualization or participate in replication with an SVC, but not both at once. SVC always uses a special appliance layer (equivalent to replication). The default layer for Storwize is storage.

admin@v7000 >chsystem -layer replication | storage

Note: If you specify -layer you must specify either replication or storage. This option can be changed if no other systems are visible on the fabric, and no system partnerships are defined.

For external virtualization, the Storwize providing the disks must be in the storage layer and the virtualizing system in the replication layer.
For Metro/Global Mirror, both Storwize systems must have the same layer (storage or replication).
If you set the replication layer, the system can no longer provide storage to an SVC or to a Storwize V7000 or V5000.

Error on a Storwize V3700, V5000, or V7000 where external virtualization cannot be used because the Storwize to be virtualized is seen not as a controller but as a node, host, or unknown → change its system layer to storage.

Error using Metro Mirror (PPRC) or Global Mirror replication → change the system layer so that all devices have the same layer.

Enable SVC NPIV

Here we talk about FC port virtualization on the SVC itself; NPIV on the host side has been supported for a long time.

Enabling NPIV target port functionality on a new cluster

After creating the cluster, but before starting to add hosts, issue

chiogrp -fctargetportmode transitional <IO group>

on each populated I/O group. Once that is done, issue

chiogrp -fctargetportmode enabled <IO group>

on each populated I/O group. If you are using WWPN-based zoning, issue the following command to view the set of FC ports.

lstargetportfc 

All ports that show virtualized=yes need to be zoned to the hosts as described above.
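Picking those ports out of the listing can be scripted. The sample below is hypothetical and the column positions are an assumption that may differ by code level, so match on the header of your own lstargetportfc output if unsure:

```shell
#!/bin/sh
# Pick out the virtualized (NPIV) target port WWPNs from sample
# lstargetportfc-style output. Column layout is assumed; verify it
# against the real header on your code level.
sample='id|WWPN|port_id|owning_node_id|host_io_permitted|virtualized
1|500507680140A288|1|1|yes|no
2|500507680144A288|1|1|yes|yes'

npiv=$(printf '%s\n' "$sample" | awk -F'|' 'NR>1 && $6 == "yes" {print $2}')
echo "$npiv"
```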

Create a metro / global mirror partnership using FC

This procedure is the same for Metro Mirror (synchronous replication) and Global Mirror (asynchronous); a special note for Global Mirror appears at the end.

Example with PPRC between SVC and V7000

First create the appropriate SAN zoning between the SVC and the V7000.

Check the replication candidates on each array or SVC:

IBM_2145:ITSO_SVC1:admin>lspartnershipcandidate
id configured name
00000200A0C006B2  no ITSO-Storwize-V7000-2
ITSO-Storwize-V7000-2:admin>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC1

As of code level 6.3, you can create a partnership between an SVC system and a Storwize V7000 system if you first change the layer setting on the Storwize V7000 from storage to replication with the chsystem -layer command. This option can only be used if no other systems are visible on the fabric and no system partnerships are defined. SVC systems are always in the appliance layer.

Pre-verification of system configuration

IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
ITSO-Storwize-V7000-2:admin>lspartnership
id name location partnership bandwidth
00000200A0C006B2 ITSO-Storwize-V7000-2 local

Check the system layer on all systems (SVC and V7000)

IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
bandwidth
....
id_alias 000002006BE04FC4
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
auth_service_type tip
relationship_bandwidth_limit 25
has_nas_key no
layer appliance

ITSO-Storwize-V7000-2:admin> lssystem
id 00000200A0C006B2
name ITSO-Storwize-V7000-2
location local
partnership 
...
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
...
relationship_bandwidth_limit 25
...
has_nas_key no
layer storage
rc_buffer_size 48
...
local_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology standard
topology_status 
rc_auth_method none

As you can see, the layer on the V7000 has to be changed to replication (after which the V7000 can no longer provide disks to the SVC; only replication will be allowed):

chsystem -layer replication

Establishing the partnership

In the example below, a partnership is created between ITSO_SVC1 and ITSO-Storwize-V7000-2, specifying the bandwidth to be used for the background copy. To check the status of the newly created partnership, issue the lspartnership command. Notice that the new partnership is only partially configured; it remains partially configured until the Metro Mirror partnership is also created on the other system.

Creating the partnership from ITSO_SVC1 to ITSO-Storwize-V7000-2 and verifying it

IBM_2145:ITSO_SVC1:admin>lspartnership
IBM_2145:ITSO_SVC1:admin>mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 25 ITSO-Storwize-V7000-2
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth type
000002006BE04FC4 ITSO_SVC1 local
00000200A0C006B2 ITSO-Storwize-V7000-2 remote partially_configured_local 50 fc
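With mkfcpartnership, -backgroundcopyrate is a percentage of the link bandwidth given in -linkbandwidthmbits, so the values above allow 25% of 1024 Mbit/s for background copy. A quick local check of that arithmetic:

```shell
#!/bin/sh
# Background copy allowance = link bandwidth * backgroundcopyrate / 100.
# Values taken from the mkfcpartnership command above.
link_mbits=1024
copyrate_pct=25

bg_mbits=$((link_mbits * copyrate_pct / 100))
bg_mbytes=$((bg_mbits / 8))
echo "background copy: ${bg_mbits} Mbit/s (~${bg_mbytes} MB/s)"
```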

In the example below, the partnership is created from ITSO-Storwize-V7000-2 back to ITSO_SVC1, again specifying the bandwidth to be used for the background copy. After creating the partnership, verify that it is fully configured on both systems by reissuing the lspartnership command.

Creating the partnership from ITSO-Storwize-V7000-2 to ITSO_SVC1 and verifying it

ITSO-Storwize-V7000-2:admin>lspartnership
ITSO-Storwize-V7000-2:admin>mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 25 ITSO_SVC1
ITSO-Storwize-V7000-2:admin>lspartnership
id name location partnership bandwidth type
00000200A0C006B2 ITSO-Storwize-V7000-2 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50 fc

For Global Mirror, the steps are the same as for Metro Mirror; afterwards, adjust the timeout values:

IBM_2145:ITSO_SVC1:admin>chsystem -gminterdelaysimulation 20
IBM_2145:ITSO_SVC1:admin>chsystem -gmintradelaysimulation 40
IBM_2145:ITSO_SVC1:admin>chsystem -gmlinktolerance 200

Manually upgrade controller firmware/software

First copy the firmware and the upgrade test utility into /home/admin/upgrade.

storage/svc_cmd.1639145160.txt.gz · Last modified: 2021/12/10 15:06 by manu