The SVC and Storwize V7000 offer a command line interface that you access via SSH. You start your favorite SSH client (such as PuTTY or MindTerm) and log on as admin or as your own user ID. Currently you need to generate a private/public key pair to do this, although with release 6.3 (available November 2011) you will be able to log on via SSH with just a user ID and password.
Having logged on, there are five categories of commands you can issue:
svcinfo: informational commands that let you examine your configuration.
svctask: task commands that let you change your configuration.
satask: service commands that are only used in specific circumstances.
sainfo: service information commands.
svcconfig: commands to back up, clear, recover or restore the configuration.
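As a quick illustration, a minimal session might look like this (the cluster IP and names are placeholders; on code levels before 6.3 the equivalent commands are lscluster/chcluster):
ssh admin@9.71.16.60
IBM_2145:SVC-CL-M:admin> svcinfo lssystem
IBM_2145:SVC-CL-M:admin> svctask chsystem -name SVC-CL-M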
There are several CLI usability features that I routinely find users are not aware of, so I thought I would share some of them here:
Get the real time of a node:
IBM_FlashSystem:V5100-02:superuser>svqueryclock
Fri Nov 5 14:53:21 CET 2021
Reference
http://cmhramblings.blogspot.com/2015/08/svc-mini-script-storage.html
http://db94.net:8080/wtf/pmwiki.php?n=Main.HandySVCMiniScripts
Change the cluster (management) IP address, gateway and subnet mask with the chsystemip command. For example:
SYS_2076:ifscluster-svt2:superuser>chsystemip -clusterip 9.71.16.63 -gw 9.71.16.1 -mask 255.255.255.0 -port 1
If connecting to a different subnet, replace the old cable connection with the new cable connection and run the lssystemip command. For example:
SYS_2076:ifscluster-svt2:superuser>lssystemip
cluster_id       cluster_name    location port_id IP_address subnet_mask   gateway   IP_address_6 prefix_6 gateway_6
00000200A9C0089E ifscluster-svt2 local    1       9.71.16.63 255.255.255.0 9.71.16.1
00000200A9C0089E ifscluster-svt2 local    2
In the management GUI, click Settings → Network → Service IP Addresses, or use the satask chserviceip command to change the service IP address for node canister 1 and its associated subnet mask and default gateway. For example:
SYS_2076:ifscluster-svt2:superuser>sainfo lsservicenodes
panel_name cluster_id       cluster_name    node_id node_name relation node_status error_data
01-1       00000200A9C0089E ifscluster-svt2 1       node1     local    Active
01-2       00000200A9C0089E ifscluster-svt2 2       node2     partner  Active
SYS_2076:ifscluster-svt2:superuser>satask chserviceip -serviceip 9.71.16.68 -gw 9.71.16.1 -mask 255.255.255.0
Use the satask chserviceip command to change the service IP address for node canister 2 and its associated subnet mask and default gateway. For example:
SYS_2076:ifscluster-svt2:superuser> satask chserviceip -serviceip 9.71.16.69 -gw 9.71.16.1 -mask 255.255.255.0 01-2
To power off, reboot or warmstart a node, use the satask stopnode command:
satask stopnode { -poweroff | -reboot | -warmstart } [ panel_name ]
Now that you have defined a basic cluster, you will want to add more nodes to it. Start by listing all the unattached nodes:
IBM_2145:SVC-CL-M:admin> svcinfo lsnodecandidate
This will list all SVC nodes that have not yet been assigned to a cluster. The next task is to add one of these nodes to your cluster. You identify it by either its WWNN or its panel name, which is straightforward, except that they are called id and node-cover-name in the lsnodecandidate listing.
So you add a new node with either
IBM_2145:SVC-CL-M:admin> svctask addnode -panelname node-cover-name -iogrp N
or
IBM_2145:SVC-CL-M:admin> svctask addnode -wwnodename id -iogrp N
The N in -iogrp is an integer between 0 and 3. Remember that an I/O group consists of two, and only two, nodes, so the first node you add will go into iogrp0 and will be paired with the original node. Add extra nodes in pairs; you can check the result as shown below.
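To verify how nodes are spread across I/O groups, list the I/O groups and check the node_count column (a quick sanity check; the exact columns may vary slightly by code level):
IBM_2145:SVC-CL-M:admin> svcinfo lsiogrp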
An mDisk corresponds to a LUN as presented by the controller of the attached storage subsystem, so you don't create mDisks, you detect them. To do this, use the command
IBM_2145:SVC-CL-M:admin> svctask detectmdisk
then follow with
IBM_2145:SVC-CL-M:admin> svcinfo lsmdisk -delim| -nohdr
to see what disks have been picked up. The -delim| parameter separates the fields in the command output with a | rather than a lot of spaces, so the output will hopefully not span lines; -nohdr suppresses the header. Next, you can define a managed disk group with data reduction enabled (a data reduction pool, which supports thin provisioning, compression and deduplication):
IBM_2145:SVC-CL-M:admin> svctask mkmdiskgrp -name MDGDS4FC001 -ext 1024 -datareduction yes -mdisk mdisk2
Alternatively, define a standard managed disk group with the command
IBM_2145:SVC-CL-M:admin> svctask mkmdiskgrp -name MDGDS4FC001 -ext 64 -mdisk mdisk2
In this case, I'm creating an MDG called MDGDS4FC001 with a 64 MB extent size (the default is 16 MB) and adding an mdisk called mdisk2 to it. You can add more mdisks by separating them with colons in the command above, or you can add them to an existing MDG with the command
IBM_2145:SVC-CL-M:admin> svctask addmdisk -mdisk mdisk3 MDGDS4FC001
To check out the status of a managed disk group use the command
IBM_2145:SVC-CL-M:admin> svcinfo lsmdiskgrp -delim|
On the V7000, V5000 and V3000 series, you have to create an mdisk (array) and add it to a pool (mdiskgroup).
Create a DRAID6 array (the preferred RAID level; it can be extended later):
IBM_2076:V7000:admin> mkdistributedarray -driveclass 1 -drivecount 7 -level raid6 -rebuildareas 1 -strip 256 -stripewidth 6 Pool0
Create a RAID5 array:
IBM_2076:V7000:admin> mkarray -level raid5 -drive 73:90:77:85:80:75:87 -strip 256 POOL01
Create a basic 10 GB vdisk:
IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp io_grp0 -node 1 -size 10 -unit gb -name vol1
Create a 5120 MB mirrored vdisk:
IBM_2145:ITSO_SVC1:admin> mkvdisk -copies 2 -iogrp io_grp0 -mdiskgrp STGPool_DS3500-2:STGPool_DS5000-1 -name test_mirror -size 5120
Create 20 thin-provisioned volumes:
# Create 20 x 256GB volumes for MYLPAR in pool POOL_RAID5
IBM_2145:ITSO_SVC1:admin> for x in {1..20}
do
  num=`printf "%02d" $x`
  mkvdisk -autoexpand -cache readwrite -copies 1 -grainsize 256 -iogrp io_grp0 -mdiskgrp POOL_RAID5 -name MYLPAR_L$num -rsize 2% -size 256 -unit gb -vtype striped -warning 80%
done
Expand a vdisk:
IBM_2145:ITSO_SVC1:admin> expandvdisksize -size 5120 test_vol1
Add a mirror copy of the disk in another pool (mdiskgroup). Note that you can create a maximum of two copies, a primary and a secondary:
IBM_2145:ITSO_SVC1:admin>addvdiskcopy -mdiskgrp STGPool_DS5000-1 -vtype striped -mirrorwritepriority redundancy -unit gb vol1
IBM_2145:ITSO_SVC1:admin>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
23       vol1       1       48       120507203918
The -mirrorwritepriority redundancy parameter destages write I/O to both copies at the same time; for better performance you can keep the default, latency.
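If you want to change the setting later on an existing volume, a minimal sketch (assuming your code level supports -mirrorwritepriority on chvdisk):
IBM_2145:ITSO_SVC1:admin>chvdisk -mirrorwritepriority latency vol1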
List both copies and their state:
IBM_2145:ITSO_SVC1:admin>lsvdisk 23
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 1.00GB
type many
formatted no
mdisk_id many
mdisk_name many
.....
filesystem
mirror_write_priority redundancy
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
...
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
Remove the copy with ID 1:
IBM_2145:ITSO_SVC2:superuser>rmvdiskcopy -copy 1 vol1
Split the mirror into two independent vdisks, giving the new vdisk the name vol2split:
IBM_2145:ITSO_SVC1:admin>splitvdiskcopy -copy 1 -iogrp 0 -name vol2split vol1
In case of trouble, you can repair a copy:
repairvdiskcopy
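A minimal usage sketch (check the command reference for your code level): -validate only compares the copies and logs differences, -medium marks differing sectors as medium errors, and -resync corrects differences by copying from the primary. Progress can be followed with lsrepairvdiskcopyprogress.
IBM_2145:ITSO_SVC1:admin>repairvdiskcopy -validate vol1
IBM_2145:ITSO_SVC1:admin>lsrepairvdiskcopyprogress vol1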
If you want to use a LUN from another storage array, you can present it in transparent (image) mode with external virtualization:
IBM_2145:SVC-CL-M:admin> svctask mkmdiskgrp -ext 1024 -name MigrationPool_1024
IBM_2145:SVC-CL-M:admin> svctask mkvdisk -iogrp io_grp0 -mdisk mdisk0 -mdiskgrp MigrationPool_1024 -name controller0_0000000000000000 -syncrate 80 -vtype image
Next, you need to define your hosts to the SVC, but before you can do that, you need to know the WWPNs for the HBAs. Exactly how you do that depends on the host type, so I'm going to assume that you know how to do this; the SVC Host Attachment Guide will help you here.
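As an illustration only (not SVC-specific), on a Linux host the HBA WWPNs can be read from sysfs:
# cat /sys/class/fc_host/host*/port_name
0x10000000de1a34f2
0x10000000df1045a2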
Start by listing out all the unattached HBA ports that are zoned to the SVC.
IBM_2145:SVC-CL-M:admin> svcinfo lshbaportcandidate
The port names will be 16 HEX digits like
10000000DE1A34F2 10000000DF1045A2
Check that you can see the WWPNs that you are expecting, then define the host (use -force if a WWPN is not currently logged in to the fabric):
IBM_2145:SVC-CL-M:admin> svctask mkhost -force -name P1201234 -fcwwpn "210100E08B251EE6:210100F08C262EE7"
Here I'm creating a host definition for server P1201234 and connecting both of its ports, separating the WWPNs with a colon. Alternatively, you can add the second port afterwards using
IBM_2145:SVC-CL-M:admin> svctask addhostport -fcwwpn 10000000DF1045A2 P1201234
You can check the status of all the hosts, or of an individual host, with these commands
IBM_2145:SVC-CL-M:admin> svcinfo lshost
IBM_2145:SVC-CL-M:admin> svcinfo lshost P1201234
Finally, map the virtual disk to the host
IBM_2145:SVC-CL-M:admin> svctask mkvdiskhostmap -host P1201234 VDAIX000
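To double-check the result, list the mappings for that host (a quick verification; output columns vary by code level):
IBM_2145:SVC-CL-M:admin> svcinfo lshostvdiskmap P1201234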
List all consistency groups; the volumes are synchronized from site 1 to site 2:
root@nim - /root > ssh admin@svc2 lsrcconsistgrp -delim : | awk 'NR>1'
0:prod1ws1:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:3:metro::
1:prod1ws2:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:3:metro::
2:prod1as1:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:7:metro::
3:prod1as2:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:7:metro::
4:prod1ds1:0000020060218D2C:SVC-1:0000020060418CCE:SVC-2:master:consistent_synchronized:22:metro::
Stop replication on consistency group prod1ws1, and allow write access to the disks on the remote site:
root@nim - /root > ssh admin@svc2 stoprcconsistgrp -access prod1ws1
Reverse the replication from site 2 to site 1 (failover):
root@nim - /root > ssh admin@svc1 startrcconsistgrp -force -primary aux prod1ws1
Switch back again (failback), so replication runs from site 1 to site 2:
root@nim - /root > ssh admin@svc1 switchrcconsistgrp -primary master prod1ws1
First, check your license (lslicense) and update it if necessary according to what you paid for. External virtualization can be used for a 45-day migration period.
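A minimal sketch of checking and updating the virtualization license (the value is the capacity or enclosure count you actually purchased; check the chlicense options for your platform and code level):
IBM_2145:SVC-CL-M:admin> lslicense
IBM_2145:SVC-CL-M:admin> chlicense -virtualization 10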
Use one quorum disk per site, and one or two IP quorums on a third site:
IBM_2145:SVC:report>lsquorum
quorum_index status id  name         controller_id controller_name active object_type override site_id site_name
0            online 279 v7ks1-vdisk1 17            v7ks1-node1     no     mdisk       no       1       SITE1
1            online 250 v7ks1-vdisk1 15            v7ks2-node1     no     mdisk       no       2       SITE2
3            online                                                yes    device      no       srvquorum1.xxx.lu/10.10.10.10
4            online                                                no     device      no       srvquorum2.xxx.lu/10.10.10.11
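For the IP quorum on the third site, a rough sketch (details depend on your code level): generate the IP quorum application on the cluster with mkquorumapp, copy the resulting ip_quorum.jar from the dumps directory to the quorum server, and run it there with Java:
IBM_2145:SVC:superuser>mkquorumapp
IBM_2145:SVC:superuser>lsdumps
[root@srvquorum1 ~]# java -jar ip_quorum.jar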
The system layer is an exclusive option: a system can work with external virtualization or with replication towards another Storwize/SVC, but not both together. The SVC has a special appliance layer (equivalent to replication). The default layer for a Storwize is storage.
admin@v7000 >chsystem -layer replication | storage
Note: If you specify -layer you must specify either replication or storage. This option can be changed if no other systems are visible on the fabric, and no system partnerships are defined.
For external virtualization, the Storwize being virtualized must be in the storage layer and the virtualizing system in the replication layer. For Metro/Global Mirror, both Storwize systems must be in the same layer (storage or replication). If you set the replication layer, the system can no longer provide storage to an SVC or to a Storwize V7000 or V5000.
Error on a Storwize V3700, V5000 or V7000, unable to use external virtualization; the Storwize to be virtualized is not seen as a controller but as a node, host or unknown → change its system layer to storage.
Error using Metro Mirror (PPRC) or Global Mirror replication → change the system layer so that all devices use the same layer.
This section is about FC target port virtualization (NPIV) on the SVC itself; NPIV on the host side has been supported for a long time.
Enabling NPIV target port functionality on a new cluster
After creating the cluster, but before starting to add hosts, issue
chiogrp -fctargetportmode transitional <IO group>
on each populated I/O group.
Ensure that the hosts are using the NPIV ports for host I/O. To verify that the hosts are logged in to these NPIV ports, enter the following command:
lsfabric -host host_id_or_name
Wait a minimum of 15 minutes before you move from transitional to enabled state.
To change the NPIV setting to enabled, enter the following command:
chiogrp -fctargetportmode enabled
on each populated I/O group. If using WWPN-based zoning, issue the following command to view the set of FC ports:
lstargetportfc
All ports that display virtualized=yes need to be zoned into hosts as described above.
This procedure is the same for Metro Mirror (synchronous replication) and Global Mirror (asynchronous replication); a special note for Global Mirror is at the end.
Example with PPRC between SVC and V7000
First, create the correct SAN zoning between the SVC and the V7000.
Check the partnership candidates on each system (SVC and V7000):
IBM_2145:ITSO_SVC1:admin>lspartnershipcandidate
id               configured name
00000200A0C006B2 no         ITSO-Storwize-V7000-2
ITSO-Storwize-V7000-2:admin>lspartnershipcandidate
id               configured name
000002006BE04FC4 no         ITSO_SVC1
As of code level 6.3, you can create a partnership between an SVC system and a Storwize V7000 system if you first change the layer setting on the Storwize V7000 from storage to replication with the chsystem -layer command. This option can only be used if no other systems are visible on the fabric, and no system partnerships are defined. SVC systems are always in the appliance layer
Pre-verification of system configuration
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name      location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local

ITSO-Storwize-V7000-2:admin>lspartnership
id               name                  location partnership bandwidth
00000200A0C006B2 ITSO-Storwize-V7000-2 local
Check the system layer on all systems (SVC and V7000)
IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
bandwidth
....
id_alias 000002006BE04FC4
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
auth_service_type tip
relationship_bandwidth_limit 25
has_nas_key no
layer appliance

ITSO-Storwize-V7000-2:admin> lssystem
id 00000200A0C006B2
name ITSO-Storwize-V7000-2
location local
partnership
...
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
...
relationship_bandwidth_limit 25
...
has_nas_key no
layer storage
rc_buffer_size 48
...
local_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology standard
topology_status
rc_auth_method none
As you can see, the layer has to be changed to replication on the V7000 (after which the V7000 can no longer provide disks to the SVC; only replication is allowed):
chsystem -layer replication
In the example below, a partnership is created between ITSO_SVC1 and ITSO-Storwize-V7000-2, specifying the link bandwidth and the background copy rate. To check the status of the newly created partnership, issue the lspartnership command. Also notice that the new partnership is only partially configured; it remains partially configured until the partnership is also created on the other system.
Creating the partnership from ITSO_SVC1 to ITSO-Storwize-V7000-2 and verifying it
IBM_2145:ITSO_SVC1:admin>lspartnership
IBM_2145:ITSO_SVC1:admin>mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 25 ITSO-Storwize-V7000-2
IBM_2145:ITSO_SVC1:admin>lspartnership
id               name                  location partnership                bandwidth type
000002006BE04FC4 ITSO_SVC1             local
00000200A0C006B2 ITSO-Storwize-V7000-2 remote   partially_configured_local 50        fc
In the example below, the partnership is created from ITSO-Storwize-V7000-2 back to ITSO_SVC1, specifying the bandwidth to be used for the background copy. After creating the partnership, verify that it is fully configured on both systems by reissuing the lspartnership command.
Creating the partnership from ITSO-Storwize-V7000-2 to ITSO_SVC1 and verifying it
ITSO-Storwize-V7000-2:admin>lspartnership
ITSO-Storwize-V7000-2:admin>mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 25 ITSO_SVC1
ITSO-Storwize-V7000-2:admin>lspartnership
id               name                  location partnership      bandwidth type
00000200A0C006B2 ITSO-Storwize-V7000-2 local
000002006BE04FC4 ITSO_SVC1             remote   fully_configured 50        fc
For Global Mirror, the steps are the same as for Metro Mirror; you then have to adapt the link tolerance and delay simulation values:
IBM_2145:ITSO_SVC1:admin>chsystem -gminterdelaysimulation 20
IBM_2145:ITSO_SVC1:admin>chsystem -gmintradelaysimulation 40
IBM_2145:ITSO_SVC1:admin>chsystem -gmlinktolerance 200
If you have a drive firmware upgrade that stays pending for a long time and never completes:
IBM_Storwize:v3700-Quorum:superuser>lsdriveupgradeprogress
id status        estimated_completion_time
0  not_scheduled
7  not_scheduled
51 completed     221013120044
52 completed     221013120550
53 completed     221013121056
54 completed     221013121602
To reset the status of the drives stuck in not_scheduled:
First, copy the firmware and the upgrade test utility into /home/admin/upgrade:
IBM_Storwize:v3700-Quorum:superuser>lsdumps -prefix /home/admin/upgrade
id filename
0  IBM_StorwizeV3000_INSTALL_upgradetest_30.6
1  IBM2072_INSTALL_7.8.1.11
Second, install the upgrade test utility, otherwise the firmware cannot be installed:
IBM_Storwize:v3700-Quorum:superuser>applysoftware -file IBM_StorwizeV3000_INSTALL_upgradetest_30.6
Third, run the test utility:
IBM_Storwize:v3700-Quorum:superuser>svcupgradetest -v 7.8.1.11
svcupgradetest version 30.6
Please wait, the test may take several minutes to complete.

******************* Warning found *******************
The audit log of the system was cleared to prepare for the upgrade.
The old audit log can be found in /dumps/audit/ on this node.

******************** Error found ********************
The system has at least one node canister with only 4GB of RAM installed.
This will prevent an upgrade to 7.8.1.11 from succeeding.
Please contact your sales representative for information on obtaining an 8GB RAM upgrade.
Instructions on how to install the RAM upgrade are available on the following Knowledge Center page:
sh: /usr/bin/grep: No such file or directory
sh: /usr/bin/grep: No such file or directory
Traceback (most recent call last):
  File "/home/utilities/svcupgradetest/delete_old_dumps.py", line 96, in <module>
    clear_dump(nodes[node]._id, dump['filename'])
  File "/home/utilities/svcupgradetest/delete_old_dumps.py", line 75, in clear_dump
    command = ['/compass/bin/svctask', 'cleardumps', '-prefix', '{}/{}'.format(DUMPS_DIR, dump_name), node_id]
ValueError: zero length field name in format
https://www.ibm.com/support/knowledgecenter/en/STLM5A/com.ibm.storwize.v3700.710.doc/v3500_qi_installing_memory_module.html

******************* Warning found *******************
There was a problem deleting old dumps.
Please contact your support representative.

Results of running svcupgradetest:
==================================
The tool has found 1 errors and 2 warnings.
One issue will prevent the system being updated to 7.8.1.11 until it has been resolved.
If the upgrade keeps showing as pending, restart the tomcat and CIMOM services:
IBM_Storwize:v3700:superuser> satask restartservice -service tomcat
IBM_Storwize:v3700:superuser> satask restartservice -service cimom
Wait about 5 min and then retry the upgrade
Now you are ready to apply the new firmware:
IBM_Storwize:v3700-Quorum:superuser>applysoftware -file IBM2072_INSTALL_7.8.1.11
Use the following command to check the upgrade progress. Note that it stays at 0% for a while, then jumps to 50% after the first controller reboots:
IBM_Storwize:v3700-Quorum:superuser> lsupdate
These additional configuration steps can be done by using the command-line interface (CLI) or the management GUI.
Each SAN Volume Controller node in the system must be assigned to a site: use the chnode CLI command.
Each back-end storage system must be assigned to a site: use the chcontroller CLI command.
Each host must be assigned to a site: use the chhost CLI command.
After all nodes, hosts, and storage systems are assigned to a site, the enhanced mode must be enabled by changing the system topology to stretched (see the sketch below).
For best results, configure an enhanced stretched system to include at least two I/O groups (four nodes). A system with just one I/O group cannot guarantee to maintain mirroring of data or uninterrupted host access in the presence of node failures or system updates.
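A minimal command sketch for the site assignments and the topology change (node, controller and host names below are placeholders, and options may vary by code level):
IBM_2145:SVC-CL-M:admin> chnode -site 1 node1
IBM_2145:SVC-CL-M:admin> chnode -site 2 node2
IBM_2145:SVC-CL-M:admin> chcontroller -site 1 controller0
IBM_2145:SVC-CL-M:admin> chhost -site 1 host0
IBM_2145:SVC-CL-M:admin> chsystem -topology stretched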
Mini script: export every vdisk of an SVC system (name, size, UID) together with its host mappings to a CSV file. It assumes $PATH_DATA and $csvfile are already set:

get_ibm_svc_volumes () {
  svclogin=admin
  for storage in svc
  do
    # collect system, vdisk and host mapping information over SSH
    ssh $svclogin@$storage lssystem -delim : > $PATH_DATA/$storage.lssystem
    manufacturer="IBM"
    model=$(grep '^product_name:' $PATH_DATA/$storage.lssystem | rev | awk '{print $1}' | rev)
    storage_name=$(grep '^name:' $PATH_DATA/$storage.lssystem | cut -d':' -f2)
    ssh $svclogin@$storage lsvdisk -unit gb -delim ':' > $PATH_DATA/$storage.lsvdisk
    ssh $svclogin@$storage lshostvdiskmap -delim ':' > $PATH_DATA/$storage.lshostvdiskmap
    # loop over every vdisk UID
    for fullserial in $(cat $PATH_DATA/$storage.lsvdisk | awk -F':' '{print $14}' | grep -v '^vdisk_UID')
    do
      serial=$(echo $fullserial | cut -c1-28)
      vol_id=$(echo $fullserial | cut -c29-)
      vol=$(cat $PATH_DATA/$storage.lsvdisk | grep $fullserial | awk -F':' '{print $2";"$8";"$6";"$7}' | sed 's/GB;/;/')
      VG=$(cat $PATH_DATA/$storage.lshostvdiskmap | grep $fullserial | cut -d':' -f1,2 | sort -u)
      if [ "$VG" == "" ]
      then
        # volume not mapped to any host
        echo "${manufacturer};${model};${storage_name};${serial};${vol_id};${vol};;;${serial}${vol_id}"
      else
        # one CSV line per host mapping
        for VG_name in $(echo $VG | sed 's/:/;/')
        do
          echo "${manufacturer};${model};${storage_name};${serial};${vol_id};${vol};${VG_name};${serial}${vol_id}"
        done
      fi
    done
  done
}
get_ibm_svc_volumes >> $csvfile
Print only the vdisk (volume) name and size in bytes:
IBM_Storwize:V3K01:superuser>lsvdisk -filtervalue name=* -delim : -nohdr -bytes | cut -d':' -f2,8
DC1-TSM02-01:805306368000
DC1-TSM02-02:805306368000
DC1-TSM02-03:8903467204608
DC1-TSM02-04:8903467204608
DC1-TSM02-05:8903467204608
DC1-TSM02-06:8903467204608
DC1-LT-01:3858759680000
Print only the vdisks (volumes) whose name starts with TOTO:
IBM_Storwize:V3K01:superuser>lsvdisk -filtervalue name=TOTO* -nohdr -delim ,
0,TOTOTSM02-01,0,io_grp0,online,0,Pool-NL,750.00GB,striped,,,,,60050763008103FAE800000000000000,0,1,not_empty,0,no,0,0,Pool-NL,no,no,0,,
Generate the commands needed to recreate the current volumes in pool1:
IBM_Storwize:V3K01:superuser>lsvdisk -filtervalue name=* -delim : -nohdr -bytes | cut -d':' -f2,8 | sed 's/:/\ /' | while read x y; do echo mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size $y -unit b -name $x; done
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 805306368000 -unit b -name TSM02-01
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 805306368000 -unit b -name TSM02-02
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 8903467204608 -unit b -name TSM02-03
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 8903467204608 -unit b -name TSM02-04
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 8903467204608 -unit b -name DCAV-PRTSM02-05
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 8903467204608 -unit b -name DCAV-PRTSM02-06
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 3858759680000 -unit b -name DCAS-EVAULT-01
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 2199023255552 -unit b -name DCAV-DMZ-VMFS-TIPRTSM02-01
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 6597069766656 -unit b -name DCAV-PRVEECC01-01
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 6871947673600 -unit b -name DCAS-V3K01-VMFS18-TMP
mkvdisk -mdiskgrp pool1 -iogrp io_grp0 -size 1099511627776 -unit b -name DCAV-PRTSM02-99
List user groups:
IBM_FlashSystem:fs7300:user01> lsusergrp
id name            role            remote owner_id owner_name
0  SecurityAdmin   SecurityAdmin   no
1  Administrator   Administrator   no
2  CopyOperator    CopyOperator    no
3  Service         Service         no
4  Monitor         Monitor         no
5  RestrictedAdmin RestrictedAdmin no
Create a new user with a password:
IBM_FlashSystem:fs7300:user01> mkuser -name user02 -usergrp SecurityAdmin -password xxxxxx
List all users:
IBM_FlashSystem:fs7300:user01> lsuser
id name      password ssh_key remote usergrp_id usergrp_name
0  superuser yes      yes     no     0          SecurityAdmin
1  user02    yes      no      no     0          SecurityAdmin
3  report    no       yes     no     4          Monitor
Create a user with SSH key exchange:
[root@lnx01]/storage/.ssh # scp /storage/.ssh/id_ecdsa.pub user01@fs7300:/home/admin/upgrade
Password:
id_ecdsa.pub                                  100%  284   411.6KB/s   00:00
IBM_FlashSystem:fs7300:user01>mkuser -name report -usergrp Monitor -keyfile /home/admin/upgrade/id_ecdsa.pub
User, id [8], successfully created
satask installsoftware -file IBM_5.0.12 -ignore
sainfo lsservicenodes
satask chvpd -wwnn 500507680F00FFFF -fcportmap 11-11,12-12,13-13,14-14,21-21,22-22,23-23,24-24