
AIX:NIM alternate disk migration

An AIX migration is an upgrade of the AIX OS to a new release level (Ex: 5.3 to 6.1).

With a NIM server it is possible to migrate a running AIX server and activate the new OS level with only one reboot.

NIM Alternate disk migration prerequisites

  • The target client must be registered with the master as a standalone NIM client, and the NIM master must be able to execute remote commands on the client using the rshd protocol.
    • Enable rlogin for the root user on the target: # chuser rlogin=true root
    • Create a file $HOME/.rhosts on the target with permissions 600, containing the hostname of the NIM master as resolved by the target host (Ex: nimsrv.dom.eu)
    • Enable the r-commands in /etc/inetd.conf:
      shell   stream  tcp6    nowait  root    /usr/sbin/rshd  rshd 
      login   stream  tcp6    nowait  root    /usr/sbin/rlogind       rlogind 
      exec    stream  tcp6    nowait  root    /usr/sbin/rexecd        rexecd 
    • Then run: refresh -s inetd
    • Test a passwordless connection, Ex: rsh labotest ls
    • In case of trouble, check the /etc/hosts.equiv file
  • The SPOT that you plan to use must contain the following package:
    bos.alt_disk_install.rte
  • On the target, enable push operations from the NIM master:
    nimclient -p
  • Change the TZ variable in /etc/environment by commenting out the old value and adding a new line with an Olson-format time zone, Ex:
     TZ=Europe/Brussels 
  • The target machine must have a new disk, free of data:
    [root@labotest:/root/]# lspv
    hdisk0          00c579f0d914b866                    rootvg          active
    hdisk1          none                                None            
  • If TCB (Trusted Computing Base) is enabled, you have to use a disk cache VG on the master, with enough space to hold a copy of the target rootvg (excluding special LVs). Check the TCB state with:
     odmget -q attribute="TCB_STATE" PdAt | grep tcb 
  • The nimadm command is not supported with the multibos command when there is a bos_hd5 logical volume.

Be careful: if you have other application filesystems in rootvg, like /oracle, and the application modifies files during the migration, then you have to resynchronize them after the reboot on the new AIX level. You can mount the old filesystems with:

# alt_rootvg_op -W -d hdisk0

and at the end, put it back to sleep:

# alt_rootvg_op -S
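
For example, to pull files that changed in /oracle during the migration back from the old rootvg after rebooting on the new level (a sketch: the wake-up mounts the old filesystems under /alt_inst, and the file name here is illustrative):

# alt_rootvg_op -W -d hdisk0                    # wake up the old rootvg, mounted under /alt_inst
# cp -p /alt_inst/oracle/changed_file /oracle/  # resynchronize whatever changed during the migration
# alt_rootvg_op -S                              # put the old rootvg back to sleep
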
  • The bos.alt_disk_install.rte fileset installed on the NIM master must be at exactly the same level as in the SPOT:
[root@nimsrv]/root# /usr/sbin/nim -o lslpp -a lslpp_flags=Lc -a filesets=bos.alt_disk_install.rte spot_aix6100-06 
bos.alt_disk_install:bos.alt_disk_install.rte:6.1.6.0: : :C: :Alternate Disk Installation Runtime : : : : : : :0:0:/:1036
[root@nimsrv]/root# /usr/bin/lslpp -Lc bos.alt_disk_install.rte
bos.alt_disk_install:bos.alt_disk_install.rte:6.1.6.1: : :C:F:Alternate Disk Installation Runtime: : : : : : :0:0:/:1048
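
If the levels differ, as in this example (6.1.6.0 in the SPOT vs 6.1.6.1 on the master), update the fileset in the SPOT first. A sketch, assuming an lpp_source named aix6100-06 exists that contains the matching level:

# nim -o cust -a lpp_source=aix6100-06 -a filesets=bos.alt_disk_install.rte spot_aix6100-06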

Alternate disk migration

On the NIM server:

# smitty
--> Software Installation and Maintenance
--> Alternate Disk Installation
--> NIM Alternate Disk Migration
--> Perform NIM Alternate Disk Migration
                                                           Perform NIM Alternate Disk Migration

Type or select values in entry fields.
Press Enter AFTER making all desired changes.
  
                                                        [Entry Fields]
* Target NIM Client                                  [labotest]                                                                                           +
* NIM LPP_SOURCE resource                            [aix6100-04]                                                                                      +
* NIM SPOT resource                                  [spot_aix6100-04]                                                                                 +
* Target Disk(s) to install                          [hdisk1]
  DISK CACHE volume group name                       [nimvg]     ==> only if TCB enabled                                                              +
  
  NIM IMAGE_DATA resource                            []                                                                                                +
  NIM BOSINST_DATA resource                          []                                                                                                +
  NIM EXCLUDE_FILES resource                         []                                                                                                +
  NIM INSTALLP_BUNDLE resource                       []                                                                                                +
  NIM PRE-MIGRATION SCRIPT resource                  []                                                                                                +
  NIM POST-MIGRATION SCRIPT resource                 []                                                                                                +
  
  Phase to execute                                   [all]                                                                                             +
  NFS mounting options                               []
  Set Client bootlist to alternate disk?              yes                                                                                              +
  Reboot NIM Client when complete?                    no                                                                                               +
  Verbose output?                                     no                                                                                               +
  Debug output?                                       no                                                                                               +
  
  ACCEPT new license agreements?                      yes                                                                                              +

Or from the command line:

# /usr/sbin/nimadm -H -c labotest -l aix6100-04 -s spot_aix6100-04 -d hdisk1 -Y

New in AIX 7.3.2: the -A option adds timestamps to each phase of the output.

Output:

Initializing the NIM master.
Initializing NIM client labotest.
Verifying alt_disk_migration eligibility.
Initializing log: /var/adm/ras/alt_mig/labotest_alt_mig.log
Starting Alternate Disk Migration.

+-----------------------------------------------------------------------------+
Executing nimadm phase 1.
+-----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -j -B -M 6.1 -P1 -d "hdisk1"
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5.
Creating logical volume alt_hd6.
Creating logical volume alt_hd8.
Creating logical volume alt_hd4.
Creating logical volume alt_hd2.
Creating logical volume alt_hd9var.
Creating logical volume alt_hd3.
Creating logical volume alt_hd1.
Creating logical volume alt_hd10opt.
Creating logical volume alt_fslv00.
Creating logical volume alt_hd7.
Creating /alt_inst/ file system.
Creating /alt_inst/admin file system.
Creating /alt_inst/audit file system.
Creating /alt_inst/home file system.
Creating /alt_inst/opt file system.
Creating /alt_inst/root file system.
Creating /alt_inst/tmp file system.
Creating /alt_inst/trace file system.
Creating /alt_inst/usr file system.
Generating a list of files
for backup and restore into the alternate file system...
Phase 1 complete.

+-----------------------------------------------------------------------------+
Executing nimadm phase 2.
+-----------------------------------------------------------------------------+
Creating nimadm cache file systems on volume group nimvg.
Checking for initial required migration space.
Creating cache file system /labotest_alt/alt_inst
Creating cache file system /labotest_alt/alt_inst/admin
Creating cache file system /labotest_alt/alt_inst/audit
Creating cache file system /labotest_alt/alt_inst/home
Creating cache file system /labotest_alt/alt_inst/opt
Creating cache file system /labotest_alt/alt_inst/root
Creating cache file system /labotest_alt/alt_inst/tmp
Creating cache file system /labotest_alt/alt_inst/trace
Creating cache file system /labotest_alt/alt_inst/usr

+-----------------------------------------------------------------------------+
Executing nimadm phase 3.
+-----------------------------------------------------------------------------+
Syncing client data to cache ...
.........................

SUCCESSES
---------
  Filesets listed in this section passed pre-commit verification
  and will be committed.

  Selected Filesets
  -----------------
  X11.compat.lib.X11R6_motif 6.1.4.1          # AIXwindows X11R6 Motif 1.2 &...

  << End of Success Section >>

+-----------------------------------------------------------------------------+
                          Committing Software...
+-----------------------------------------------------------------------------+

installp: COMMITTING software for:
        X11.compat.lib.X11R6_motif 6.1.4.1

Finished processing all filesets.  (Total time:  6 secs).

+-----------------------------------------------------------------------------+
                                Summaries:
+-----------------------------------------------------------------------------+

Installation Summary
--------------------
Name                        Level           Part        Event       Result
-------------------------------------------------------------------------------
X11.compat.lib.X11R6_motif  6.1.4.1         USR         APPLY       SUCCESS    
X11.compat.lib.X11R6_motif  6.1.4.1         USR         COMMIT      SUCCESS    

install_all_updates: Generating list of updatable rpm packages.
========================================================
The following rpm packages will be updated:
cdrecord 1.9-7
========================================================

install_all_updates: Updating rpm packages..


Validating RPM package selections ...

lsfs: No record matching '/dev/lv05' was found in /etc/filesystems.
fs_size_check[34]: /usr/sbin/ls:  not found
cdrecord                    ##################################################

+-----------------------------------------------------------------------------+
                           RPM  Error Summary:
+-----------------------------------------------------------------------------+
The following errors during installation occurred:
failed to stat /export/mksysb: No such file or directory


install_all_updates: Checking for recommended maintenance level 6100-04.
install_all_updates: Executing /usr/bin/oslevel -rf, Result = 6100-04
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
Known Recommended Maintenance Levels
------------------------------------
Restoring device ODM database.

+-----------------------------------------------------------------------------+
Executing nimadm phase 7.
+-----------------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.

+-----------------------------------------------------------------------------+
Executing nimadm phase 8.
+-----------------------------------------------------------------------------+
Creating client boot image.
bosboot: Boot image is 44903 512 byte blocks.
Writing boot image to client's alternate boot disk hdisk1.

+-----------------------------------------------------------------------------+
Executing nimadm phase 9.
+-----------------------------------------------------------------------------+
Adjusting client file system sizes ...
Adjusting size for /
Adjusting size for /admin
Adjusting size for /audit
Adjusting size for /home
Adjusting size for /opt
Adjusting size for /root
Adjusting size for /tmp
Adjusting size for /trace
Adjusting size for /usr
Expanding /alt_inst/usr client filesystem.
Filesystem size changed to 4653056
Adjusting size for /home/scripts
Adjusting size for /usr/sysload
Adjusting size for /var
Adjusting size for /var/core
Syncing cache data to client ...

+-----------------------------------------------------------------------------+
Executing nimadm phase 10.
+-----------------------------------------------------------------------------+
Unmounting client mounts on the NIM master.
forced unmount of /labotest_alt/alt_inst/var/core
forced unmount of /labotest_alt/alt_inst/var
forced unmount of /labotest_alt/alt_inst/usr/sysload
forced unmount of /labotest_alt/alt_inst/home/scripts
forced unmount of /labotest_alt/alt_inst/usr
forced unmount of /labotest_alt/alt_inst/trace
forced unmount of /labotest_alt/alt_inst/tmp
forced unmount of /labotest_alt/alt_inst/root
forced unmount of /labotest_alt/alt_inst/opt
forced unmount of /labotest_alt/alt_inst/home
forced unmount of /labotest_alt/alt_inst/audit
forced unmount of /labotest_alt/alt_inst/admin
forced unmount of /labotest_alt/alt_inst
Removing nimadm cache file systems.
Removing cache file system /labotest_alt/alt_inst
Removing cache file system /labotest_alt/alt_inst/admin
Removing cache file system /labotest_alt/alt_inst/audit
Removing cache file system /labotest_alt/alt_inst/home

+-----------------------------------------------------------------------------+
Executing nimadm phase 11.
+-----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 3.
Client alt_disk_install command: alt_disk_copy -j -B -M 6.1 -P3 -d "hdisk1"
## Phase 3 ###################
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
forced unmount of /alt_inst/var/core
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr/sysload
forced unmount of /alt_inst/home/scripts
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/trace
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/root
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/audit
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...

+-----------------------------------------------------------------------------+
Executing nimadm phase 12.
+-----------------------------------------------------------------------------+
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client labotest.

Now you can reboot the client.

IMPORTANT

When finished, disable the remote commands again and remove the .rhosts file.

To remove the old_rootvg, you have two ways:

  * exportvg old_rootvg
  * alt_rootvg_op -X

To mount the old filesystems:

  * alt_rootvg_op -W -d hdisk0

To put it into sleep again:

  * alt_rootvg_op -S # add (-t) to recreate boot device 

Better Migration method (remove SDDPCM)

Define a pre-migration script that removes SDDPCM only on the clone:

[root@nim1] /root > cat /export/nim/custom_scr_mig72/pre_scr_mig72.sh
#!/usr/bin/ksh

logname=/root/pre_scr_mig72.log
savedir=/root/save_mig72


#------------------------------------------------
main()
{
mkdir -p $savedir

echo "####### Save ssh keys"
cd /etc/ssh ; tar cvf $savedir/ssh_keys.tar .

echo "####### Save ssl config"
cd /var ; tar cvf $savedir/var_ssl.tar /var/ssl
echo "####### Save banner"
cp /etc/motd $savedir

echo "####### Save sendmail.cf"
cp /etc/mail/sendmail.cf $savedir

echo "####### Remove obsolet packages"
for lpp in $(lslpp -Lc | grep devices.ethernet.lnc2.rte |cut -d':' -f2)
do
  installp -u -g $lpp
done

echo "####### Remove obsolet packages"
for lpp in $(lslpp -Lc | egrep "^devices.sddpcm" |cut -d':' -f2)
do
  installp -u -g $lpp
done

}

main 2>&1 | tee $logname
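
If the script is not yet defined as a NIM resource, it can be created on the master like this (a sketch using the path above):

# nim -o define -t script -a server=master \
      -a location=/export/nim/custom_scr_mig72/pre_scr_mig72.sh pre_scr_mig72
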
[root@nim1] /root > lsnim -l pre_scr_mig72
pre_scr_mig72:
   class       = resources
   type        = script
   Rstate      = ready for use
   prev_state  = unavailable for use
   location    = /export/nim/custom_scr_mig72/pre_scr_mig72.sh
   alloc_count = 1
   server      = master

Put the MPIO and SDDPCM drivers in the lpp_source and run the migration, for example from AIX 7.1 to 7.2: the script removes SDDPCM from the clone, and the migration automatically adds the latest driver version at the end.

[root@nim1] /root > nimadm -c aixsrv1 -l aix7200-01-02_lpp -s aix7200-01-02_spot -d hdisk3 -a pre_scr_mig72 -j nimvg  -Y

If you get a message of this kind during "Verifying alt_disk_migration eligibility":

/usr/sbin/nimadm[1146]: unknown: bad number
if you believe this disk is bootable, re-execute this command with the "-g" (ignore boot check) flag.
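
For example, re-running the earlier migration with the boot check ignored:

# /usr/sbin/nimadm -H -c labotest -l aix6100-04 -s spot_aix6100-04 -d hdisk1 -g -Y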

How to upgrade AIX using alt_disk and patches

INFO: emgr was modified to block all customization scripts when INUCLIENTS is set. This can be overridden by setting the FORCE_SCRIPTS environment variable (export FORCE_SCRIPTS=yes).

0. Clean up any old alternate rootvg and clone to an extra disk.

# alt_disk_install -X
# alt_disk_install -BCV -e /etc/exclude.rootvg (hdiskx)

1. Wake up the altinst_rootvg

# alt_rootvg_op -W -d (hdiskx)

2. Check the installed ifixes and remove them (emgr -l lists them; emgr -r -L removes by label)

# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -l

Ex:

# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV07730s03
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV09922s03
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV12612s03
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV12629m03
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV13803m03
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV13873s03
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV13891s03
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV16603m03
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV17536s03
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV19158
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -r -L IV20864s03

3. Commit currently applied filesets

# INUCLIENTS=1 chroot /alt_inst /usr/sbin/installp -s
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/installp -cpgX all
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/installp -cgX all
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/installp -s
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lppchk -v

4. Install AIX SP

# alt_rootvg_op -C -b update_all -I pa -l (patch directory) -V
# alt_rootvg_op -C -b update_all -I agXY -l (patch directory) -V
# INUCLIENTS=1 chroot /alt_inst /usr/bin/oslevel -s
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lppchk -v

4.1 Remove unnecessary filesets.

# INUCLIENTS=1 chroot /alt_inst /usr/bin/oslevel -rl 6100-08
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/installp -u devices.msg.en_US.chrp.IBM.HPS.rte
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/installp -u devices.msg.en_US.chrp.IBM.HPS.hpsfu
# INUCLIENTS=1 chroot /alt_inst /usr/bin/oslevel -rl 6100-08
# INUCLIENTS=1 chroot /alt_inst /usr/bin/oslevel -s

5. Install PowerHA SP

# alt_rootvg_op -C -b update_all -I pa -l (patch directory) -V
# alt_rootvg_op -C -b update_all -I agXY -l (patch directory) -V
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lslpp -l cluster.*
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lppchk -v

6. Add vmo parameter “numperm_global” into /etc/tunables/nextboot file

# INUCLIENTS=1 chroot /alt_inst /usr/sbin/vmo -FL numperm_global
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/vmo -p -o numperm_global=0
# INUCLIENTS=1 chroot /alt_inst /usr/bin/cat /etc/tunables/nextboot

7. Remove xmdaily from inittab.

# INUCLIENTS=1 chroot /alt_inst /usr/sbin/lsitab xmdaily
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/rmitab xmdaily
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/lsitab xmdaily

8. Check applied patches and updated software

# INUCLIENTS=1 chroot /alt_inst /usr/bin/oslevel -s
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lppchk -v
# INUCLIENTS=1 chroot /alt_inst /usr/sbin/emgr -l
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lslpp -l | grep -i cluster
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lslpp -l rsct.core.hostrm
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lslpp -l rsct.core.fsrm
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lslpp -l rsct.core.errm
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lslpp -l rsct.basic.rte
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lslpp -l rsct.core.utils
# INUCLIENTS=1 chroot /alt_inst /usr/bin/lslpp -l rsct.core.rmc

9. Put altinst_rootvg to sleep and create a boot image

# alt_rootvg_op -S -t
# lspv | grep root ; only the rootvg should be active.
# mount | grep alt ; no /alt_inst file systems should still be mounted.

10. Check and change the bootlist

# bootlist -m normal -o
# bootlist -m normal hdiskY hdiskZ
# shutdown -Fr

AIX: multibos upgrade

multibos creates a clone of the BOS on the same disk (within rootvg); only the LV names are changed.

By default the BOS filesystems in rootvg (/, /usr, /var, /opt) and the BLV (hd5) are copied. All other filesystems and LVs are shared between the BOS instances.

multibos -?                           <--gives info about the parameters

Multibos update:

lsvg rootvg | grep FREE               <--checking if we have enough free space in rootvg
multibos -R                           <--remove any standby BOS from earlier (-R: remove)
multibos -sXp                         <--preview the operation (-s:setup, -X:expand fs if needed, -p:preview)
alog -of/etc/multibos/logs/op.alog    <--shows the log file

multibos -sX                          <--creates standby BOS
lsvg -l rootvg | grep bos             <--bos prefix added to the new lvs and fs'

multibos -S                           <--start standby BOS shell (-S:shell operation)
oslevel -s                            <--checking standby BOS level
exit                                  <--leaving standby BOS shell

multibos -Xacp -l /mnt/TL9SP4         <--preview the update of the standby BOS (-a:apply all updates, -c:customization operation)
multibos -Xac -l /mnt/TL9SP4          <--updates the standby BOS (when done, check the new OS level in "multibos -S", then exit)

bootlist -m normal -ov                <--check that bootlist is set to standby BOS (-v:verbose)
                                      Check the correct BLV. Compare the output from bootlist with “Welcome to AIX” banner at startup (HMC).
                                      (usually no action is needed, multibos automatically sets the new blv)

e.g.:

root@aix31: / # bootlist -m normal -ov
'ibm,max-boot-devices' = 0x5
NVRAM variable: (boot-device=/pci@80000002000000c/pci@2/pci1069,b166@1/scsi@0/sd@4:4 /pci@80000002000000c/pci@2/pci1069,b166@1/scsi@0/sd@4:2)
Path name: (/pci@80000002000000c/pci@2/pci1069,b166@1/scsi@0/sd@4:4)    <--check this pathname at startup, to determine which blv is used
match_specific_info: ut=disk/scsi/scsd
hdisk0 blv=bos_hd5
Path name: (/pci@80000002000000c/pci@2/pci1069,b166@1/scsi@0/sd@4:2)    <--as you see, the only difference is :4 vs :2
match_specific_info: ut=disk/scsi/scsd
hdisk0 blv=hd5


root@aix31: / # bootlist -m normal -o
hdisk0 blv=bos_hd5
hdisk0 blv=hd5

shutdown -Fr                   <--reboot
bootinfo -v                    <--verify which BLV the system booted from (bos_hd5 should be seen)
                               (the new fs' are now mounted and the old ones are renamed to bos_...)

if we need to go back to the original OS:

bootlist -m normal hdisk0 blv=hd5 hdisk0 blv=bos_hd5    <--set the bootlist to the original BLV first
shutdown -Fr

If you want to keep the upgraded OS (just without the bos_ prefixes):

0. Optionally, remove the original OS first: multibos -R    <--this removes the original LVs, which were not upgraded

1. multibos -sX                <--as we already have bos_ prefixes, this creates the LVs with the original names: hd2, hd9var, hd5...
2. shutdown -Fr                <--the BLV will be set to hd5, so the OS on the original-named LVs (upgraded) will be booted
3. multibos -R                 <--removes the bos_ entries from rootvg

The filesystems that multibos copies are /, /usr, /var, and /opt. So if you want to use multibos, check that rootvg has enough free space to duplicate those filesystems.

If you want to use multibos and then install some efixes, copy the efixes into a filesystem that multibos duplicates.
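
For example (a sketch: the ifix name and source path are illustrative; /usr/sys/inst.images lives in /usr, which multibos duplicates):

# cp /mnt/fixes/IZ12345.epkg.Z /usr/sys/inst.images/    # copy the efix into a duplicated filesystem
# multibos -S                                           # open a shell in the standby BOS
MULTIBOS> emgr -e /usr/sys/inst.images/IZ12345.epkg.Z   # install the efix in the standby instance
MULTIBOS> exit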

Changing bootlist if desired:

# bootlist -m normal hdisk0 blv=hd5 hdisk0 blv=bos_hd5

Reboot the system now with the "shutdown -Fr" command. After the reboot, confirm the TL level via "oslevel -r". Verify which BLV the system booted from with the "bootinfo -v" command.

clone and upgrade (alt_disk_copy)

To clone the running 5300-00 rootvg to hdisk3, then apply updates from /updates to bring the cloned rootvg to a 5300-01 level:

     alt_disk_copy -d hdisk3 -F 5300-01_AIX_ML -l /updates

The bootlist would then be set to boot from hdisk3 at the next reboot. To clone the running rootvg to hdisk3 and hdisk4, and execute update_all on all updates from /updates:

     alt_disk_copy -d "hdisk3 hdisk4" -b update_all -l /updates

PREREQUISITES

First things first, we need to make sure the current environment is ready to be copied. The two main things that I look for are:

1. Make sure the current environment doesn’t have any missing filesets.

root@AIX / > oslevel -s
6100-05-01-1016
root@AIX / > instfix -i | grep ML
...
    All filesets for 6100-05_AIX_ML were found.

Note: If a particular ML doesn't return positive (e.g. 6100-05_AIX_ML), use instfix -ciqk 6100-05_AIX_ML | grep ":-:" to see the filesets causing issues and rectify them.

2. Make sure all filesets have been committed. Now I'm not entirely sure if this is a requirement, but I prefer to work with as clean an environment as possible.

root@AIX / > lslpp -l | grep -i applied

Note: If you have software which is in an applied state, commit it.
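
For example, one common way to commit everything currently in the applied state:

root@AIX / > installp -cgX all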

Now that we're happy with the state of the environment, you need to verify that there is enough room in rootvg to make copies of the following logical volumes: / (hd4), /usr (hd2), /var (hd9var), /opt (hd10opt), and the boot logical volume (hd5).

Once we’ve satisfied the above requirement, we can get started with the multibos work. I’m not going to dissect each command, so I suggest reading the multibos man pages to know what each switch does. The only one that I’ll explain is -p, which does a preview of the multibos command executed.

MULTIBOS

1. Remove any previous standby multibos environments. The below command will return a FAILURE status if you don't have a standby multibos environment. If you're using multibos for the first time, this will be the case.

root@AIX / > multibos -RX
Initializing multibos methods ...
Initializing log /etc/multibos/logs/op.alog ...
Gathering system information ...
multibos: 0565-077 Unable to locate standby BOS.
 
Log file is /etc/multibos/logs/op.alog
Return Status: FAILURE

2. Create the standby BOS. New logical volumes and filesystems will be created, prefixed with bos_. This can take a while to complete.

This will preview the creation of the multibos environment and list all the logical volumes that will be copied.

root@AIX / > multibos -sXp
...
...
Log file is /etc/multibos/logs/op.alog
Return Status = SUCCESS

You can also view the log file using the alog command.

root@AIX / > alog -of /etc/multibos/logs/op.alog

If everything completed successfully, run the command again without the preview.

root@AIX / > multibos -sX

Once complete, you’ll notice that the new logical volumes will be prefixed with bos_

3. Once complete, you can drop into a multibos shell and check oslevel, it should be the same as the host OS.

The below command will mount all the required filesystems and drop you at a MULTIBOS> prompt.

root@AIX / > multibos -S
...
...
MULTIBOS> oslevel -s
6100-05-01-1016

To unmount all the filesystems and break out of the multibos environment:

MULTIBOS> exit

4. Apply the TL/ML to the multibos environment.

The below command will tell multibos to apply the updates from the specified location (which is the location that your new ML resides).

root@AIX / > multibos -Xac -l /home/kristijan/6100-06-04

Drop into the multibos shell again and check that the ML has been successfully applied.

root@AIX / > multibos -S
MULTIBOS> oslevel -s
6100-06-04-1112
MULTIBOS> instfix -i | grep ML
    	All filesets for 6100-00_AIX_ML were found.
    	All filesets for 6.1.0.0_AIX_ML were found.
    	All filesets for 6100-01_AIX_ML were found.
    	All filesets for 6100-02_AIX_ML were found.
    	All filesets for 6100-03_AIX_ML were found.
    	All filesets for 6100-04_AIX_ML were found.
    	All filesets for 6100-05_AIX_ML were found.
    	All filesets for 6100-06_AIX_ML were found.
MULTIBOS> exit

5. Verify that the bootlist now contains blv=bos_hd5. bos_hd5 is the location of the new boot logical volume. It will be at the top of the list when you run the bootlist command.

root@AIX / > bootlist -m normal -o
hdisk0 blv=bos_hd5 pathid=0
hdisk0 blv=hd5 pathid=0
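
Finally, reboot and verify, as in the cheat sheet earlier:

root@AIX / > shutdown -Fr

After the reboot, confirm the new level and that the system booted from bos_hd5:

root@AIX / > oslevel -s
root@AIX / > bootinfo -v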

Managing ifixes in a NIM SPOT resource

To install an ifix into a NIM SPOT resource, place the ifix in an lpp_source in the emgr/ppc subdirectory. For example:

# lsnim -a location lpp1
lpp1:
   location = /export/lpp_source/lpp1
# ls -l /export/lpp_source/lpp1/emgr/ppc
-rw-r--r--   root   system  IZ12345.epkg.Z

Then, install the ifix using the cust operation:

# nim -o cust -a filesets=E:IZ12345.epkg.Z -a lpp_source=lpp1 spot1

or, using smitty:

# smitty nim_inst_latest
* Installation Target                                 spot1
* LPP_SOURCE                                          lpp1
* Software to Install                                [E:IZ12345.epkg.Z]      +

The E: tells the geninstall program that the software is packaged as an ifix and can be found in the emgr/ppc directory of the lpp_source. You may also create an installp_bundle resource, if you need to install multiple ifixes at once. Each ifix should be listed on a separate line of the bundle file.
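
For example (a sketch: the bundle name, its location, and the second ifix are illustrative):

# cat /export/nim/ifixes.bnd
E:IZ12345.epkg.Z
E:IZ67890.epkg.Z
# nim -o define -t installp_bundle -a server=master -a location=/export/nim/ifixes.bnd ifix_bnd
# nim -o cust -a installp_bundle=ifix_bnd -a lpp_source=lpp1 spot1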

To deinstall the ifix:

# nim -o maint -a filesets=E:IZ12345 -a installp_flags=u spot1

Managing ifixes in an altinst_rootvg

To create a copy of the rootvg and install an ifix to the altinst_rootvg:

# alt_disk_copy -d hdisk# -l /ifix_dir -w E:IZ12345.epkg.Z

To install an ifix into an altinst_rootvg that has already been created, you first perform a wake-up operation on the altinst_rootvg:

# alt_rootvg_op -Wd hdisk#

Then, you can install the ifix:

# alt_rootvg_op -Cw IZ12345.epkg.Z -l /ifix_dir

It is extremely important to put the altinst_rootvg back to sleep after the ifix has been installed:

# alt_rootvg_op -S

Managing ifixes in a multibos standby instance

The multibos command requires a bundle file that contains a list of the ifixes to be installed in geninstall bundle format. For example:

# cat ifixes.bnd
E:IZ12345.epkg.Z

To create a new multibos standby instance and install an ifix at the same time:

# multibos -Xsl /ifix_dir -b ifixes.bnd

To install an ifix into a multibos standby instance that has already been created:

# multibos -Xcl /ifix_dir -b ifixes.bnd

To deinstall the ifix, use the multibos shell function to open a shell in the standby instance of the OS:

# multibos -XS

Then, deinstall the ifix and exit the shell:

MULTIBOS> emgr -r -L IZ12345
MULTIBOS> exit
#

Using NIM Alternate Disk Migration (NIMADM)

https://www-01.ibm.com/support/docview.wss?uid=isg3T1012571


What this document will cover:

· What is NIMADM
· Preparing for a NIMADM
· Create a copy of rootvg to a free disk (or disks) and simultaneously migrate it to a new version or release level of AIX
· Using a copy of rootvg, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX
· Using a NIM mksysb resource, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX
· Using a NIM mksysb resource, restore to a free disk (or disks) and simultaneously migrate to a new version or release level of AIX
· Waking up and putting to sleep the migrated disk
· Using a post-migration script with nimadm
· Logs used during the nimadm process and sample entries
· Debug techniques if nimadm fails

NIMADM stands for Network Install Manager Alternate Disk Migration

The nimadm command is a utility that allows the system administrator to do the following:

· Create a copy of rootvg to a free disk (or disks) and simultaneously migrate it to a new version or release level of AIX.

· Using a copy of rootvg, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX.

· Using a NIM mksysb resource, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX.

· Using a NIM mksysb resource, restore to a free disk (or disks) and simultaneously migrate to a new version or release level of AIX.

The nimadm command uses NIM resources to perform these functions.


Preparing for a NIMADM

There are a few requirements that must be met before attempting to use nimadm. I'll mention just some of these here.

· The NIM master must have the bos.alt_disk_install.rte fileset installed in its own rootvg and in the SPOT that will be used for the migration. Both need to be at the same level. It is not necessary to install the alternate disk utilities on the client.

· The lpp_source and SPOT NIM resources that have been selected for the migration MUST match the AIX level to which you are migrating.

· The NIM master (as always) should be at the same or higher AIX level than the level you are migrating to on the client.

· The target client must be registered with the NIM master as a standalone NIM client.
· You will need to have the connection working between the master and client by either using rsh or nimsh.

· The NIM master must be able to execute remote commands on the client using rsh. rsh will need to be working in order for nimadm to work.
· Verify with the following on the client:
· # lssrc -ls inetd
· exec, login, and shell need to be active
· If they are not all active, edit /etc/inetd.conf and make sure they are not commented out. If they have a # sign in front, remove the #, save the file and run refresh -s inetd, then verify with lssrc -ls inetd.

· Ensure the NIM client has a spare disk (not allocated to a volume group) large enough to contain a complete copy of its rootvg. If rootvg is mirrored, break the mirror and use one of the disks for the migration (see the sketch after this list).

· Ensure the client's NIM master has a volume group (for example, nimadmvg) with enough free space to cater for a complete copy of the client's rootvg. If more than one AIX migration is occurring for multiple NIM clients, make sure there is capacity for a copy of each client's rootvg.
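
A sketch of breaking a rootvg mirror to free a disk for the migration (disk names are illustrative):

# unmirrorvg rootvg hdisk1      # remove the mirror copies from hdisk1
# reducevg rootvg hdisk1        # remove the now-empty disk from rootvg
# bosboot -ad /dev/hdisk0       # rebuild the boot image on the remaining disk
# bootlist -m normal hdisk0     # make sure the bootlist references only the remaining disk
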
Create a copy of rootvg to a free disk (or disks) and simultaneously migrate it to a new version or release level of AIX

Creating a migrated copy of rootvg on another disk is probably the most common and straightforward use of nimadm. It dramatically reduces the amount of downtime compared to a normal NIM migration. As long as you have a free disk (or disks) large enough for a migrated copy of rootvg, and a NIM server at the same level or higher than the level you want to migrate to, this can be done. Once it completes, you just reboot from the migrated disk and you are back up at the migrated level. If you discover any problems at the new level, you just change the boot list back to the previous disk and reboot.

To perform a nimadm via SMIT do the following on the nim master:
# smitty nimadm
Perform NIM Alternate Disk Migration

Perform NIM Alternate Disk Migration

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]
* Target NIM Client [select your client]
* NIM LPP_SOURCE resource [select the lpp_source]
* NIM SPOT resource [select the spot]
* Target Disk(s) to install [select the disk(s) you want to migrate to]
DISK CACHE volume group name []

NIM IMAGE_DATA resource []
NIM BOSINST_DATA resource []
NIM EXCLUDE_FILES resource []
NIM INSTALLP_BUNDLE resource []
NIM PRE-MIGRATION SCRIPT resource []
NIM POST-MIGRATION SCRIPT resource []

Phase to execute [all]
NFS mounting options []
Set Client bootlist to alternate disk? yes
Reboot NIM Client when complete? no
Verbose output? no
Debug output? no

ACCEPT new license agreements? No <-- change this to yes

Nothing else is required but it is recommended to enter a volume group for disk caching in order to avoid NFS issues.

If you prefer to use command line, the command would be:

# nimadm -c client_hostname -l lpp_source -s spot -d target_disk(s)

In order to also include a volume group for disk caching you would use the following command:

# nimadm -c client_hostname -l lpp_source -s spot -j VG_name -d target_disk(s) -Y

Using a copy of rootvg, create a new NIM mksysb resource that has been migrated to a new version or release level of AIX

If you have a NIM client and you need a mksysb of it that can be deployed on another system requiring a higher release or version than the client currently runs, this will accomplish that.

There isn't currently a way to do a client-to-mksysb migration via SMIT; you will need to use the command line on the NIM master.

To accomplish a client-to-mksysb migration via the command line, you would do the following:

# nimadm -c client -O path_migrated_mksysb_file -s spot -l lpp_source -j VGname -Y -N new_nim_mksysb_name

To accomplish a mksysb-to-mksysb migration via the command line, you would do the following:

# nimadm -T current_mksysb_resource -O path_migrated_mksysb_file -s spot -l lpp_source -j VGname -Y -N new_nim_mksysb_name

Waking up and putting to sleep the migrated disk

There may be times that you want to wake up the migrated disk. This should be done from the nim server, not the client, using the following command:

# nimadm -W -c client -s spot -d target_disk

After waking it up, be sure to put it back to sleep before rebooting. That can be done with the following command:

# nimadm -S -c client -s spot

Using a post-migration script with nimadm

There may be times when you want to remove and/or install some other filesets after the migration completes. You can do that with a post-migration NIM script.

The nimadm utility can perform both pre and post migration tasks. This is accomplished by running NIM scripts either before or after a migration. The tool accepts the following flags for pre and post migration script resources:

-a PreMigrationScript     Specifies the pre-migration NIM script resource.
-z PostMigrationScript    Specifies the post-migration NIM script resource.

pre-migration

This script resource is run on the NIM master, but in the environment of the client's alt_inst file system that is mounted on the master (this is done using the chroot command). This script runs before the migration begins.

post-migration

This script resource is similar to the pre-migration script, but it is executed after the migration is complete.

I will give an example of a post-migration script only, although the configuration is the same for both.

In this example I will show how to uninstall and install a fileset.
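
A sketch of such a post-migration script (the fileset names are placeholders, not from the original example; nimadm runs the script chroot'ed into the client's alt_inst filesystem, so plain installp commands act on the migrated clone). Define it as a NIM script resource, as shown earlier for pre_scr_mig72, and pass it to nimadm with -z:

#!/usr/bin/ksh
# post_mig.sh - hypothetical post-migration script (runs chroot'ed in alt_inst)

# uninstall a fileset that is obsolete at the new level
installp -ug devices.sddpcm.71.rte

# install a fileset from images previously copied into the clone
installp -aXYgd /usr/sys/inst.images openssh.base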

Debug techniques if nimadm fails

If nimadm fails, it’s best to first look at the logs to determine the cause of failure. If there isn’t enough information in the logs to determine the problem it may be necessary to run nimadm in debug mode.

Debug mode can be enabled with the -D option on the command line, or by setting "Debug output?" to yes if you are using smitty nimadm.

Many nimadm problems can be resolved by using disk caching, which eliminates issues that arise from slow networks and/or NFS. Disk caching is enabled with the -j option, specifying the volume group on the NIM master to use for the nimadm. If you are using smitty nimadm, specify the volume group under DISK CACHE volume group name.

Since this is a migration you will want to verify that your system is in a consistent state before performing the nimadm. On the client, run the following commands:

# oslevel -s                    this will tell you what level the system is currently at
# oslevel -sq                   the highest level in the output should match the oslevel -s output from above
# lppchk -v                     this should return to the prompt with no output
# lsvg rootvg | grep SIZE       verify the size is at least 32 megabytes
# bosboot -ad /dev/ipldevice    verify the boot image is created and there are no errors

For additional checks to do prior to the nimadm, review the Preparing to Migrate document at

http://www-01.ibm.com/support/docview.wss?uid=isg3T1011431

If you have a pre-migration script resource defined on your NIM master, you can specify it to be run; if you do, it runs during phase 4 of the nimadm process.

If you get an error similar to the following:

0505-205 nimadm: The level of bos.alt_disk_install.rte installed in SPOT <spot> (0.0.0.0) does not match the NIM master's level

You will need to verify the level of bos.alt_disk_install.rte on the master and in the spot. To check the master:

# lslpp -l bos.alt_disk_install.rte

To check the spot:

# nim -o showres <spot> | grep bos.alt_disk_install.rte

If the levels aren't the same (or if the spot doesn't have the fileset), you can install/update it with:

smitty nim_inst_all

Then select the spot and the lpp_source that contains the fileset at the same level as on the master, and for "Software to Install" enter bos.alt_disk_install.rte.

Although there are many different kinds of errors possible with nimadm, having the correct level of bos.alt_disk_install.rte in the spot, doing the pre-migration checks, and using the cache option will eliminate most of them. For anything else, a debug output will probably be needed for further analysis.
