Saturday, November 13, 2010

Live Upgrade - Basic

Live Upgrade is a method of upgrading a Solaris box while the system is operational. It works by creating a parallel boot environment that resembles the current one and applying the upgrade to the newly created environment. All of this happens while the old environment remains fully functional.

Once the upgrade is done on the new environment, the system can be switched to it with just a reboot, reducing the downtime for an upgrade to the duration of a single reboot.

It is also possible to perform a flash installation on the alternate environment, which is similar to a fresh installation, even while the system is active.

Another advantage is that if the new environment has a problem booting, we can easily fall back to the old environment, where the machine was known to be working.

Live Upgrade process:
1. Create a boot environment
2. Upgrade an inactive boot environment
3. Activate the inactive boot environment (the activation takes effect on the next reboot)
4. Reboot the machine to boot from the newly created and activated BE
5. (Optional) Fallback to the original boot environment if issues with new BE.

Commands involved in performing Live Upgrade:
  • luactivate - Activate an inactive boot environment.
  • lucancel - Cancel a scheduled copy or create job.
  • lucompare - Compare an active boot environment with an inactive boot environment.
  • lumake - Recopy file systems to update an inactive boot environment.
  • lucreate - Create a boot environment.
  • lucurr - Name the active boot environment.
  • ludelete - Delete a boot environment.
  • ludesc - Add a description to a boot environment name.
  • lufslist - List critical file systems for each boot environment.
  • lumount - Enable a mount of all of the file systems in a boot environment. This command enables you to modify the files in a boot environment while that boot environment is inactive.
  • lurename - Rename a boot environment.
  • lustatus - List status of all boot environments.
  • luumount - Unmount all of the file systems in a boot environment that were previously mounted with lumount (see the short sketch after this list).
  • luupgrade - Upgrade an OS or install a flash archive on an inactive boot environment.
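For instance, to edit a file inside an inactive BE, it can be mounted, modified and unmounted like this (the BE name, mount point and file are only illustrative):

# lumount testBE /mnt
# vi /mnt/etc/system
# luumount testBE
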
Before using Live Upgrade, three packages are required: SUNWlucfg, SUNWlur, and SUNWluu. These should be installed in the order specified.
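
These packages ship with the release you are upgrading to, so a typical installation from the target media would look like the following (the media path is an assumption):

# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu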

# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names

If the error above is displayed when you run the lustatus command, it indicates that a fresh installation was performed and that Solaris Live Upgrade has not been used yet. Before any BEs can appear in the lustatus output, a new BE must first be created on the system.
 
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol10-u6                   yes      no     no        yes    -
sol10-u8                   yes      yes    yes       no     -
#

This output shows that two BEs are configured and one of them is currently active.

Normally when a Live Upgrade is performed, the OS-critical filesystems (/, /var, /opt, /usr) are copied onto the new BE. While creating new environments, these filesystems can be either split or merged.

For example, if /var and /opt are not separate filesystems in the current environment, we could split them out into separate filesystems while creating the new environment, or vice versa, as sketched below.
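
As a rough sketch, splitting /var onto its own slice in the new BE could be done with multiple -m options to lucreate (all device names here are illustrative):

# lucreate -c first_disk -n second_disk \
    -m /:/dev/dsk/c0t4d0s0:ufs \
    -m /var:/dev/dsk/c0t4d0s1:ufs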
 
Setting up New Environment:

For setting up an alternate BE, we need sufficient space: the alt-BE should have enough room to hold a copy of the existing BE plus the updates. Reformatting a disk might be necessary.

Prepare the disk by creating the necessary slices, mirrors, or zpools.
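
For example, if the alternate BE is to live in a new ZFS pool, the pool can be created ahead of time (pool and disk names are purely illustrative):

# zpool create rpool2 c0t1d0s0
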
Create the BE

# lucreate -c sol10-u6 -n sol10-u8 -p rpool

# lucreate -c first_disk -m /:/dev/dsk/c0t4d0s0:ufs -n second_disk


It is also possible to detach an existing mirror and use the now-unconfigured submirror as the alt-BE.
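
A minimal sketch of that approach with SVM, assuming a root mirror d10 whose submirror d12 sits on c0t1d0s0 (all metadevice and slice names are illustrative):

# metadetach d10 d12          (detach submirror d12 from mirror d10)
# metaclear d12               (release the slice from SVM control)
# lucreate -c first_disk -n second_disk -m /:/dev/dsk/c0t1d0s0:ufs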

Applying the upgrades:

Once the new BE is created, the upgrade is applied to it.

# luupgrade -n c0t15d0s0 -u -s /net/ins-svr/export/Solaris_10 \
combined.solaris_wos


All upgrades/patches are done to this alternate BE.
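
Patches, too, can be applied to the inactive BE with the -t option of luupgrade; a hedged example, where the BE name, patch directory and patch ID are illustrative:

# luupgrade -t -n sol10-u8 -s /var/tmp/patches 119254-75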

Activating the alt-BE:

Once the upgrades are done, we can make this BE the one the system boots from at the next reboot. To achieve that, we need to activate the alt-BE.

Before luactivate:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10s_u9wos_14a             yes      yes    yes       no     -
testBE                     yes      no     no        yes    -
# luactivate testBE
A Live Upgrade Sync operation will be performed on startup of boot environment <testBE>.


**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

     At the PROM monitor (ok prompt):
     For boot to Solaris CD:  boot cdrom -s
     For boot to network:     boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/s10s_u9wos_14a
     zfs set mountpoint=<mountpointName> rpool/ROOT/s10s_u9wos_14a
     zfs mount rpool/ROOT/s10s_u9wos_14a

4. Run the <luactivate> utility without any arguments from the Parent boot
environment root slice, as shown below:

     /sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Activation of boot environment <testBE> successful.
#
After activation, observe the difference:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10s_u9wos_14a             yes      yes    no        no     -
testBE                     yes      no     yes       no     -

Now perform the reboot to switch BEs; the system is thus upgraded with the downtime of just a reboot.
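
As the luactivate output above insists, use init or shutdown rather than the reboot, halt or uadmin commands, for example:

# init 6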

This is the core of how Live Upgrade works, but many other important details need to be taken care of depending on the type of filesystems used (SVM, VxFS, ZFS, and so on). This is just an introduction.

Saturday, November 6, 2010

To turn off password aging


(Server:/)# for i in server1 server2 server3 server4 server5
> do
> ssh $i "passwd -x -1 schweitzer"
> done

passwd: password information changed for schweitzer
passwd: password information changed for schweitzer
passwd: password information changed for schweitzer
passwd: password information changed for schweitzer
passwd: password information changed for schweitzer

Extracted from man page of passwd:

     -x max              Sets maximum field  for  name.  The  max
                         field  contains  the number of days that
                         the password  is  valid  for  name.  The
                         aging for name is turned off immediately
                         if max is set to -1.
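
To re-enable aging later, a positive maximum age can be set the same way (the 90-day value is only an example):

(Server:/)# passwd -x 90 schweitzer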

Reset password while system uses both local & ldap accounts

On a machine where user authentication depends on both the local /etc/passwd file and LDAP, resetting a local password should be done as shown below.

(Server1:/)# passwd petuser
New Password:
Re-enter new Password:
Permission denied

(Server1:/)# id
uid=0(root) gid=0(root)
(Server1:/)#

This happens because the user account authentication involves both ldap and files.

(Server1:/)# ps -ef | grep ldap
    root  2925  2430   0   Oct 27 ?           0:47 /usr/lib/ldap/ldap_cachemgr
    root 12024 22230   0 13:57:15 pts/1       0:00 grep ldap
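
The name service ordering can also be confirmed in /etc/nsswitch.conf; on such a system the passwd entry would typically look like this (output shown is illustrative):

(Server1:/)# grep '^passwd' /etc/nsswitch.conf
passwd:     files ldap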

(Server1:/)# passwd -help
usage:
        passwd [-r files | -r nis | -r nisplus | -r ldap] [name]
        passwd [-r files] [-egh] [name]
        passwd [-r files] -sa
        passwd [-r files] -s [name]
        passwd [-r files] [-d|-l|-N|-u] [-f] [-n min] [-w warn] [-x max] name
        passwd -r nis [-eg] [name]
        passwd -r nisplus [-egh] [-D domainname] [name]
        passwd -r nisplus -sa
        passwd -r nisplus [-D domainname] -s [name]
        passwd -r nisplus [-D domainname] [-l|-N|-u] [-f] [-n min] [-w warn]
                [-x max] name
        passwd -r ldap [-egh] [name]
        passwd -r ldap -sa
        passwd -r ldap -s [name]
        passwd -r ldap [-l|-N|-u] [-f] [-n min] [-w warn] [-x max] name
Invalid combination of options

So use the -r option with the passwd command to reset the local password.

(Server1:/)# passwd -r files petuser
New Password:
Re-enter new Password:
passwd: password successfully changed for petuser
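
Similarly, if the LDAP password itself has to be changed, the -r switch can select the ldap repository instead (assuming the LDAP client permits password updates):

(Server1:/)# passwd -r ldap petuser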