Wednesday, September 18, 2013

Recovering /etc/path_to_inst file

Format command fails

SolHost10:/lue# format
Cannot set directory to /dev/rdsk - No such file or directory
SolHost10:/lue#


The /etc/path_to_inst file is lost

No device links exist in /dev/rdsk and /dev/dsk


SolHost10:/lue# ls -l /dev/rdsk | wc -l
/dev/rdsk: No such file or directory
0
SolHost10:/lue# ls -l /dev/dsk | wc -l
/dev/dsk: No such file or directory
0
SolHost10:/lue#


Restore a known-good copy of this host's /etc/path_to_inst from a recent backup or Explorer archive.

SolHost10:/root# ls -l /var/tmp/path_to_inst
-r--r--r--   1 soluse dvel     260809 Oct 21 13:05 /var/tmp/path_to_inst
SolHost10:/root#


cp /var/tmp/path_to_inst /etc/path_to_inst

chown root:root /etc/path_to_inst

chmod 444 /etc/path_to_inst

ls -l /etc/path_to_inst

more /etc/path_to_inst


devfsadm -Cv

This rebuilds the device tree; the -C option also removes stale /dev links.
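
Once devfsadm completes, the earlier failures can be re-checked; /dev/rdsk should be populated and format should list the disks again:

ls /dev/rdsk | wc -l
format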

Thursday, July 11, 2013

Importing a zpool into a ldom

Identify, on the source server, the disk on which to create the zpool.

lrwxrwxrwx   1 root     root          67 Aug 22 12:47 /dev/rdsk/c6t6072482462EDF0001432HH90000203Bd0s2 -> ../../devices/scsi_vhci/ssd@g60760e80164ef90000014ef90000203b:c,raw


Create the zpool and the zfs filesystem

(MySource:/)# zpool create MyZpool /dev/rdsk/c6t6072482462EDF0001432HH90000203Bd0
(MySource:/)#
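
The zfs filesystem itself is not shown being created; a step along these lines would have followed (a sketch, assuming the mountpoint was set explicitly, since df below shows /MyFilesystem rather than the default /MyZpool/MyFilesystem):

zfs create -o mountpoint=/MyFilesystem MyZpool/MyFilesystem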

(MySource:/)# df -h /MyFilesystem
Filesystem             size   used  avail capacity  Mounted on
MyZpool/MyFilesystem
                        98G    62G    36G    63%    /MyFilesystem
(MySource:/)#


(MySource:/)# zpool get all MyZpool
NAME          PROPERTY       VALUE               SOURCE
MyZpool  size           99.5G               -
MyZpool  capacity       61%                 -
MyZpool  altroot        -                   default
MyZpool  health         ONLINE              -
MyZpool  guid           917911360396256368  -
MyZpool  version        32                  default
MyZpool  bootfs         -                   default
MyZpool  delegation     on                  default
MyZpool  autoreplace    off                 default
MyZpool  cachefile      -                   default
MyZpool  failmode       wait                default
MyZpool  listsnapshots  on                  default
MyZpool  autoexpand     off                 default
MyZpool  free           38.0G               -
MyZpool  allocated      61.5G               -
MyZpool  readonly       off                 -
(MySource:/)#


Export the zpool

(MySource:/)# zpool export MyZpool
(MySource:/)#


On the physical host of the LDOMs, present this disk to the host and then map it to the virtual disk service.

Physical host - MyTargetPhy

lrwxrwxrwx   1 root     root          67 Aug 22 16:39 /dev/rdsk/c12t6072482462EDF0001432HH90000203Bd0s2 -> ../../devices/scsi_vhci/ssd@g60760e80164ef90000014ef90000203b:c,raw


(MyTargetPhy:/root)# zpool import
  pool: MyZpool
    id: 917911360396256368
 state: UNAVAIL
status: The pool is formatted using an incompatible version.
action: The pool cannot be imported.  Access the pool on a system running newer
        software, or recreate the pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-A5
config:

        MyZpool                              UNAVAIL  newer version
          c12t6072482462EDF0001432HH90000203Bd0  ONLINE




status: The pool is formatted using an incompatible version. 

=> The zpool was created on a machine running a newer ZFS version, and the target physical host runs an older OS release that cannot import it. However, the LDOM is at the same OS level as the source, so the zpool can be imported inside the LDOM.



Source:

(MySource:/)# cat /etc/release
                   Oracle Solaris 10 1/13 s10s_u11wos_24a SPARC
  Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
                            Assembled 17 January 2013
(MySource:/)#


Destination - Physical 

(MyTargetPhy:/root)# cat /etc/release
                   Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
  Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
                            Assembled 23 August 2011
(MyTargetPhy:/root)#


Destination LDOM

(MyLdom:/root)# cat /etc/release
                   Oracle Solaris 10 1/13 s10s_u11wos_24a SPARC
  Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
                            Assembled 17 January 2013
(MyLdom:/root)#




Map the disk to the ldom using the virtual disk service


(MyTargetPhy:/root)# ldm add-vdsdev /dev/rdsk/c12t6072482462EDF0001432HH90000203Bd0s2 ZP-MyLdom-temp@primary-vds0
(MyTargetPhy:/root)#
(MyTargetPhy:/root)# ldm add-vdisk ZP-MyLdom-temp ZP-MyLdom-temp@primary-vds0 MyLdom
(MyTargetPhy:/root)#

(MyTargetPhy:/root)# ldm list -l primary | grep MyLdom
                     MyLdom-boot                                    /dev/rdsk/c12t6072482462EDF0001432HH900004530d0s2
                     ZP-MyLdom-temp                                 /dev/rdsk/c12t6072482462EDF0001432HH90000203Bd0s2
(MyTargetPhy:/root)#


Check if the zpool is visible in the ldom


(MyLdom:/root)# zpool import
  pool: MyZpool
    id: 917911360396256368
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        MyZpool  ONLINE
          c0d5      ONLINE
(MyLdom:/root)#


Import the zpool

(MyLdom:/root)# zpool import MyZpool


(MyLdom:/root)# df -h /MyFilesystem
Filesystem             size   used  avail capacity  Mounted on
MyZpool/MyFilesystem
                        98G    62G    36G    63%    /MyFilesystem
(MyLdom:/root)#



Thursday, June 27, 2013

NFS enable root authorization for client



client

root@HPHost:/hroot# grep /NFSShare /etc/mnttab
SolHost:/NFSShare /NFSShare nfs rsize=32768,wsize=32768,NFSv3,dev=4 0 0 1374220408
root@HPHost:/hroot#
root@HPHost:/hroot#
root@HPHost:/hroot# ls -ld /NFSShare
drwxrwxrwx  16 100028       grplos          16 Jul 22 15:52 /NFSShare
root@HPHost:/hroot# 

root@HPHost:/NFSShare/CS4_SPM# touch test3


-rw-r-----   1 nobody     nogroup          0 Jul 22 16:07 test3

The file is owned by nobody:nogroup because, by default, the NFS server maps remote root requests to the anonymous user (root squashing).



Server

(SolHost:/root)# dfshares
RESOURCE                                  SERVER ACCESS    TRANSPORT
  SolHost:/NFSShare               SolHost  -         -
(SolHost:/root)#


Enable root access for all hosts (anon=0 maps remote root requests to uid 0)

(SolHost:/root)# share -F nfs -o rw,anon=0 /NFSShare


(SolHost:/root)# exportfs -v
share -F nfs
-               /NFSShare   rw,anon=0   ""
(SolHost:/root)#
(SolHost:/root)#
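
Note that a share issued from the command line does not survive a reboot; to make it permanent, the same entry can be added to /etc/dfs/dfstab (a sketch):

share -F nfs -o rw,anon=0 /NFSShare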


client

root@HPHost:/NFSShare/CS4_SPM# touch test7
root@HPHost:/NFSShare/CS4_SPM# ls -ltr test7
-rw-r-----   1 root       sys              0 Jul 22 16:17 test7
root@HPHost:/NFSShare/CS4_SPM#

Thursday, May 16, 2013

VxVM - Fixing plex in DISABLED IOFAIL state


Filesystem is unmountable because a plex is in DISABLED IOFAIL state:


The filesystem fails to mount 


(HostSol10:/root)# mount -F vxfs /dev/vx/dsk/DGDB1/VOLFSARC /DGDB1
UX:vxfs mount: ERROR: V-3-20003: Cannot open /dev/vx/dsk/DGDB1/VOLFSARC: No such device or address
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk layout version
(HostSol10:/root)#


The volume is shown as in disabled state

(HostSol10:/root)# vxprint -g DGDB1 -v
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
v  VOLFSARC fsgen      DISABLED 9013559296 -      ACTIVE   -       -
(HostSol10:/root)#

Starting the volume also fails

(HostSol10:/root)# vxvol -g DGDB1 startall
VxVM vxvol ERROR V-5-1-1198 Volume VOLFSARC has no CLEAN or non-volatile ACTIVE plexes
(HostSol10:/root)#

Detailed status of the volume shows DISABLED IOFAIL state

(HostSol10:/root)# vxprint -htg DGDB1 -p | more
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

pl VOLFSARC-01 VOLFSARC DISABLED IOFAIL 9013559296 CONCAT -    RW
sd DISK-222A-01 VOLFSARC-01 DISK-222A 0 209643136 0 emc0_222a ENA
sd DISK-222B-01 VOLFSARC-01 DISK-222B 0 209643136 209643136 emc0_222b ENA
sd DISK-222C-01 VOLFSARC-01 DISK-222C 0 209643136 419286272 emc0_222c ENA
sd DISK-222D-01 VOLFSARC-01 DISK-222D 0 209643136 628929408 emc0_222d ENA
sd DISK-222E-01 VOLFSARC-01 DISK-222E 0 209643136 838572544 emc0_222e ENA
sd DISK-222F-01 VOLFSARC-01 DISK-222F 0 209643136 1048215680 emc0_222f ENA
sd DISK-2228-01 VOLFSARC-01 DISK-2228 0 209643136 1257858816 emc0_2228 ENA
sd DISK-2229-01 VOLFSARC-01 DISK-2229 0 209643136 1467501952 emc0_2229 ENA
(HostSol10:/root)#

Once everything has been verified and no errors are seen on the disks or SAN, mark the plex clean

(HostSol10:/root)# vxmend -g DGDB1 fix clean VOLFSARC-01

(HostSol10:/root)# vxprint -htg DGDB1 -v | grep '^v '
v  VOLFSARC -          DISABLED ACTIVE   9013559296 SELECT  -        fsgen
(HostSol10:/root)#
(HostSol10:/root)#

Some disks are reporting a failing status. The failing status is different from the failed status reported by VxVM: a disk may show failing because of a transient error rather than a truly failing disk. In that case, you can simply clear the status. If the failing status keeps reappearing on the same disk, it may be a sign of a genuine hardware problem with the disk, or with the SAN connectivity.

(HostSol10:/root)# vxdisk list | grep -i fai
emc0_24ab    auto:cdsdisk    DISK-DB3-24AB  DGDB3   online thinrclm failing
emc0_24a3    auto:cdsdisk    DISK-DB3-24A3  DGDB3   online thinrclm failing
emc0_38c0    auto:cdsdisk    DISK-DB4-38c0  DGDB4   online thinrclm failing
emc0_39a0    auto:cdsdisk    DISK-DB6-39A0  DGDB6   online thinrclm failing
(HostSol10:/root)#
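
A transient failing flag can be cleared per disk with vxedit (a sketch, using the first disk listed above):

vxedit -g DGDB3 set failing=off DISK-DB3-24AB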


The volume can now be forcibly started and the FS mounted.

(HostSol10:/root)#
(HostSol10:/root)# vxvol -g DGDB1 -f start VOLFSARC
(HostSol10:/root)#
(HostSol10:/root)# vxprint -g DGDB1 -v
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
v  VOLFSARC fsgen      ENABLED  9013559296 -      ACTIVE   -       -
(HostSol10:/root)#
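
With the volume back in ENABLED/ACTIVE state, the mount that failed at the start now succeeds:

mount -F vxfs /dev/vx/dsk/DGDB1/VOLFSARC /DGDB1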

Saturday, May 4, 2013

VxVM - vxconfigd debug mode operation

VERITAS Volume Manager (tm) provides the option of logging console output to a file. The Volume Manager configuration daemon, vxconfigd, controls whether such logging is turned on or off; it is disabled by default. If enabled, the default log file is vxconfigd.log, and its location varies by operating system. Here the log is directed to an explicit file:

SolHost10:/root# vxconfigd -x 6 -k -x log -x logfile=/tmp/vxconfigd.out
VxVM vxconfigd DEBUG V-5-1-24577

VOLD STARTUP pid=54427 debug-level=6 logfile=/tmp/vxconfigd.out

VxVM vxconfigd DEBUG V-5-1-681 IOCTL GET_VOLINFO: return 0(0x0)
VxVM vxconfigd DEBUG V-5-1-23909 Kernel version 5.1_SP1
VxVM vxconfigd DEBUG V-5-1-681 IOCTL KTRANS_ABORT: failed: errno=22 (Invalid argument)
VxVM vxconfigd DEBUG V-5-1-681 IOCTL GET_KMEM id=0 size=84: return 0(0x0)
        Results: got 84 bytes
VxVM vxconfigd DEBUG V-5-1-681 IOCTL GET_KMEM id=2 size=6384: return 0(0x0)
        Results: got 6384 bytes
VxVM vxconfigd DEBUG V-5-1-681 IOCTL GET_KMEM id=1 size=0: return 0(0x0)
        Results: got 0 bytes
VxVM vxconfigd DEBUG V-5-1-681 IOCTL SET_KMEM id=1 size=0: return 0(0x0)
VxVM vxconfigd DEBUG V-5-1-681 IOCTL SET_KMEM id=0 size=84: return 0(0x0)
VxVM vxconfigd DEBUG V-5-1-5657 mode_set: oldmode=none newmode=enabled
VxVM vxconfigd DEBUG V-5-1-5656 mode_set: locating system disk devices
VxVM vxconfigd DEBUG V-5-1-681 IOCTL GET_DISKS rids: 0.17: failed: errno=2 (No such file or directory)
VxVM vxconfigd DEBUG V-5-1-5477 find_devices_in_system: locating system disk devices
VxVM vxconfigd DEBUG V-5-1-16309 ddl_find_devices_in_system: Thread pool initialization succeed
VxVM vxconfigd DEBUG V-5-1-16366 devintf_find_soldevices: Using libdevinfo for scanning device tree
VxVM vxconfigd DEBUG V-5-1-14578 ctlr_list_insert: Creating entry for c-1, state=V
VxVM vxconfigd DEBUG V-5-1-5294 ctlr_list_insert: check for c-1
VxVM vxconfigd DEBUG V-5-1-14579 ctlr_list_insert: entry already present for c-1, state=V
VxVM vxconfigd DEBUG V-5-1-5294 ctlr_list_insert: check for c-1
VxVM vxconfigd DEBUG V-5-1-14579 ctlr_list_insert: entry already present for c-1, state=V
VxVM vxconfigd DEBUG V-5-1-21563 ddl_add_hba: Added hba c1
VxVM vxconfigd DEBUG V-5-1-21567 ddl_add_port: Added port c1_p0 under hba c1
VxVM vxconfigd DEBUG V-5-1-21569 ddl_add_target: Added target c1_p0_t0 under port c1_p0
.....
...
VxVM vxconfigd DEBUG V-5-1-14886 ddl_vendor_info: nvlist[3] is ANAME=PILLAR-AXIOM
VxVM vxconfigd DEBUG V-5-1-14885 ddl_vendor_info: name = ANAME values = 1
VxVM vxconfigd DEBUG V-5-1-14884 ddl_vendor_info: library: libvxpillaraxiom.so name: ANAME value[0]: PILLAR-AXIOM

VxVM vxconfigd DEBUG V-5-1-14886 ddl_vendor_info: nvlist[4] is ASL_VERSION=vm-5.1.100-rev-1
VxVM vxconfigd DEBUG V-5-1-14885 ddl_vendor_info: name = ASL_VERSION values = 1
VxVM vxconfigd DEBUG V-5-1-14884 ddl_vendor_info: library: libvxpillaraxiom.so name: ASL_VERSION value[0]: vm-5.1.100-rev-1

VxVM vxconfigd DEBUG V-5-1-14882 ddl_vendor_info exits with success for library = libvxpillaraxiom.so
VxVM vxconfigd DEBUG V-5-1-0 Check ASL - libvxpp.so
VxVM vxconfigd DEBUG V-5-1-14563 checkasl: ASL Key file - /etc/vx/aslkey.d/libvxpp.key
VxVM vxconfigd DEBUG V-5-1-14880 ddl_vendor_info entered for library = libvxpp.so
VxVM vxconfigd ERROR V-5-1-0 Segmentation violation - core dumped
SolHost10:/root#

The higher the debug level (0-9), the more verbose the output.
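
Once debugging is complete, vxconfigd can be restarted without the debug options to return to normal operation (a sketch):

vxconfigd -k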

Thursday, May 2, 2013

Solaris - Booting into a zfs root from ok prompt

If the root filesystem is on ZFS, the following process can be used to boot into a specific Boot Environment (BE).

In our case we have two BEs, but only one is visible:

{0} ok boot -L
Boot device: /pci@400/pci@0/pci@8/scsi@0/disk@0:a  File and args: -L
1 sol10u8
Select environment to boot: [ 1 - 1 ]: 1

To boot the selected entry, invoke:
boot [] -Z rpool/ROOT/sol10u8


Program terminated

However, we can still invoke it from the ok prompt if we know the ZFS root filesystem name.
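
If the dataset name is not known, it can be listed beforehand from any booted BE (a sketch; root BEs live under rpool/ROOT):

zfs list -r rpool/ROOT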


{0} ok boot -Z rpool/ROOT/sol10u9


T5240, No Keyboard
Copyright (c) 1998, 2012, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.6.b, 130848 MB memory available, Serial #94648388.
Ethernet address 0:21:28:a4:38:44, Host ID: 85a43844.



Boot device: /pci@400/pci@0/pci@8/scsi@0/disk@0:a  File and args: -Z rpool/ROOT/sol10-u9
SunOS Release 5.10 Version Generic_147440-11 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
WARNING: /scsi_vhci/ssd@g60060e80153269000001326900002023 (ssd16):

        Corrupt label; wrong magic number

Thursday, March 28, 2013

Unix Move a process to background and bring it back to foreground

It is possible to move a process to the background so that we get the prompt back for other activities, and then bring the backgrounded process back to the foreground.



Use CTRL+Z to suspend a process (the shell sends SIGTSTP)

(MySolaris10:/var/adm)#
(MySolaris10:/var/adm)# cp -rp lastlog lastlog.old
^Z
[1]+  Stopped                 cp -rp lastlog lastlog.old
(MySolaris10:/var/adm)#
(MySolaris10:/var/adm)#

Use the bg command to resume the stopped job in the background

(MySolaris10:/var/adm)#
(MySolaris10:/var/adm)# bg
[1]+ cp -rp lastlog lastlog.old &
(MySolaris10:/var/adm)#

Once the command completes, the shell reports the job's status

(MySolaris10:/var/adm)#
[1]+  Done                    cp -rp lastlog lastlog.old
(MySolaris10:/var/adm)#
(MySolaris10:/var/adm)#

Using fg brings the job back to the foreground.
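
A sketch, using the job number shown in brackets above (plain fg picks the current job):

(MySolaris10:/var/adm)# fg %1
cp -rp lastlog lastlog.old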

Tuesday, March 26, 2013

diff - /usr/sbin/ps vs /usr/ucb/ps

Difference between the /usr/sbin/ps and /usr/ucb/ps commands


The command equivalent to "/usr/ucb/ps -axu" is "ps -ef".

/usr/ucb/ps is BSD 
/usr/sbin/ps is SVR4 


/usr/ucb/ps -www prints output as wide as needed; this overcomes the 80-column truncation seen with the normal ps -ef command.

To see which executable is being run by a process, "/usr/ucb/ps auxwww" can be used. It shows the exact command line that invoked each process you see via "ps".
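
For example (a sketch; the grep pattern is a placeholder):

/usr/ucb/ps auxwww | grep <process-name>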

Wednesday, February 20, 2013

modinfo


solaris1:/root# modinfo
 Id Loadaddr   Size Info Rev Module Name
  0  1000000 1f0bb8   -   0  unix ()
  1  10af380 23c6b0   -   0  genunix ()
  2  1294a90   1620   -   0  platmod ()
  3  1295f00   d5c8   -   0  FJSV,SPARC64-VII ()
  5  129e000   4be8   1   1  specfs (filesystem for specfs)
  6  12a29e8   38a8   3   1  fifofs (filesystem for fifo)
  7 7afa0000  18750 218   1  dtrace (Dynamic Tracing)
  8  12a60c8   4248  16   1  devfs (devices filesystem 1.16)
  9  12a9f78  1c0a8   5   1  procfs (filesystem for proc)
 12  12c7248   39f8   1   1  TS (time sharing sched class)
 13  12ca460    8dc   -   1  TS_DPTBL (Time sharing dispatch table)
 14  12ca4f0  384a0   2   1  ufs (filesystem for ufs)
 15  1300138    21c   -   1  fssnap_if (File System Snapshot Interface)
 16  13002b0   1d28   1   1  rootnex (sun4 root nexus 1.15)
 17  1301b20    1bc  57   1  options (options driver)
..............


solaris1:/root# modinfo -c
 Id    Loadcnt Module Name                            State
  0          1 unix                             LOADED/INSTALLED
  1          1 genunix                          LOADED/INSTALLED
  2          1 platmod                          LOADED/INSTALLED
  3          1 FJSV,SPARC64-VII                 LOADED/INSTALLED
  4          0 cl_bootstrap                     UNLOADED/UNINSTALLED
  5          1 specfs                           LOADED/INSTALLED
  6          1 fifofs                           LOADED/INSTALLED
  7          1 dtrace                           LOADED/INSTALLED
  8          1 devfs                            LOADED/INSTALLED
  9          1 procfs                           LOADED/INSTALLED
 10          1 swapgeneric                      UNLOADED/UNINSTALLED
 11          0 lbl_edition                      UNLOADED/UNINSTALLED
 12          1 TS                               LOADED/INSTALLED
..............


modload - load a kernel module (filename)
modunload - unload a module (using module_id)
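
A usage sketch (the ufs module Id matches the modinfo listing above; the path is illustrative):

modload /kernel/fs/ufs       # load a module by filename
modinfo -c | grep ufs        # confirm state and note the Id
modunload -i 14              # unload using the Id from modinfo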

Sunday, February 10, 2013

Solaris x86 - Console GUI and CLI



On Solaris x86 servers, the console can be launched from both the service processor CLI and the web GUI.

start /SP/console

or

Web GUI -> Remote Control -> Launch Remote Console

Both consoles can show the POST and grub load process. But once the OS is loaded, after the OS selection screen, one of the screens can go black depending on the eeprom setting.

eg)

MySolaris:/root# prtdiag -v | head -1
System Configuration: Sun Microsystems SUN FIRE X4250
MySolaris:/root#
MySolaris:/root# eeprom | grep console
console=text
MySolaris:/root#


If console = 'ttya' then the serial console (/SP/console) will be active
If console = 'text' then the VGA port (i.e., the redirected remote console) will be active.

If you change the eeprom console setting, an OS reboot is needed for it to take effect.
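
For example, to move the active console to the serial port (a sketch; it takes effect only after the reboot):

eeprom console=ttya
reboot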

Tuesday, January 29, 2013

Resetting SC from the machine - v490


The Sun Fire V490's RSC can be reset from the running host


(MySolaris:/root)# cd /usr/platform/`uname  -i`/
(MySolaris:/usr/platform/SUNW,Sun-Fire-V490)#


(MySolaris:/usr/platform/SUNW,Sun-Fire-V490)# cd rsc/
(MySolaris:/usr/platform/SUNW,Sun-Fire-V490/rsc)#


(MySolaris:/usr/platform/SUNW,Sun-Fire-V490/rsc)# ls
rsc-config      rsc-initscript  rscadm
(MySolaris:/usr/platform/SUNW,Sun-Fire-V490/rsc)#


(MySolaris:/usr/platform/SUNW,Sun-Fire-V490/rsc)# ./rscadm resetrsc
Are you sure you want to reboot RSC (y/n)?  y
(MySolaris:/usr/platform/SUNW,Sun-Fire-V490/rsc)#


(MySolaris:/usr/platform/SUNW,Sun-Fire-V490/rsc)# uname -a
SunOS MySolaris 5.10 Generic_147440-11 sun4u sparc SUNW,Sun-Fire-V490
(MySolaris:/usr/platform/SUNW,Sun-Fire-V490/rsc)#
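
To confirm the RSC came back after the reset, it can be queried again (a sketch; assuming the date subcommand of rscadm):

(MySolaris:/usr/platform/SUNW,Sun-Fire-V490/rsc)# ./rscadm date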

Sunday, January 27, 2013

Login delays



su - <nis user> takes a long time (more than 20 seconds)


Some background on the problem:

Is the slowness observed with su only, or also with telnet and ssh?

Answer: telnet and ssh are slow as well. Basically, "su - " takes a long time for every user.

When the slowness is observed, is it just before you get logged in, or even after that when running commands?

Answer: Only the login is very slow. Once logged in, everything looks normal.


(MySolaris:/)# time su - schweitz -c "hostname"
Sun Microsystems Inc. SunOS 5.10 Generic January 2005

#############################################################
# This server is using NIS 
#############################################################
MySolaris

real 0m20.180s
user 0m0.040s
sys 0m0.070s
(MySolaris:/)#


First test:

Difference in time between "su schweitz" and "su - schweitz".

With "su - username" the HOME directory has to be mounted and the shell profiles (system and user specific) get executed. (sh,ksh: /etc/profile, $HOME/.profile; for C-shell ist equivalent). May be those profiles contain command wich run slow (like 'quota').

(MySolaris:/)# time su schweitz -c "hostname"
(MySolaris:/)# time su - schweitz -c "hostname"

su completes quickly; su - takes a long time.

Second Test:

All users have csh as their shell.

Rename /etc/.login (used by csh):
# mv /etc/.login /etc/.login.not

Then test "su - xxx":
# time su - schweitz -c "hostname"

After moving /etc/.login aside, su - is indeed fast:

(MySolaris:/)# time su - schweitz -c "hostname"
MySolaris

real 0m0.055s
user 0m0.016s
sys 0m0.027s
(MySolaris:/)# 

The real problem:

In the resulting truss output, it was noted that the quota command takes more than 20 seconds, so the issue lies with quota. The quota command also checks NFS-mounted file systems, and one of the NFS servers was not responding. After removing the stale NFS mount, su - was fast again.
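
The trace referred to above can be captured along these lines (a sketch; -d prefixes each call with a timestamp so the 20-second gap stands out, -f follows the forked login shell):

truss -f -d -o /tmp/su.truss su - schweitz -c "hostname"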

The 20-second delay is caused by checking quotas on NFS-mounted file systems; it occurs when some NFS server does not respond.


Thursday, January 17, 2013

Clearing a minor fmadm faulty alert



(MySolaris:/)# fmadm faulty
--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Dec 16 22:36:07 96c775c2-6764-6eae-ea5b-ea57f62cc2c0  FMD-8000-0W    Minor

Host        : MySolaris
Platform    : SUNW,Sun-SPARC Enterprise T5240        Chassis_id  :
Product_sn  :

Fault class : defect.sunos.fmd.nosub
FRU         : None
                  faulty

Description : The Solaris Fault Manager received an event from a component to
              which no automated diagnosis software is currently subscribed.
              Refer to http://sun.com/msg/FMD-8000-0W for more information.

Response    : Error reports from the component will be logged for examination
              by Sun.

Impact      : Automated diagnosis and response for these events will not occur.

Action      : Run pkgchk -n SUNWfmd to ensure that fault management software is
              installed properly.  Contact Sun for support.

(MySolaris:/)#



The FMADM fault currently logged on this system is caused by a logical inconsistency in the checkpointed data, which caused the system to disable the cpumem-diagnosis module. This, in turn, causes the FMD-8000-0W defect.sunos.fmd.nosub on the next transient memory error that cpumem-diagnosis should have handled. We can clear the FMD-8000-0W, but anything cpumem-diagnosis would normally handle will trigger another FMD-8000-0W defect.sunos.fmd.nosub. The resolution for this issue is below.

First roll the logs and restart the FMA daemon to keep the history.

logadm -p now -s 1b /var/fm/fmd/errlog
logadm -p now -s 1b /var/fm/fmd/fltlog

svcadm restart fmd


...wait two minutes...

Now scrub the checkpoint files


svcadm disable -st fmd
find /var/fm/fmd/ckpt -type f | xargs rm

svcadm enable fmd


...wait two minutes...

Now see if everything is clear

fmadm config - check that cpumem-diagnosis is active

fmadm faulty -a - shouldn't return anything

Also check whether any new errors were logged on fmd startup; if so, we'll need to investigate further...

fmdump -e

should return nothing
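
Alternatively, a single event can be cleared by the UUID shown under EVENT-ID in fmadm faulty (a sketch, using the UUID from this example):

fmadm repair 96c775c2-6764-6eae-ea5b-ea57f62cc2c0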