Solaris Commands
Tuesday, March 12, 2013
Cleaning up the Operating System device tree after removing LUNs - Solaris 10
You must clean up the device tree after removing LUNs. The exact commands vary between Solaris versions; this procedure is for Solaris 10.
To clean up the device tree after you remove LUNs:
1. The removed devices show up as "drive not available" in the output of the format command:
413. c3t5006048ACAFE4A7Cd252 drive not available
/pci@1d,700000/SUNW,qlc@1,1/fp@0,0/ssd@w5006048acafe4a7c,fc
2. After the LUNs are unmapped using array management or the command line, Solaris also displays the devices as either unusable or failing:
bash-3.00# cfgadm -al -o show_SCSI_LUN | grep -i unusable
c2::5006048acafe4a73,256 disk connected configured unusable
c3::5006048acafe4a7c,255 disk connected configured unusable
bash-3.00# cfgadm -al -o show_SCSI_LUN | grep -i failing
c2::5006048acafe4a73,71 disk connected configured failing
c3::5006048acafe4a7c,252 disk connected configured failing
3. If the removed LUNs show up as failing, force a LIP on the HBA. This probes the targets again so that the devices show up as unusable; a device cannot be removed from the device tree until it is unusable.
luxadm -e forcelip /devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl
4. To remove the devices from the cfgadm database, run the following commands on the Solaris host:
cfgadm -c unconfigure -o unusable_SCSI_LUN c2::5006048acafe4a73
cfgadm -c unconfigure -o unusable_SCSI_LUN c3::5006048acafe4a7c
OR
cfgadm -o unusable_FCP_dev -c unconfigure c2::5006048acafe4a73
5. Repeat step 2 to verify that the LUNs have been removed.
6. Clean up the device tree. The following command removes the stale /dev/rdsk links to /devices:
# devfsadm -Cv
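If many LUNs were removed, a small loop saves typing. A minimal sh sketch, assuming the cfgadm output format shown above (awk and cut pick the controller::WWN part of each Ap_Id):
for ap in `cfgadm -al -o show_SCSI_LUN | grep -i unusable | awk '{print $1}' | cut -d, -f1 | sort -u`
do
    cfgadm -c unconfigure -o unusable_SCSI_LUN $ap
done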
--------------------------------------------------------------------------------------------------------------------
Monday, January 14, 2013
Check and change Solaris Locale
How to check and change the Solaris locale. The document below is from SunSolve.
How to view the current locale setting
The current locale setting can be viewed with:
# locale
LANG=en_US
LC_CTYPE="en_US"
LC_NUMERIC="en_US"
LC_TIME="en_US"
LC_COLLATE="en_US"
LC_MONETARY="en_US"
LC_MESSAGES="en_US"
LC_ALL=en_US
How to change the locale setting
Locales can be set or changed in 3 ways:
Via the CDE login locale
As a user-specific locale
As a system default locale
To change the current locale setting, first confirm that the desired locale is installed on the system with:
# locale -a
de
en_AU
en_CA
en_UK
C
If the desired locale is not in the list, you will need to install the appropriate packages for that locale.
How to change the locale via the CDE login locale
To change the locale for a new CDE session by selecting a different locale from the CDE login screen:
On the CDE login banner:
Choose Options -> Language
Under Language, choose the new locale
The CDE login screen will then restart, and you can log in under the selected locale.
Note: If a user has a different default locale set in their environment, then that locale setting will override the selected CDE login locale.
How to set a user-specific locale
Note: The syntax for setting the locale variables (LANG and LC_*) is shell dependent.
For sh, ksh:
# LANG=<locale>; export LANG
# LC_ALL=<locale>; export LC_ALL
Example:
# LANG=C; export LANG
# LC_ALL=C; export LC_ALL
For csh:
# setenv LANG <locale>
# setenv LC_ALL <locale>
Example:
# setenv LANG C
# setenv LC_ALL C
Note: To set a default locale for a user's environment, set the LANG or LC_* variables in a user's shell initialization file such as $HOME/.profile or $HOME/.cshrc
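For example, a defensive $HOME/.profile fragment (sh/ksh) that sets the locale only if it is actually installed — a sketch, with en_US assumed as the desired locale:
if locale -a | grep "^en_US$" > /dev/null 2>&1
then
    LANG=en_US; export LANG
    LC_ALL=en_US; export LC_ALL
fi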
How to change the locale by setting the system default locale
To set or change the system default locale, edit the /etc/default/init file and set the LANG and LC_* variables.
LANG=C
LC_ALL=C
Example from the /etc/default/init file:
# Lines of this file should be of the form VAR=value, where VAR is one of
# TZ, LANG, or any of the LC_* environment variables.
LANG=C
LC_ALL=C
Note: The system must be rebooted after making changes to the /etc/default/init file in order for the changes to take effect.
After setting or changing the locale, verify it is set correctly by running the locale command without any options:
# locale
LANG=C
LC_CTYPE="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_ALL=C
Thursday, June 30, 2011
Correct NFS mount entry for Oracle RMAN backup
Here is the correct /etc/vfstab entry for an NFS mount used by Oracle to store RMAN backup files. RMAN reported an error with the plain NFS mount options, which are shown commented out.
spaninfo100:[/root]
# cat /etc/vfstab | grep -i /oracle/backups/share
#motdalsun114:/oracle/backups/share - /oracle/backups/share nfs - yes bg,hard
motdalsun114:/oracle/backups/share - /oracle/backups/share nfs - yes vers=4,proto=tcp,sec=sys,hard,intr,rsize=1048576,wsize=1048576,retrans=5,timeo=600
spaninfo100:[/root]
Error reported by RMAN:
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
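To test these options without editing /etc/vfstab, the equivalent one-off mount command would look like this (a sketch using the host and path from the entry above):
# mount -F nfs -o vers=4,proto=tcp,sec=sys,hard,intr,rsize=1048576,wsize=1048576,retrans=5,timeo=600 motdalsun114:/oracle/backups/share /oracle/backups/share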
Friday, February 25, 2011
SUN Fire M4000/5000 XSCF commands
showboards -av
showdcl -a
showfru
showhardconf
E.g.:
XSCF> showboards -av
XSB R DID(LSB) Assignment Pwr Conn Conf Test Fault COD
---- - -------- ----------- ---- ---- ---- ------- -------- ----
00-0 00(00) Assigned y y y Passed Normal n
01-0 00(01) Assigned y y y Passed Normal n
XSCF> showdcl -a
DID   LSB   XSB    Status
00                 Running
      00    00-0
      01    01-0
XSCF> showfru -a sb 0
Device  Location  XSB Mode  Memory Mirror Mode
sb      00        Uni       no
sb      01        Uni       no
XSCF> showhardconf
SPARC Enterprise M5000;
+ Serial:BEF10044A4; Operator_Panel_Switch:Locked;
+ Power_Supply_System:Single; SCF-ID:XSCF#0;
+ System_Power:On; System_Phase:Cabinet Power On;
Domain#0 Domain_Status:Running;
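A few other XSCF shell commands that are handy on the M4000/M5000 (listed from memory, so verify against your XSCF firmware documentation):
XSCF> showlogs error          (list the error logs)
XSCF> showdomainstatus -a     (status of all domains)
XSCF> console -d 0            (connect to the console of domain 0; escape with #.)
XSCF> poweron -d 0            (power on domain 0)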
Monday, February 21, 2011
Solaris Live upgrade Commands and steps
lustatus - Show the status of boot environments
lucreate - Create a new boot environment
luactivate - Activate a boot environment
ludelete - Delete a boot environment
The steps below show how to create a new boot environment, upgrade Solaris to a new release, patch the Solaris kernel, and update the NetBackup client. This task was performed on a server hosting two zones; skip the zone-related steps if your server has no zones.
1. Check mirrors:
# metastat
2. Reboot the server and verify all is OK.
3. Halt the zones
zoneadm -z osstelsun121b halt
zoneadm -z osstelsun121a halt
4. Create new devices for the Solaris zones:
   BE (Sol10_1009)   BACKUP (Sol10_0606)   File system
   d101              d201                  /zones
   d102              d202                  osstelsun121a
   d103              d203                  osstelsun121b
metainit d201 -p d100 1g
metainit d202 -p d100 3g
metainit d203 -p d100 3g
newfs /dev/md/rdsk/d201
newfs /dev/md/rdsk/d202
newfs /dev/md/rdsk/d203
mkdir /zones_bu
mount /dev/md/dsk/d201 /zones_bu
cd /zones
find . -mount | cpio -pmduv /zones_bu
mkdir /zones_bu/osstelsun121a /zones_bu/osstelsun121b
mount /dev/md/dsk/d202 /zones_bu/osstelsun121a
cd /zones/osstelsun121a
find . -mount | cpio -pmduv /zones_bu/osstelsun121a
mount /dev/md/dsk/d203 /zones_bu/osstelsun121b
cd /zones/osstelsun121b
find . -mount | cpio -pmduv /zones_bu/osstelsun121b
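A quick sanity check that the copies completed — compare used space on each source and backup file system (a sketch):
df -k /zones /zones_bu /zones/osstelsun121a /zones_bu/osstelsun121a /zones/osstelsun121b /zones_bu/osstelsun121b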
5. Check that the file system permissions on /zones_bu/XXX are 700.
6. Unmount all the backup metadevices and detach submirror d20:
umount /zones_bu/osstelsun121a
umount /zones_bu/osstelsun121b
umount /zones_bu
# metadetach d0 d20
7. Mount d20 on /mnt and modify the vfstab under /mnt (change d0 to d20, d101 to d201, d102 to d202, d103 to d203).
Optionally disable autoboot, and change the boot device to d20:
eeprom auto-boot?=false
Edit /mnt/etc/system so the system boots from d20:
rootdev:/pseudo/md@0:0,20,blk
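For illustration, the root line in the vfstab under /mnt would change roughly as follows (the mount options here are an assumption; keep whatever the file already has):
before: /dev/md/dsk/d0   /dev/md/rdsk/d0   /   ufs   1   no   logging
after:  /dev/md/dsk/d20  /dev/md/rdsk/d20  /   ufs   1   no   logging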
8. Boot from d20 and verify everything works (including the zones).
8.1. Configure the NIC on vlan5 and mount the NFS share:
# ifconfig ipge3 192.132.7.220 netmask 255.255.255.0 up
# mount -F nfs 192.32.8.11:/export/install /mnt
8.2. Install or reinstall the Live Upgrade packages from /mnt:
# pkgrm SUNWlucfg
# pkgrm SUNWluu
# pkgrm SUNWlur
# pkgadd SUNWlucfg
# pkgadd SUNWluu
# pkgadd SUNWlur
9. Install the patch cluster
# init s
# ./installcluster --s10cluster
# reboot
Note: Make a backup of sendmail.cf, bp.conf.
10. Create the BEs. (The current BE is d20-Sol10_0106; we are creating the new BE on the first metadevice of root and the other file systems.)
lucreate -c Sol10_0606 -n Sol10_1009 -m /:/dev/md/dsk/d0:preserve,ufs -m /zones:/dev/md/dsk/d101:preserve,ufs -m /zones/osstelsun121a:/dev/md/dsk/d102:preserve,ufs -m /zones/osstelsun121b:/dev/md/dsk/d103:preserve,ufs
Note: Stop Nimbus monitoring before this step; lucreate temporarily mounts all the new file systems under /.altXXXX, which would otherwise raise alerts.
11. Verify status of BEs.
# lustatus
# lufslist Sol10_1009
# lufslist Sol10_0606
12. Boot with new BE Sol10_1009 and check all work ok.
# luactivate Sol10_1009
# lustatus
# init 6
13. Boot backup BE Sol10_0606 and check all work ok.
# luactivate Sol10_0606
# lustatus
# init 6
14. Configure nic on vlan5
ifconfig ipge3 192.132.7.220 netmask 255.255.255.0 up
15. Mount the NFS server media:
# ifconfig ipge3 192.132.7.220 netmask 255.255.255.0 up
# mount -F nfs 192.32.8.11:/export/install /mnt
16. Start the liveupgrade process.
# luupgrade -u -n Sol10_1009 -s /mnt/media/Solaris_10_1009
17. Activate the upgraded BE.
# luactivate Sol10_1009
# lustatus
# init 6
18. Check the OS release and kernel patch level.
# showrev
# more /etc/release
19. Install the patch cluster.
# init s
# ./installcluster --s10cluster
# reboot
20. Check the OS release and kernel patch level.
# showrev
# more /etc/release
21. Install the Oracle patches:
# ls -ltr
total 29664
-rw-r--r-- 1 roperator sysadmin 1784548 Jan 7 20:20 119963-21.zip
-rw-r--r-- 1 roperator sysadmin 332942 Jan 7 20:20 120753-08.zip
-rw-r--r-- 1 roperator sysadmin 11357181 Jan 7 20:20 124861-19.zip
-rw-r--r-- 1 roperator sysadmin 1665130 Jan 7 20:20 137321-01.zip
osstelsun121:[/patches/oracle]
22. Optionally install the Live Upgrade patches and bundle. This may help in the future if another upgrade is needed.
total 15820
-rw-r--r-- 1 roperator sysadmin 6767812 Jan 7 20:19 119246-38.zip
-rw-r--r-- 1 roperator sysadmin 65044 Jan 7 20:19 121428-13.zip
-rw-r--r-- 1 roperator sysadmin 802446 Jan 7 20:19 121430-53.zip
-rw-r--r-- 1 roperator sysadmin 107546 Jan 7 20:19 138623-02.zip
-rw-r--r-- 1 roperator sysadmin 79108 Jan 7 20:19 140914-02.zip
-rw-r--r-- 1 roperator sysadmin 223232 Jan 7 20:34 123121-02.tar
osstelsun121:[/patches/liveupgrade]
23. Modify the file descriptor limits to 65536 in /etc/system:
* File Descriptors
set rlim_fd_cur=65536
set rlim_fd_max=65536
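After the next reboot, a quick check that the new limit is in effect (sh/ksh):
# ulimit -n
65536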
24. Install Netbackup 6.5 client.
# more version
NetBackup-Solaris9 6.0MP4
osstelsun121:[/usr/openv/netbackup/bin]
# mv /usr/openv /usr/openv.ori
# mkdir openv
# cd /patches/veritas
# tar -xvf NB_65_CLIENTS3_20070723.tar
# cd NB_65_CLIENTS3_20070723
# ./install
# more /usr/openv/netbackup/bin/version
25. Install the 6.5.4 update
# tar -xvf NB_6.5.4_Patches_Solaris.tar
# ./NB_update.install
Select the updates marked with * below:
NB_CLT_6.5.4 *
NB_DMP_6.5.4
NB_ENC_6.5.4
NB_INX_6.5.4
NB_JAV_6.5.4 *
NB_LOT_6.5.4
NB_NOM_6.5.4
NB_ORA_6.5.4 *
NB_SAP_6.5.4
# more /usr/openv/netbackup/bin/version
26. Install the Netbackup fix for 6.5.4.
# unzip NB_6.5.4_ET1862252_1_347226.zip
# chmod 755 eebinstaller.1862252.1.solaris10
# ./eebinstaller.1862252.1.solaris10
Note: After this step the DBA needs to run the script as the oracle account. Notify them.
27. Copy the backup of bp.conf and exclude_list in the new paths
# cp /usr/openv.ori/netbackup/bp.conf /usr/openv/netbackup/bp.conf
# cp /usr/openv.ori/netbackup/exclude_list /usr/openv/netbackup/exclude_list
28. Create a script on init.d and rc2.d for Netbackup client parameters:
init.d:
-rwxr-xr-x 1 root root 292 Feb 4 18:07 set_netbackup_parm
rc2.d:
lrwxrwxrwx 1 root root 30 Feb 4 18:07 S99set-netbackup-parm -> /etc/init.d/set_netbackup_parm
# cat S99set-netbackup-parm
#!/bin/sh
/usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500
/usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
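To confirm the parameters took effect, ndd can read them back (a quick check):
# /usr/sbin/ndd /dev/tcp tcp_smallest_anon_port
9000
# /usr/sbin/ndd /dev/tcp tcp_largest_anon_port
65500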
29. Configure Sendmail.
30. Reboot the server and verify all is ok.
31. Delete the old BE after two days and reattach d20 to d0 (ludelete Sol10_0606).
32. Remove the backup partitions created for the zones (d201, d202 & d203).
osstelsun121:[/etc/mail]
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10_0606 yes no no yes -
Sol10_1009 yes yes yes no -
osstelsun121:[/etc/mail]
# cd
osstelsun121:[/root]
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10_0606 yes no no yes -
Sol10_1009 yes yes yes no -
osstelsun121:[/root]
# cat /etc/release
Solaris 10 10/09 s10s_u8wos_08a SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 16 September 2009
osstelsun121:[/root]
# showrev
Hostname: osstelsun121
Hostid: 847e45e8
Release: 5.10
Kernel architecture: sun4v
Application architecture: sparc
Hardware provider: Sun_Microsystems
Domain:
Kernel version: SunOS 5.10 Generic_142909-17
osstelsun121:[/root]
#
osstelsun121:[/root]
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10_0606 yes no no yes -
Sol10_1009 yes yes yes no -
osstelsun121:[/root]
# ludelete Sol10_0606
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment deleted.
osstelsun121:[/root]
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10_1009 yes yes yes no -
osstelsun121:[/root]
#
osstelsun121:[/root]
# lufslist Sol10_1009
boot environment name: Sol10_1009
This boot environment is currently active.
This boot environment will be active on next system boot.
Filesystem fstype device size Mounted on Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/md/dsk/d1 swap 34360688640 - -
/dev/md/dsk/d0 ufs 18021777408 / logging
/dev/md/dsk/d50 ufs 12171018240 /var/audit logging
/dev/md/dsk/d101 ufs 1073741824 /zones logging
/dev/md/dsk/d102 ufs 3221225472 /zones/osstelsun121a logging
/dev/md/dsk/d103 ufs 3221225472 /zones/osstelsun121b logging
osstelsun121:[/root]
#
Issues and work around
==================================
1.) Try this command if you are unable to create the new BE:
lucreate -C /dev/dsk/c1t1d0s0 -c Sol10_0106 -n Sol10_1009 -m /:/dev/md/dsk/d0:preserve,ufs
2.) Follow the steps below if the server fails to boot after Live Upgrade and shows the following error on reboot:
init 6
Creating boot_archive for /.alt.Sol10_1009
mkdir: Failed to make directory "/.alt.Sol10_1009/var/tmp/create_ramdisk.20062.tmp"; Read-only file system
Could not create temporary directory //.alt.Sol10_1009/var/tmp/create_ramdisk.20062.tmp
bootadm: Command '/.alt.Sol10_1009//boot/solaris/bin/create_ramdisk -R /.alt.Sol10_1009' failed to create boot archive
cannot unmount '/var/audit': Device busy
cannot unmount '/usr/sap/MTT': Device busy
cannot unmount '/usr/sap/EPT': Device busy
cannot unmount '/sapmnt/MTT': Device busy
cannot unmount '/sapmnt/EPT': Device busy
svc.startd: The system is down.
syncing file systems... done
rebooting...
Workaround:
# mount /dev/md/dsk/d0 /mnt
osstelsun121:[/root]
# bootadm update-archive -v -R /mnt
stale /mnt//kernel/strmod/sparcv9/nattymod
cannot find: /mnt/etc/cluster/nodeid: No such file or directory
cannot find: /mnt/etc/mach: No such file or directory
Creating boot_archive for /mnt
updating /mnt/platform/sun4v/boot_archive
15+0 records in
15+0 records out
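After the archive is rebuilt, unmount and reboot to retry the activation — the remaining steps would look like this (assumed; not part of the original log):
# umount /mnt
# init 6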
============================================
Thursday, February 17, 2011
Clone the partition table of the first HD to the second: prtvtoc / fmthard
This command clones the partition table of the first hard disk to the second.
Quite useful for partitioning a large number of identical disks.
#prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
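To label a whole batch of identical disks, wrap it in a loop — a sketch with hypothetical target disk names:
for d in c0t1d0 c0t2d0 c0t3d0
do
    prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/${d}s2
done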
Boot from second mirror disk d20 - configuration
Here is the final method:
Disk layout
d0 = default boot disk (root partition)
d10 = sub mirror one
d20 = sub mirror two.
1. Confirm standard boot device d0, with sub-mirrors of d10 and d20
2. determine physical devices
metastat d10 --> /dev/dsk/c0t1d0s0
metastat d20 --> /dev/dsk/c0t0d0s0
3. determine physical address of device
ls -l /dev/rdsk/c0t1d0s0 -->
/devices/sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w2100002037a86c65,0:a,raw
ls -l /dev/rdsk/c0t0d0s0 -->
/devices/sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w210000203796fb42,0:a,raw
4. setup boot names at "ok" prompt
nvalias bootdisk /sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w2100002037a86c65,0:a
nvalias mirrdisk /sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w210000203796fb42,0:a
5. Break mirror and setup each boot disk
metadetach d0 d20
touch /d0.disk                      (create dummy file to verify disk name)
mount /dev/md/dsk/d20 /usr/dummy    (mount into a dummy directory)
cd /usr/dummy
metaroot -n /dev/md/dsk/d20         (obtain what changes are required)
edit vfstab, set root disk to d20   (get all this info from the metaroot command)
edit system, modify to this:
rootdev:/pseudo/md@0:0,20,blk
touch /usr/dummy/d20.disk
6. Confirm boot off each disk, verify dummy file
boot bootdisk
verify /d0.disk exists
boot mirrdisk
verify /d20.disk exists
At this point, you can apply patches or config changes to d0, with a good backup of the original system on d20.
7. If all goes well with the patches, then copy the changes from d10 to d20:
boot bootdisk
confirm /d0.disk exists
metattach d0 d20
metastat d0 (to confirm sync complete)
confirm /d0.disk exists
reboot
8. If patch application/config changes FAIL, copy d20 to d10
boot mirrdisk
verify /d20.disk exists
mount /dev/md/dsk/d0 /usr/dummy (not really necessary)
verify /usr/dummy/d0.disk exists
umount /usr/dummy
metaclear d0                    (remove old d0; d10 does not change)
metainit -f d0 -m d20           (force creation of d0 as a mirror of d20)
metaroot -n /dev/md/dsk/d0      (print what to do without doing it)
metaroot /dev/md/dsk/d0         (actually run the command)
reboot (boots off default of d0)
metattach d0 d10
watch re-sync!
=========================
Another solution suggested was
I haven't had a chance to test it, but in theory, it looks like you can
(unsupported) edit /etc/lvm/md.cf to tell it that d0 is made up of d20 with
d10 as a mirror.
/etc/lvm/md.cf should initially have said something like:
d0 -m d10 d20 1
d10 1 1 c0t0d0s0
d20 1 1 c1t0d0s0
But after the metadetach it says something like:
d0 -m d10 1
d10 1 1 c0t0d0s0
d20 1 1 c1t0d0s0
Edit it to:
d0 -m d20 d10 1
d20 1 1 c1t0d0s0
d10 1 1 c0t0d0s0
Reboot.
There doesn't seem to be a real win in this process over the other though;
the only place I could possibly see it as being useful is with live upgrade
where you can't run meta* commands on your inactive boot environment but may
wish to only do one reboot to change BE's and fix your disk setup - even in
that case, I suspect you really should be using /etc/lvm/md.tab, which IS
supported.
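For reference, the equivalent /etc/lvm/md.tab entries for the swapped layout would look like this (same syntax as the md.cf lines above; metainit -a would then rebuild the metadevices from it):
d0 -m d20 d10 1
d20 1 1 c1t0d0s0
d10 1 1 c0t0d0s0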
=========================