Thursday, June 30, 2011

Correct NFS mount entry for Oracle RMAN backup

Here is the correct /etc/vfstab entry for the NFS mount. This NFS mount is used by Oracle to store the backup files taken by RMAN. With the plain NFS mount options (the commented-out line below), RMAN reported the error shown further down.

spaninfo100:[/root]
# cat /etc/vfstab | grep -i /oracle/backups/share
#nfs://motdalsun114:/oracle/backups/share        -       /oracle/backups/share   nfs     -       yes     bg,hard

nfs://motdalsun114:/oracle/backups/share   -  /oracle/backups/share  nfs -  yes vers=4,proto=tcp,sec=sys,hard,intr,rsize=1048576,wsize=1048576,retrans=5,timeo=600
spaninfo100:[/root]





Error reported by RMAN:


ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
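If the share is already mounted with the wrong options, one workaround (a sketch based on the vfstab entry above; adjust the server, path and options to your environment) is to unmount it and remount it by hand so RMAN can use it immediately, without waiting for a reboot:

# umount /oracle/backups/share
# mount -F nfs -o vers=4,proto=tcp,sec=sys,hard,intr,rsize=1048576,wsize=1048576,retrans=5,timeo=600 motdalsun114:/oracle/backups/share /oracle/backups/share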

Friday, February 25, 2011

SUN Fire M4000/5000 XSCF commands

showboards -av  - show the status of all XSBs

showdcl -a      - show the domain component list

showfru         - show FRU settings (XSB mode, memory mirror mode)

showhardconf    - show the hardware configuration



Eg:
XSCF> showboards -av


XSB  R DID(LSB)  Assignment  Pwr  Conn Conf Test    Fault    COD
---- - --------  ----------- ---- ---- ---- ------- -------- ----
00-0   00(00)    Assigned    y    y    y    Passed  Normal   n
01-0   00(01)    Assigned    y    y    y    Passed  Normal   n

XSCF> showdcl -a

DID   LSB   XSB    Status
00                 Running
      00    00-0
      01    01-0



XSCF> showfru -a sb 0

Device  Location  XSB Mode  Memory Mirror Mode
sb      00        Uni       no
sb      01        Uni       no



XSCF> showhardconf

SPARC Enterprise M5000;

+ Serial:BEF10044A4; Operator_Panel_Switch:Locked;

+ Power_Supply_System:Single; SCF-ID:XSCF#0;

+ System_Power:On; System_Phase:Cabinet Power On;

Domain#0 Domain_Status:Running;
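A few more XSCF shell commands that are often used alongside the ones above (listed as a reminder; exact options can vary with the XCP firmware level):

XSCF> showdomainstatus -a       (status of all domains)
XSCF> showlogs error            (error log)
XSCF> poweron -d 0              (power on domain 0)
XSCF> poweroff -d 0             (power off domain 0)
XSCF> console -d 0              (connect to the console of domain 0)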

Monday, February 21, 2011

Solaris Live upgrade Commands and steps

lustatus  - To know the Boot Environment status

lucreate - To Create new Boot Environment

luactivate - To activate a Boot Environment


ludelete    - To delete a  Boot Environment


 
The steps below show how to create a new boot environment, upgrade Solaris to a new release, patch the Solaris kernel and update the NetBackup client. This task was performed on a server that has two zones in it. Skip the zone-related steps if you don't have zones on your server.


1. Check mirrors:


# metastat

2. Reboot the server and verify all is OK.

3. Halt the zones

zoneadm -z osstelsun121b halt

zoneadm -z osstelsun121a halt

4. Create new devices for solaris zones

BE OK (Sol10_1009)    BACKUP (Sol10_0606)    Mount point / zone

d101                  d201                   /zones

d102                  d202                   osstelsun121a

d103                  d203                   osstelsun121b


metainit d201 -p d100 1g

metainit d202 -p d100 3g

metainit d203 -p d100 3g

newfs /dev/md/rdsk/d201

newfs /dev/md/rdsk/d202

newfs /dev/md/rdsk/d203

mkdir /zones_bu

mount /dev/md/dsk/d201 /zones_bu

cd /zones

find . -mount | cpio -pmduv /zones_bu

mkdir /zones_bu/osstelsun121a /zones_bu/osstelsun121b

mount /dev/md/dsk/d202 /zones_bu/osstelsun121a

cd /zones/osstelsun121a

find . -mount | cpio -pmduv /zones_bu/osstelsun121a

mount /dev/md/dsk/d203 /zones_bu/osstelsun121b

cd /zones/osstelsun121b

find . -mount | cpio -pmduv /zones_bu/osstelsun121b
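As an optional sanity check (not part of the original procedure), compare the space used on each source file system and its copy before unmounting anything:

df -k /zones /zones_bu
df -k /zones/osstelsun121a /zones_bu/osstelsun121a
df -k /zones/osstelsun121b /zones_bu/osstelsun121b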

5. Check that the permissions on /zones_bu/XXX are 700 (zone root directories must be mode 700).

6. Unmount all the backup metadevices

umount /zones_bu/osstelsun121a

umount /zones_bu/osstelsun121b

umount /zones_bu


Then detach the second submirror of the root mirror, d20:

# metadetach d0 d20

7. Mount d20 on /mnt and modify its vfstab (change d0 to d20, d101 to d201, d102 to d202, d103 to d203).

If you want, disable autoboot so the server stops at the OK prompt on the next boot:

eeprom auto-boot?=false

Then edit /mnt/etc/system so that this copy boots from d20:

rootdev:/pseudo/md@0:0,20,blk
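For reference, after the edits the relevant entries in /mnt/etc/vfstab would look roughly like this (a sketch only; the fsck devices, pass numbers and mount options are examples and may differ on your system):

/dev/md/dsk/d20    /dev/md/rdsk/d20    /                      ufs   1   no    logging
/dev/md/dsk/d201   /dev/md/rdsk/d201   /zones                 ufs   2   yes   logging
/dev/md/dsk/d202   /dev/md/rdsk/d202   /zones/osstelsun121a   ufs   2   yes   logging
/dev/md/dsk/d203   /dev/md/rdsk/d203   /zones/osstelsun121b   ufs   2   yes   logging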

8. Boot from d20 and verify everything works fine (including the zones)

8.1 Configure the NIC on VLAN 5 and mount the NFS share

#ifconfig ipge3 192.132.7.220 netmask 255.255.255.0 up


# mount -F nfs 192.32.8.11:/export/install /mnt

8.2 Install or reinstall liveupgrade packages from /mnt


# pkgrm SUNWlucfg

# pkgrm SUNWluu

# pkgrm SUNWlur

# pkgadd SUNWlucfg

# pkgadd SUNWluu

# pkgadd SUNWlur
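Note that pkgadd pulls from /var/spool/pkg unless you point it at the package directory with -d. Assuming the upgrade media is mounted on /mnt (the exact path below is an assumption based on the path used in step 16), the install would typically look like:

# pkgadd -d /mnt/media/Solaris_10_1009/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu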


9. Install the patch cluster

# init s

# ./installcluster --s10cluster

# reboot

Note: Make a backup of sendmail.cf, bp.conf.

10. Create the BEs. (The current BE is Sol10_0606 on d20; we are creating the new BE on the first metadevices of root and the other file systems.)

lucreate -c Sol10_0606 -n Sol10_1009 -m /:/dev/md/dsk/d0:preserve,ufs -m /zones:/dev/md/dsk/d101:preserve,ufs -m /zones/osstelsun121a:/dev/md/dsk/d102:preserve,ufs -m /zones/osstelsun121b:/dev/md/dsk/d103:preserve,ufs

Note: Stop Nimbus monitoring to avoid alerts on the new file systems; during this step all the new file systems are mounted as /.altXXXX.

11. Verify status of BEs.

# lustatus

# lufslist Sol10_1009

# lufslist Sol10_0606

12. Boot with the new BE Sol10_1009 and check that everything works.

# luactivate Sol10_1009

# lustatus

# init 6

13. Boot the backup BE Sol10_0606 and check that everything works.

# luactivate Sol10_0606

# lustatus

# init 6


14. Configure nic on vlan5

ifconfig ipge3 192.132.7.220 netmask 255.255.255.0 up




15. Mount the NFS server media:

# ifconfig ipge3 192.132.7.220 netmask 255.255.255.0 up

# mount -F nfs 192.32.8.11:/export/install /mnt

16. Start the liveupgrade process.

# luupgrade -u -n Sol10_1009 -s /mnt/media/Solaris_10_1009

17. Activate the upgraded BE.

# luactivate Sol10_1009

# lustatus

# init 6

18. Check release and patch kernel.

# showrev

# more /etc/release

19. Install the patch cluster.

# init s

# ./installcluster --s10cluster

# reboot


20. Check release and patch kernel.

# showrev

# more /etc/release

21. Install oracle patches:

# ls -ltr

total 29664

-rw-r--r-- 1 roperator sysadmin 1784548 Jan 7 20:20 119963-21.zip

-rw-r--r-- 1 roperator sysadmin 332942 Jan 7 20:20 120753-08.zip

-rw-r--r-- 1 roperator sysadmin 11357181 Jan 7 20:20 124861-19.zip

-rw-r--r-- 1 roperator sysadmin 1665130 Jan 7 20:20 137321-01.zip

osstelsun121:[/patches/oracle]

22. Install the Live Upgrade patches and bundle if you want. This may help in the future if we need to do another upgrade.

total 15820

-rw-r--r-- 1 roperator sysadmin 6767812 Jan 7 20:19 119246-38.zip

-rw-r--r-- 1 roperator sysadmin 65044 Jan 7 20:19 121428-13.zip

-rw-r--r-- 1 roperator sysadmin 802446 Jan 7 20:19 121430-53.zip

-rw-r--r-- 1 roperator sysadmin 107546 Jan 7 20:19 138623-02.zip

-rw-r--r-- 1 roperator sysadmin 79108 Jan 7 20:19 140914-02.zip

-rw-r--r-- 1 roperator sysadmin 223232 Jan 7 20:34 123121-02.tar

osstelsun121:[/patches/liveupgrade]

23. Modify the file descriptor limits to 65536 in /etc/system:

* File Descriptors

set rlim_fd_cur=65536

set rlim_fd_max=65536
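After the next reboot you can verify the new default limit from a fresh ksh login; it should report 65536 (a quick check, not part of the original steps):

# ulimit -n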


24. Install Netbackup 6.5 client.

# more version

NetBackup-Solaris9 6.0MP4

osstelsun121:[/usr/openv/netbackup/bin]



# mv /usr/openv /usr/openv.ori

# mkdir /usr/openv

# cd /patches/veritas

# tar -xvf NB_65_CLIENTS3_20070723.tar

# cd NB_65_CLIENTS3_20070723

# ./install

# more /usr/openv/netbackup/bin/version

25. Install the 6.5.4 update

# tar -xvf NB_6.5.4_Patches_Solaris.tar

# ./NB_update.install    (select the updates marked with * below, e.g. NB_ORA_6.5.4)

NB_CLT_6.5.4 *

NB_DMP_6.5.4

NB_ENC_6.5.4

NB_INX_6.5.4

NB_JAV_6.5.4 *

NB_LOT_6.5.4

NB_NOM_6.5.4

NB_ORA_6.5.4 *

NB_SAP_6.5.4

# more /usr/openv/netbackup/bin/version

26. Install the Netbackup fix for 6.5.4.

# unzip NB_6.5.4_ET1862252_1_347226.zip

# chmod 755 eebinstaller.1862252.1.solaris10

# ./eebinstaller.1862252.1.solaris10


Note: after this step the DBA needs to run the script under the oracle account. Notify the DBA.

27. Copy the backup of bp.conf and exclude_list in the new paths

# cp /usr/openv.ori/netbackup/bp.conf /usr/openv/netbackup/bp.conf

# cp /usr/openv.ori/netbackup/exclude_list /usr/openv/netbackup/exclude_list


28. Create a script in /etc/init.d with a link in /etc/rc2.d for the NetBackup client parameters:

init.d:

-rwxr-xr-x 1 root root 292 Feb 4 18:07 set_netbackup_parm

rc2.d:

lrwxrwxrwx 1 root root 30 Feb 4 18:07 S99set-netbackup-parm -> /etc/init.d/set_netbackup_parm



# cat S99set-netbackup-parm

#!/bin/sh



/usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000

/usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500

/usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000

/usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
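To confirm that the script took effect after a reboot, read the values back with ndd (read form, no -set):

# ndd /dev/tcp tcp_smallest_anon_port
# ndd /dev/udp udp_largest_anon_port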



29. Configure Sendmail.



30. Reboot the server and verify all is ok.

31. Delete the old BE after two days and attach d20 back to d0 (ludelete Sol10_0606)

32. Remove the backup partitions created for the zones (d201, d202 & d203)

osstelsun121:[/root]

# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10_0606                 yes      no     no        yes    -
Sol10_1009                 yes      yes    yes       no     -

osstelsun121:[/root]

# cat /etc/release

Solaris 10 10/09 s10s_u8wos_08a SPARC

Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.

Use is subject to license terms.

Assembled 16 September 2009

osstelsun121:[/root]

# showrev

Hostname: osstelsun121

Hostid: 847e45e8

Release: 5.10

Kernel architecture: sun4v

Application architecture: sparc

Hardware provider: Sun_Microsystems

Domain:

Kernel version: SunOS 5.10 Generic_142909-17

osstelsun121:[/root]

#



osstelsun121:[/root]

# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10_0606                 yes      no     no        yes    -
Sol10_1009                 yes      yes    yes       no     -

osstelsun121:[/root]

# ludelete Sol10_0606

Determining the devices to be marked free.

Updating boot environment configuration database.

Updating boot environment description database on all BEs.

Updating all boot environment configuration databases.

Boot environment deleted.

osstelsun121:[/root]

# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10_1009                 yes      yes    yes       no     -

osstelsun121:[/root]

#

osstelsun121:[/root]

# lufslist Sol10_1009

boot environment name: Sol10_1009

This boot environment is currently active.

This boot environment will be active on next system boot.



Filesystem              fstype   device size    Mounted on             Mount Options
----------------------- -------- -------------- ---------------------- --------------
/dev/md/dsk/d1          swap     34360688640    -                      -
/dev/md/dsk/d0          ufs      18021777408    /                      logging
/dev/md/dsk/d50         ufs      12171018240    /var/audit             logging
/dev/md/dsk/d101        ufs      1073741824     /zones                 logging
/dev/md/dsk/d102        ufs      3221225472     /zones/osstelsun121a   logging
/dev/md/dsk/d103        ufs      3221225472     /zones/osstelsun121b   logging

osstelsun121:[/root]

#

Issues and workarounds

==================================

1.) Try this command if you are unable to create the new BE (the -C flag explicitly names the current BE's physical boot device):

lucreate -C /dev/dsk/c1t1d0s0 -c Sol10_0106 -n Sol10_1009 -m /:/dev/md/dsk/d0:preserve,ufs


2.) Follow the steps below if the server fails to boot after the live upgrade and shows the following error while rebooting:

This was the issue

init 6
Creating boot_archive for /.alt.Sol10_1009

mkdir: Failed to make directory "/.alt.Sol10_1009/var/tmp/create_ramdisk.20062.tmp"; Read-only file system

Could not create temporary directory //.alt.Sol10_1009/var/tmp/create_ramdisk.20062.tmp

bootadm: Command '/.alt.Sol10_1009//boot/solaris/bin/create_ramdisk -R /.alt.Sol10_1009' failed to create boot archive

cannot unmount '/var/audit': Device busy

cannot unmount '/usr/sap/MTT': Device busy

cannot unmount '/usr/sap/EPT': Device busy

cannot unmount '/sapmnt/MTT': Device busy

cannot unmount '/sapmnt/EPT': Device busy

svc.startd: The system is down.

syncing file systems... done

rebooting...


Workaround applied:

# mount /dev/md/dsk/d0 /mnt

osstelsun121:[/root]



# bootadm update-archive -v -R /mnt

stale /mnt//kernel/strmod/sparcv9/nattymod

cannot find: /mnt/etc/cluster/nodeid: No such file or directory

cannot find: /mnt/etc/mach: No such file or directory

Creating boot_archive for /mnt

updating /mnt/platform/sun4v/boot_archive

15+0 records in

15+0 records out


============================================

Thursday, February 17, 2011

Clone the partition table of the first disk to the second (prtvtoc / fmthard)

This command clones the partition table of the first hard disk to the second.


Quite useful for partitioning a large number of identical disks.


# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
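To double-check the result, print the VTOC of both disks and compare them (both commands are read-only):

# prtvtoc /dev/rdsk/c0t0d0s2
# prtvtoc /dev/rdsk/c0t1d0s2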

Boot from second mirror disk d20 - configuration

Here is the final method:


Disk layout

d0 = default boot disk (root partition)

d10 = sub mirror one

d20 = sub mirror two.

1. Confirm standard boot device d0, with sub-mirrors of d10 and d20

2. determine physical devices

metastat d10 --> /dev/dsk/c0t1d0s0

metastat d20 --> /dev/dsk/c0t0d0s0

3. determine physical address of device

ls -l /dev/rdsk/c0t1d0s0 --> /devices/sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w2100002037a86c65,0:a,raw

ls -l /dev/rdsk/c0t0d0s0 --> /devices/sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w210000203796fb42,0:a,raw

4. setup boot names at "ok" prompt

nvalias bootdisk /sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w2100002037a86c65,0:a

nvalias mirrdisk /sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w210000203796fb42,0:a
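Optionally, make the two aliases the default boot order so the system can fall back to the mirror automatically (an extra step, not in the original list; run at the OK prompt):

ok setenv boot-device bootdisk mirrdisk
ok printenv boot-device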

5. Break mirror and setup each boot disk

metadetach d0 d20

touch /d0.disk (create a dummy file to verify the disk name)

mount /dev/md/dsk/d20 /usr/dummy (mount into a dummy directory)

cd /usr/dummy

metaroot -n /dev/md/dsk/d20 (obtain what changes are required)

edit vfstab, set root disk to d20 (get all this info from the metaroot command)

edit system, modify to this

rootdev:/pseudo/md@0:0,20,blk

touch /usr/dummy/d20.disk

6. Confirm boot off each disk, verify dummy file

boot bootdisk

verify /d0.disk exists

boot mirrdisk

verify /d20.disk exists

At this point, you can apply patches to d0 or make config changes to d0 - with a good backup of the original system on d20.

7. If all goes well with the patches, then copy the changes from d10 to d20

boot bootdisk

confirm /d0.disk exists

metattach d0 d20

metastat d0 (to confirm sync complete)

confirm /d0.disk exists

reboot

8. If patch application/config changes FAIL, copy d20 to d10

boot mirrdisk

verify /d20.disk exists

mount /dev/md/dsk/d0 /usr/dummy (not really necessary)

verify /usr/dummy/d0.disk exists

umount /usr/dummy

metaclear d0 (remove old d0 .. d10 does not change)

metainit -f d0 -m d20 (force creation of d0 to d20 mirror)

metaroot -n /dev/md/dsk/d0 (print what to do without doing it)

metaroot /dev/md/dsk/d0 (actually run the command)

reboot (boots off default of d0)

metattach d0 d10

watch re-sync!

=========================

Another suggested solution:

I haven't had a chance to test it, but in theory, it looks like you can (unsupported) edit /etc/lvm/md.cf to tell it that d0 is made up of d20 with d10 as a mirror.

/etc/lvm/md.cf should initially have said something like:

d0 -m d10 d20 1

d10 1 1 c0t0d0s0

d20 1 1 c1t0d0s0

But after the metadetach it says something like:

d0 -m d10 1

d10 1 1 c0t0d0s0

d20 1 1 c1t0d0s0

Edit it to:

d0 -m d20 d10 1

d20 1 1 c1t0d0s0

d10 1 1 c0t0d0s0

Reboot.

There doesn't seem to be a real win in this process over the other though; the only place I could possibly see it as being useful is with Live Upgrade, where you can't run meta* commands on your inactive boot environment but may wish to only do one reboot to change BEs and fix your disk setup - even in that case, I suspect you really should be using /etc/lvm/md.tab, which IS supported.

=========================

Wednesday, February 16, 2011

Add interface specific route

In Solaris, you can add a route whose traffic should go out of a specific interface by adding -ifp [ifname] to the route command line. For instance, suppose a host has two interfaces (eri0 and hme0) on the same IP subnet (10.4.2.9/24 with gateway 10.4.2.254), and traffic for just a few hosts needs to go out the secondary hme0 interface. One reason this setup may be needed is for monitoring both some firewalls and the apps that those firewalls protect from a single network management station. On the firewalls you would add host-specific routes for the network management station's secondary interface via the firewall management network, allowing that interface to talk directly to the firewalls. The primary interface of the network management station gets routed normally, though, and so is able to talk to hosts protected by the same firewalls.


The following command makes this happen:

# route add -host 172.29.4.3 10.4.2.254 -ifp hme0

add host 172.29.4.3: gateway 10.4.2.254

# route add -host 172.29.4.4 10.4.2.254 -ifp hme0

add host 172.29.4.4: gateway 10.4.2.254

# route add -host 172.29.7.31 10.4.2.254 -ifp hme0

add host 172.29.7.31: gateway 10.4.2.254

# route add -host 172.29.7.32 10.4.2.254 -ifp hme0

add host 172.29.7.32: gateway 10.4.2.254

Now all traffic for the four hosts above will go out hme0 instead of eri0.
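You can confirm which interface a given destination will use with the route get subcommand (it prints the gateway and interface for that destination), and see the whole table with netstat; both are read-only:

# route get 172.29.4.3
# netstat -rn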

This trick is actually buried in a tiny section of the route(1M) man page that is worded such that my tiny brain didn’t get it. I’m not even sure what ifp stands for. The obvious candidate, the -iface or -interface flag, can’t be right because it requires the use of proxy ARP.

Monday, February 14, 2011

Netmask Conversions

If you have ever needed to know what a netmask looks like expressed in some other format, this table of equivalents should help. It contains common IPv4 netmasks expressed in four different formats.
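If the table isn't handy, a small script can do the conversion on the spot. The sketch below is my own illustration (the script name and output format are arbitrary, not from the original post); it turns a CIDR prefix length into the dotted-decimal and hexadecimal forms of the netmask:

#!/bin/ksh
# cidr2mask.ksh - print a CIDR prefix length as dotted-decimal and hex netmasks.
# Usage: ./cidr2mask.ksh 26   ->   /26 = 255.255.255.192 = 0xffffffc0
prefix=$1
bits=$prefix
mask=""
hex=""
i=1
while [ $i -le 4 ]; do
        # how many of this octet's 8 bits are set
        if [ $bits -ge 8 ]; then
                n=8
        elif [ $bits -gt 0 ]; then
                n=$bits
        else
                n=0
        fi
        # value of an octet with its top n bits set
        case $n in
                0) octet=0 ;;   1) octet=128 ;; 2) octet=192 ;;
                3) octet=224 ;; 4) octet=240 ;; 5) octet=248 ;;
                6) octet=252 ;; 7) octet=254 ;; 8) octet=255 ;;
        esac
        mask="${mask}${octet}"
        [ $i -lt 4 ] && mask="${mask}."
        hex="${hex}$(printf '%02x' $octet)"
        bits=$((bits - n))
        i=$((i + 1))
done
echo "/$prefix = $mask = 0x$hex"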


Friday, January 28, 2011

Add new printer queue on solaris server using lpadmin command





1.) Add printer name and port details on /etc/printers.conf file

Eg:

mtysap:\
:bsdaddr=mtysun118,mtysap,Solaris:

mtysapq:\
:bsdaddr=mtysun118,mtysapq,Solaris:

mtysapt:\
:bsdaddr=mtysun118,mtysapt,Solaris:


In the above, mtysap is the print queue name and mtysun118 is the print server name.

2.) Copy the existing interface configuration file to the new printer name

Eg:

cp /etc/lp/interfaces/mtysap /etc/lp/interfaces/mtysaptest

3.) Copy the existing configuration files from an existing printer to the new printer name

Eg:

mkdir /etc/lp/printers/mtysaptest

cp -p /etc/lp/printers/mtysap/* /etc/lp/printers/mtysaptest

ls -l /etc/lp/printers/mtysaptest

total 10

-rwxrwx--- 1 lp lp 1347 Aug 2 2006 alert.sh
-rw-rw---- 1 lp lp 4 Aug 2 2006 alert.vars
-rw-rw-r-- 1 lp lp 168 Aug 2 2006 configuration
-rw-rw-r-- 1 lp lp 16 Aug 2 2006 faultMessage
-rw-rw-r-- 1 lp lp 0 Aug 2 2006 users.deny

4.) Edit the new printer configuration file and replace the IP address with the IP of your network printer


Eg:
vi /etc/lp/printers/mtysaptest/configuration

# cat /etc/lp/printers/mtysaptest/configuration

Banner: on
Content types: any
Device: /dev/null
Interface: /usr/lib/lp/model/netstandard
Printer type: unknown
Modules:
Options: protocol=tcp,dest=134.200.172.26:9100

5.) Change the permissions and ownership of the files

chmod 775 /etc/lp/interfaces/mtysaptest

chown lp:lp /etc/lp/interfaces/mtysaptest

chown -R lp:lp /etc/lp/printers/mtysaptest

6.) Create and enable the printer

Eg:

lpadmin -p mtysaptest -v /dev/null -i /etc/lp/interfaces/mtysaptest

accept mtysaptest

enable mtysaptest

7.) Check the new printer status

Eg:

# lpstat -p mtysaptest

printer mtysaptest is idle. enabled since Fri 28 Jan 2011 01:47:21 PM GMT. available.
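To send a quick test job to the new queue and watch it in the queue (any small text file will do):

# lp -d mtysaptest /etc/motd
# lpstat -o mtysaptest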



Note: alternatively, you can run the script below after adding the printer to /etc/printers.conf.

#!/bin/ksh
# execute the script followed by the printer name and IP.
/usr/sbin/lpadmin -p $1 -v /dev/null -A write -i /usr/lib/lp/model/netstandard \
    -o dest=$2 -o protocol=bsd -o nobanner -I simple,postscript -u allow:all
/usr/bin/enable $1
/usr/sbin/accept $1
echo "Printer "$1" created!"
echo "Printer configuration..."
/usr/bin/lpstat -lp $1
exit
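For example, assuming the script is saved as addprinter.ksh (the name is just an example), the queue added to /etc/printers.conf above would be created with:

# chmod 755 addprinter.ksh
# ./addprinter.ksh mtysaptest 134.200.172.26

Since the script uses protocol=bsd, the second argument ($2) is the destination printer's hostname or IP address.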