Sunday, February 14, 2010

Uninstall the Veritas Cluster Server (VCS)

Below is a general outline for uninstalling Veritas Cluster Server without removing Veritas Volume Manager or the Veritas File System. The steps below need to be performed on both cluster nodes; perform them on one node at a time.

Make a backup of the VCS configuration file, to reference for file systems, shares and apps if needed

o cp /etc/VRTSvcs/conf/config/main.cf $HOME/main.cf

Stop VCS, but keep all applications running

o hastop -all -force

Disable VCS startup

o mv /etc/rc3.d/S99vcs /etc/rc3.d/s99vcs
o mv /etc/rc2.d/S92gab /etc/rc2.d/s92gab
o rm /etc/llttab
o rm /etc/gabtab
o rm /etc/llthosts

Add all file systems to /etc/vfstab
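o As a rough illustration only (the disk group, volume, and mount point names are placeholders, not taken from any main.cf), a VxFS entry in /etc/vfstab would look something like:
 /dev/vx/dsk/appdg/appvol /dev/vx/rdsk/appdg/appvol /app vxfs 2 yes -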

Add back the Solaris services that were removed per the VCS requirements

o /network/nfs/status
o /network/nfs/server
o /network/nfs/mapid
o Here are the commands used to remove those services:
 svccfg delete -f svc:/network/nfs/status:default
 svccfg delete -f svc:/network/nfs/server:default
 svccfg delete -f svc:/network/nfs/mapid:default
o The service manifest files to import should be located in /var/svc/manifest/network/nfs
o The steps below will import the manifests. (You will get an error that the import is only partial because the default milestone is already online when importing the nfs/server manifest, but the service will come online after the server reboots.)
 svccfg
 svc:> validate /var/svc/manifest/network/nfs/server.xml
 svc:> import /var/svc/manifest/network/nfs/server.xml

 svc:> validate /var/svc/manifest/network/nfs/status.xml
 svc:> import /var/svc/manifest/network/nfs/status.xml

 svc:> validate /var/svc/manifest/network/nfs/mapid.xml
 svc:> import /var/svc/manifest/network/nfs/mapid.xml
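o As a quick check after the imports (using the FMRIs listed above; nfs/server may remain offline until the reboot), for example:
 svcs nfs/status nfs/mapid nfs/server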

Configure NFS shares in /etc/dfs/dfstab
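o For example (the share path and client names are placeholders; the actual shares should come from the main.cf backup), a line in /etc/dfs/dfstab would look like:
 share -F nfs -o rw=client1:client2 -d "app data" /export/app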

Enable the nfs/server service (and all dependency services)
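o For example, svcadm can enable the service along with its dependencies:
 svcadm enable -r svc:/network/nfs/server:default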

Ensure NFS shares are shared

o May need to log into every server that mounts these shares to verify they are still mounted, and remount if they are not.
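o As a quick check, running "share" on the server lists what is currently shared, and on each client a command such as "mount -p | grep nfs" shows whether the NFS mounts are still present.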

Configure IPMP
o Generally, the Primary Interface is “ce0” and the secondary interface is “ce4”, but verify within the main.cf file, looking for the MultiNICA definition stanza.
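o As a minimal sketch only (interface names, the hostname "appserver", and the group name are placeholders; the real addresses come from main.cf and the network team), link-based active/standby IPMP on Solaris 10 can be set up with /etc/hostname.* files similar to:
 /etc/hostname.ce0: appserver netmask + broadcast + group prod_ipmp up
 /etc/hostname.ce4: group prod_ipmp standby up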

Ensure all apps that were managed by VCS (Oracle, SAP, etc.) have the appropriate startup/shutdown scripts in the system startup/shutdown sequence. The app teams will need to write the startup/shutdown scripts.
o Ensure you link the shutdown scripts into every rc directory except rc3.d (app teams typically state to only put into one rc directory, but that is incorrect for Solaris)
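o A sketch with a placeholder script name ("app"; the real script names come from the app teams):
 ln -s /etc/init.d/app /etc/rc3.d/S99app
 ln -s /etc/init.d/app /etc/rc0.d/K01app
 ln -s /etc/init.d/app /etc/rc1.d/K01app
 ln -s /etc/init.d/app /etc/rc2.d/K01app
 ln -s /etc/init.d/app /etc/rcS.d/K01app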

Reboot the servers and verify everything comes up appropriately

o IPMP
o File systems
o NFS Shares
o Apps

May need to log into every server that mounts these shares to verify they are still mounted, and remount if they are not.

Once verified, uninstall VCS
o Do not remove VxVM or VxFS, as those are still in use; remove just the VCS components. This should be the complete list, but verify:
 VRTSsap
 VRTSvcs
 VRTSvcsag
 VRTSvcsdc
 VRTSvcsmg
 VRTSvcsmn
 VRTSvcsor
 VRTSvcsvr
 VRTSvcsw
 VRTScscw
 VRTScsocw
 VRTSagtfw
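o As a sketch, the installed VCS packages can be confirmed and then removed with pkginfo and pkgrm (package names are those listed above; the removal order may need adjusting for package dependencies):
 pkginfo | grep -i VRTS
 pkgrm VRTSsap VRTSvcsw VRTScscw VRTScsocw VRTSvcsvr VRTSvcsor VRTSvcsmn VRTSvcsmg VRTSvcsdc VRTSvcsag VRTSagtfw VRTSvcs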

Reboot the servers again with VCS uninstalled, ensuring proper startup

May need to log into every server that mounts these shares to verify they are still mounted, and remount if they are not.



Some of the work can be done before the actual scheduled maintenance window:

Backup copy of the VCS configuration file

Add all file systems to /etc/vfstab, though commented out in the event the server is rebooted before the maintenance window

Configure NFS shares in /etc/dfs/dfstab (though cannot add the services back yet)

Obtain the needed IP Addresses to properly configure IPMP

Pre-create the new /etc/hostname.* files, though naming them differently in case the server is rebooted before the maintenance window

Put the app startup/shutdown scripts in place, naming their links so that they do not run on server startup/shutdown in the event the server is rebooted before the maintenance window
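o A hedged example with a placeholder script name ("app"): the links can be staged with lowercase prefixes, which the rc scripts ignore, and then renamed to S99app/K01app during the window:
 ln -s /etc/init.d/app /etc/rc3.d/s99app
 ln -s /etc/init.d/app /etc/rc0.d/k01app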

Thursday, January 28, 2010

Solaris 10 - Increasing Number of Processes Per User

The example below shows how to increase the number of processes per user (per UID) on a Solaris 10 system. The hardware used here is an UltraSPARC T2 based system running Solaris 10 with 32 GB of RAM.
We needed to increase the number of processes per user beyond the current setting of 30000.

bash-3.00# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 260000
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29995
virtual memory (kbytes, -v) unlimited

Trying to increase the "max user processes" would fail with the following error:

bash-3.00# ulimit -u 50000
bash: ulimit: max user processes: cannot modify limit: Invalid argument
bash-3.00#

After going through the Solaris 10 Tunable Parameters Reference Manual section on process sizing, we learned that there are five parameters related to process sizing:

maxusers - Controls the maximum number of processes on the system, the number of quota structures held in the system, and the size of the directory name look-up cache (DNLC)
reserved_procs - Specifies the number of system process slots to be reserved in the process table for processes with a UID of root
pidmax - Specifies the value of the largest possible process ID. Valid for Solaris 8 and later releases.
max_nprocs - Specifies the maximum number of processes that can be created on a system. Includes system processes and user processes. Any value specified in /etc/system is used in the computation of maxuprc.
maxuprc - Specifies the maximum number of processes that can be created on a system by any one user

Looked at the current values for these parameter:

bash-3.00# echo reserved_procs/D | mdb -k
reserved_procs:
reserved_procs: 5

bash-3.00# echo pidmax/D| mdb -k
pidmax:
pidmax: 30000

bash-3.00# echo maxusers/D | mdb -k
maxusers:
maxusers: 2048
bash-3.00#

bash-3.00# echo max_nprocs/D | mdb -k
max_nprocs:
max_nprocs: 30000
bash-3.00#

bash-3.00# echo maxuprc/D| mdb -k
maxuprc:
maxuprc: 29995

So, in order to raise the maximum per-user processes in this scenario, we were required to change pidmax (the upper cap), maxusers, max_nprocs, and maxuprc. (The effective per-user limit is capped at max_nprocs - reserved_procs, which is why ulimit reports 29995 above and 49995 after the change even though maxuprc is set to 50000.)
Sample entries for /etc/system (a reboot is required for them to take effect):


set pidmax=60000
set maxusers = 4096
set maxuprc = 50000
set max_nprocs = 50000

After making the above entries, we were able to increase the max user processes to 50000.

bash-3.00# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 260000
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 49995
virtual memory (kbytes, -v) unlimited
bash-3.00#

bash-3.00# echo reserved_procs/D |mdb -k
reserved_procs:
reserved_procs: 5
bash-3.00# echo pidmax/D |mdb -k
pidmax:
pidmax: 60000
bash-3.00# echo max_nprocs/D |mdb -k
max_nprocs:
max_nprocs: 50000
bash-3.00# echo maxuprc/D | mdb -k
maxuprc:
maxuprc: 50000
bash-3.00#

Note: If you are operating within the 30000 limit (the default pidmax setting), the approach in the blog entry referred to above seems to work fine. If you are looking to increase the number of processes beyond 30000, you need to adjust the other dependent parameters described in this entry.

Friday, July 31, 2009

Enable veritas DMP path for single LUN

#/usr/sbin/vxdmpadm enable path=c#t#d#s2

Example:

The command below enables the path for LUN "c2t50060482D52DAF4Cd145s2":

#/usr/sbin/vxdmpadm enable path=c2t50060482D52DAF4Cd145s2
Command terminated successfully
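To verify the path state afterwards, vxdmpadm can list the subpaths for the controller (c2 here, following the example above):

#/usr/sbin/vxdmpadm getsubpaths ctlr=c2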

Monday, April 13, 2009

Disable SSH login for individual users

1.) Edit the /etc/ssh/sshd_config file

Add the user names that you want to disable after the "DenyUsers" directive.

Example :


DenyUsers ftpuser fdpuser2

2.) Restart the ssh service

For Solaris 10
--------------
svcadm refresh svc:/network/ssh:default

For Solaris 7,8 & 9
------------------
/etc/init.d/sshd restart


Friday, February 20, 2009

Adding a disk to VxVM

Use the vxdisksetup program to initialize the target disk:

# /usr/lib/vxvm/bin/vxdisksetup -i c#t#d# privoffset=0 privlen=XXXX publen=YYYY

where XXXX is the size of the source disk's private region, and YYYY is the size of its public region.

Alternatively, to force initialization of a disk with the default layout, use the -if options:

# /etc/vx/bin/vxdisksetup -if (New Disk)

Example:

# /etc/vx/bin/vxdisksetup -if EMC0_42

# /etc/vx/bin/vxdisksetup -if c4t2d0

Note: If your system is configured to use enclosure-based naming instead of OS-based naming, replace the c#t#d# name with the enclosure-based name for the disk.

Placing disks under VxVM control

When you add a disk to a system that is running VxVM, you need to put the disk under VxVM control so that VxVM can control the space allocation on the disk.
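For example, once a disk has been initialized it can be added to a disk group with vxdg (the disk group name "datadg" and disk media name "datadg02" below are placeholders):

# vxdg -g datadg adddisk datadg02=c4t2d0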


You can change the naming scheme from the command line. The following commands select enclosure-based and operating system-based naming, respectively:

# vxddladm set namingscheme=ebn [persistence={yes|no}]

# vxddladm set namingscheme=osn [persistence={yes|no}]

The change is immediate whichever method you use. The optional persistence argument allows you to select whether the names of disk devices that are displayed by VxVM remain unchanged after disk hardware has been reconfigured and the system rebooted. By default, both enclosure-based naming and operating system-based naming are persistent.

To find the relationship between a disk and its paths, run one of the following
commands:

# vxdmpadm getsubpaths dmpnodename=disk_access_name

# vxdisk list disk_access_name

To update the disk names so that they correspond to the new path names
1 Remove the file that contains the existing persistent device name database:

# rm /etc/vx/disk.info

2 Restart the VxVM configuration daemon:

# vxconfigd -k

This regenerates the persistent name database.

Discovering the association between enclosure-based disk names and OS-based disk names

If you enable enclosure-based naming, and use the vxprint command to display the structure of a volume, it shows enclosure-based disk device names (disk access names) rather than c#t#d#s# names. To discover the c#t#d#s# names that are associated with a given enclosure-based disk name, use either of the following commands:

# vxdisk -e list enclosure-based_name

# vxdmpadm getsubpaths dmpnodename=enclosure-based_name

For example, to find the physical device that is associated with disk ENC0_21,
the appropriate commands would be:

# vxdisk -e list ENC0_21

# vxdmpadm getsubpaths dmpnodename=ENC0_21

To obtain the full pathname for the block and character disk device from these commands, append the displayed device name to /dev/vx/dmp or /dev/vx/rdmp.
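For instance, if the displayed device name were ENC0_21, the full block and character device paths would be /dev/vx/dmp/ENC0_21 and /dev/vx/rdmp/ENC0_21 respectively.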

Simple or nopriv disks in the boot disk group
If the boot disk group (usually aliased as bootdg) is comprised of only simple and/or nopriv disks, the vxconfigd daemon goes into the disabled state after the naming scheme change.
To remove the error state for simple or nopriv disks in the boot disk group
1 Use vxdiskadm to change back to c#t#d#s# naming.
2 Enter the following command to restart the VxVM configuration daemon:

# vxconfigd -kr reset

3 If you want to use enclosure-based naming, use vxdiskadm to add a sliced disk to the bootdg disk group, change back to the enclosure-based naming scheme, and then run the following command:

# /etc/vx/bin/vxdarestore

Discovering and configuring newly added disk devices in Veritas

You can also use the vxdisk scandisks command to scan devices in the operating system device tree and to initiate dynamic reconfiguration of multipathed disks. If you want VxVM to scan only for new devices that have been added to the system, and for devices that have been enabled or disabled, specify the -f option to either of the commands, as shown here:

# vxdctl -f enable

# vxdisk -f scandisks

However, a complete scan is initiated if the system configuration has been modified by changes to:
■ Installed array support libraries.
■ The devices that are listed as being excluded from use by VxVM.
■ DISKS (JBOD), SCSI3, or foreign device definitions.

The next example discovers fabric devices (that is, devices with the characteristic DDI_NT_FABRIC property set on them):

# vxdisk scandisks fabric

The following command scans for the devices c1t1d0 and c2t2d0:

# vxdisk scandisks device=c1t1d0,c2t2d0
Alternatively, you can specify a ! prefix character to indicate that you want to scan for all devices except those that are listed:

# vxdisk scandisks !device=c1t1d0,c2t2d0

You can also scan for devices that are connected (or not connected) to a list of logical or physical controllers. For example, this command discovers and configures all devices except those that are connected to the specified logical controllers:

# vxdisk scandisks !ctlr=c1,c2

The next command discovers devices that are connected to the specified physical controller:

# vxdisk scandisks pctlr=/pci@1f,4000/scsi@3/