Monday, 12 November 2012

TSM Backup Client Installation on X86/Sparc Solaris Servers



Description: The Tivoli Storage Manager (TSM) client works in conjunction with the Tivoli Storage Manager server to protect the data on your server. It maintains backup versions of server files that can be restored if the original files are damaged or lost. It can also archive files, preserving their current state, so they can be recalled when necessary.
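For reference, the same protection is driven from the command-line client, dsmc; the commands below are only illustrative and the paths are examples, not part of this procedure:
       #dsmc incremental /export/home                [back up new and changed files under an example path]
       #dsmc archive "/export/home/user/*" -description="month-end copy"   [archive files with a description]
       #dsmc restore /export/home/user/file.txt      [restore a backup version of an example file]
       #dsmc retrieve /export/home/user/file.txt     [recall an archived copy of an example file]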
Installation Process:
  1.  Download the latest version of the TSM client software onto the target server
        EX: For x86 servers:   tr-ibm-tsmclient-solaris.6.1.0.2.x86.1.0.0.zip
              For SPARC servers: tr-ibm-tsmclient-solaris.6.1.0.2.Sparc.1.0.0.zip
       #cd /tmp
       #mkdir -p /tmp/tsm
       #unzip tr-ibm-tsmclient-solaris.6.1.0.2.x86.1.0.0.zip -d tsm/
       #cd tsm
       #ls
             dsm.opt                   install.ksh                         README_enu.htm        TIVsmCba.pkg
             dsm.sys                  install.ksh.17jan2011       removetsm.ksh           tsmadmin
             EXP-ICOS2863.txt    NOTICES.TXT                    S95tsmsched                upgrade.ksh
             inclexcl.txt               README_api_enu.htm    TIVsmCapi.pkg              version.txt
  
  2. Copy the configuration files and the startup script to their required locations
       #cp S95tsmsched /etc/rc3.d/
       #cp inclexcl.txt /opt/tivoli/tsm/client/ba/bin
       #cp dsm.opt /opt/tivoli/tsm/client/ba/bin
       #cp dsm.sys /opt/tivoli/tsm/client/ba/bin
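       NOTE: on a freshly built host the client directory may not exist until the packages are installed; creating it first (standard install path assumed) keeps the copies from failing:
       #mkdir -p /opt/tivoli/tsm/client/ba/bin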

  3.  Stop the TSM client scheduler, and remove any old packages that are already installed
       #/etc/rc3.d/S95tsmsched stop
       #ps -ef|grep dsmc
       #pkgrm TIVsmCba
       #pkgrm TIVsmCapi
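       NOTE: if the ps output still shows a scheduler process after the stop, end it manually before removing the packages (the PID below comes from that ps output):
       #ps -ef | grep '[d]smc'
       #kill <PID>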
  
  4.  Install the latest TSM client packages, and change the permissions of the startup script
       #pkgadd -d TIVsmCapi.pkg   [install the API package first; the backup-archive package depends on it]
       #pkgadd -d TIVsmCba.pkg
       #chmod 755 /etc/rc3.d/S95tsmsched
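       Optionally confirm that the new packages are registered and note their versions:
       #pkginfo -l TIVsmCapi | grep -i version
       #pkginfo -l TIVsmCba  | grep -i version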
  
  5.  Add the TSM server entries to the dsm.sys and dsm.opt files
         NOTE: first check the communication between the client and the server
         #telnet <server name> <port number>
         #vi /opt/tivoli/tsm/client/ba/bin/dsm.sys
            SERVERNAME                <tsm server name>
               TCPServeraddress       <tsm server name>
               COMMmethod             tcpip
               tcpport                <port number>
               COMPression            off
               passwordaccess         generate
               TCPBUFFSIZE            32
               TCPWINDOWSIZE          64
               schedlogret            10
               errorlogret            10
               schedmode              polling
               txnbytelimit           2097152
               commrestartduration    60
               commrestartinterval    15
               inclexcl               /opt/tivoli/tsm/client/ba/bin/inclexcl.txt
               errorlogname           /opt/tivoli/tsm/client/ba/bin/dsmerror.log
         #vi /opt/tivoli/tsm/client/ba/bin/dsm.opt
            SErvername                <tsm server name>
               SUbdir                 Yes
               dateformat             2
               replace                prompt
               compressalways         no
         NOTE: check the inclexcl.txt file and adjust its include/exclude rules for this server (a sample is shown below)
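         For reference, a minimal illustrative inclexcl.txt; the paths are examples only and must be adapted to this server:
              exclude.dir   /proc
              exclude.dir   /tmp
              exclude       /.../core
              include       /export/home/.../*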
    
    6.  Test the TSM client connectivity to the TSM server
          #dsmc query session

    7.  Start the TSM client scheduler
         #/etc/rc3.d/S95tsmsched start
         #ps -ef|grep dsmc 
           root 20270     1   0   Nov 10 ?           0:27    ./dsmc schedule
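         Optionally confirm that a schedule is assigned to this node and that the scheduler log is being written (dsmsched.log is assumed to be in the client bin directory):
         #dsmc query schedule
         #tail /opt/tivoli/tsm/client/ba/bin/dsmsched.log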
     
    8. Verify the client connectivity to the server; the output of "dsmc query session" should look similar to this:
         IBM Tivoli Storage Manager
           Command Line Backup-Archive Client Interface
           Client Version 6, Release 1, Level 0.2
           Client date/time: 13-11-2012 05:57:15
           (c) Copyright by IBM Corporation and other(s) 1990, 2009. All Rights Reserved.

           Node Name: <TSM client server name>
           Session established with server <TSM server name>: Solaris SPARC
           Server Version 5, Release 5, Level 4.2
           Server date/time: 13-11-2012 05:57:15  Last access: 13-11-2012 00:24:53

           TSM Server Connection Information

           Server Name.............: <TSM server Name>
           Server Type.............: Solaris SPARC
           Archive Retain Protect..: "No"
           Server Version..........: Ver. 5, Rel. 5, Lev. 4.2
           Last Access Date........: 13-11-2012 00:24:53
           Delete Backup Files.....: "No"
           Delete Archive Files....: "Yes"

          Node Name...............: <TSM client server name>
          User Name...............: root
 
 NOTE: Whenever we upgrade the TSM client to a newer version, copy the latest software to the target location, stop the TSM scheduler, make the "upgrade.ksh" script executable, run it, and then restart the scheduler.
 #/etc/rc3.d/S95tsmsched stop
 #cd /tmp/tsm
 #chmod 755 /tmp/tsm/upgrade.ksh
 #./upgrade.ksh
 #/etc/rc3.d/S95tsmsched start
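 Optionally confirm the upgrade took effect by checking the installed package version:
 #pkginfo -l TIVsmCba | grep -i version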

Monday, 5 November 2012

New Volume Configuration To The Existing Veritas Cluster


Reason for Change
The MySQL database generates a large volume of replication and elastic logs every hour; the volume they live on keeps filling up and causes platform outages. We therefore decided to dedicate a new volume in the DB cluster to these logs.
Procedure
1.Two new LUNs need to be available on each host (they must come from shared storage)
2.Once the new LUNs are added to the servers, label them and bring them under
   VERITAS control
   #cfgadm -al -o show_FCP_dev
    #export PATH=$PATH:/opt/VRTS/bin:/opt/VRTSvcs/bin:/etc/vx/bin
    #vxdisk scandisks new
    #vxdisk list
3.Initialize each disk and create a new disk group from the new LUNs
   #vxdisksetup -i <new LUN> 
    #vxdisksetup -i <2nd new LUN>
    #vxdisk list
    #vxdg init new_bindg new_bindisk01=<new LUN>
    #vxdg -g new_bindg adddisk new_bindisk02=<2nd new LUN>
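    Optionally confirm the disk group and its disks before the deport/import test:
    #vxdg list new_bindg
    #vxprint -g new_bindg -ht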
4.Check whether the new disk group can be deported and imported between the nodes
   #vxdg deport new_bindg   [needs to be done on A node]
    #vxdisk -o alldgs list
    #vxdg import new_bindg   [Needs to be done on B node]
    #vxvol -g new_bindg start new_binlogs [start the volume if one already exists]
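    NOTE: after this test, make sure the disk group ends up imported on the node where the volume will be created (node names as above):
    #vxdg deport new_bindg   [on B node]
    #vxdg import new_bindg   [on the node that will create the volume]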
5.Create a new striped volume using the new LUNs
    #vxassist -g new_bindg make new_binlogs 398g layout=stripe ncol=2 stwidth=8k new_bindisk01 new_bindisk02
    #vxprint -hvpst
6.Create the file system and test the volume with a temporary mount point
   #mkfs -F vxfs /dev/vx/rdsk/new_bindg/new_binlogs
    #mkdir -p /binlogs  [do it on both the nodes]
    #chown -R dba:other /binlogs [do it on both the nodes]
    #mount -F vxfs /dev/vx/dsk/new_bindg/new_binlogs /binlogs
    #umount /binlogs
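    NOTE: before the umount above, a quick sanity check confirms capacity and that the dba user can write; the test file name is arbitrary:
    #df -k /binlogs
    #su - dba -c "touch /binlogs/.writetest && rm /binlogs/.writetest"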
7. Freeze the service group before adding the new resources
   #cp /etc/VRTSvcs/conf/config/main.cf /etc/VRTSvcs/conf/config/main.cf.bkp
    #cp /etc/VRTSvcs/conf/config/types.cf /etc/VRTSvcs/conf/config/types.cf.bkp
    #haconf -makerw   [make the configuration writable]
    #hagrp -freeze newsDB -persistent
    #hastatus -sum
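    Optionally confirm that the group is actually frozen before changing the configuration:
    #hagrp -display newsDB | grep -i frozen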
8.Add disk group resource to the cluster
   #hares -add newsDBbin-dg DiskGroup newsDB
       VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors  [ignore notice]

   #hares -modify newsDBbin-dg DiskGroup new_bindg
   #hares -modify newsDBbin-dg Enabled 1
9.Add volume resource to the cluster 
   #hares -add newsDBbin-vol Volume newsDB
       VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors [ignore notice]
   #hares -modify newsDBbin-vol Volume new_binlogs
    #hares -modify newsDBbin-vol DiskGroup new_bindg
    #hares -modify newsDBbin-vol Enabled 1
10.Add mount resource to the cluster
     #hares -add newsDBbin-mnt Mount newsDB
          VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors [ignore notice]
    #hares -modify newsDBbin-mnt MountPoint /binlogs
    #hares -modify newsDBbin-mnt BlockDevice /dev/vx/dsk/new_bindg/new_binlogs
    #hares -modify newsDBbin-mnt FSType vxfs
    #hares -modify newsDBbin-mnt MountOpt rw
    #hares -modify newsDBbin-mnt FsckOpt %-y
    #hares -modify newsDBbin-mnt Enabled 1
11.Define the dependencies
    #hares -link newsDB-app newsDBbin-mnt
    #hares -link newsDBbin-mnt newsDB-mnt
    #hares -link newsDBbin-mnt newsDBbin-vol
    #hares -link newsDBbin-vol newsDBbin-dg
    #hares -link newsDBbin-mnt newsDBlog-mnt
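    Optionally verify the links with hares -dep, which lists parent/child resource dependencies:
    #hares -dep | grep newsDBbin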
12. Go through the configuration
    #more /etc/VRTSvcs/conf/config/main.cf 


13.Unfreeze the cluster
     #hagrp -unfreeze newsDB -persistent
      #haconf -dump -makero [save the configuration and make it read-only]
      #hacf -verify /etc/VRTSvcs/conf/config/  [check the configuration]
      #hastop -all  [the cluster needs to be stopped and restarted to pick up the new resources]
      #hastart  [do it on the A node first, then the B node]
     #hastatus -sum
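      A few optional post-start checks, using the group and resource names created above:
      #hagrp -state newsDB
      #hares -state newsDBbin-mnt
      #df -k /binlogs   [confirm the new file system is mounted by the cluster]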