28 March, 2012

Oracle 10g RAC Patch Set Upgrade from 10.2.0.1 to 10.2.0.4/10.2.0.5

Flavor: Red Hat Linux AS5
Note:
In the patch set upgrade, the patch p6810189_10204_Linux-x86.zip is the same for both Clusterware and the RAC database.

Oracle 10g RAC Upgrading

In an Oracle 10g RAC upgrade, you must upgrade the Clusterware software first, then the ASM home, and finally the database home.
1.      Upgrade Clusterware software (CRS_HOME)
2.      Upgrade ASM Home (This may or may not be a home separate from the RDBMS home)
3.      Upgrade Database Home (RDBMS_HOME)
4.      Then finally upgrade the RAC Database (Manual or DBCA etc)
Upgrading Clusterware Software:
The Oracle Clusterware software must be at the same or newer level as the Oracle software in the Oracle RAC Oracle home. Therefore, always upgrade Oracle Clusterware before you upgrade Oracle RAC; this patch does not allow you to upgrade Oracle RAC before Oracle Clusterware.
I strongly advise reading the patch set release notes or Doc ID 316900.1 before upgrading.
1. Download and extract the patch set installation software:
  • Download the patch set installation archive (patch number p6810189_10204_Linux-x86.zip for 10.2.0.4, or P8202632_10205_LINUX.zip for 10.2.0.5) to a directory that is not the Oracle home directory or under the Oracle home directory.
  • Enter the following command to unzip and extract the installation files: 
  • $ unzip p6810189_10204_Linux-x86
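The restriction above (extract outside the Oracle home) can be enforced with a small wrapper before unzipping. A minimal sketch; the function names and staging path are illustrative, not from the patch README:

```shell
#!/bin/sh
# Succeeds only if the given path is NOT the Oracle home or under it.
outside_oracle_home() {
    case "$1" in
        "$ORACLE_HOME"|"$ORACLE_HOME"/*) return 1 ;;
        *) return 0 ;;
    esac
}

# Unpack the patch set zip into a staging directory, refusing any
# destination inside ORACLE_HOME (per the note above).
stage_patchset() {
    zipfile=$1; stage_dir=$2
    outside_oracle_home "$stage_dir" || {
        echo "refusing: $stage_dir is under ORACLE_HOME" >&2
        return 1
    }
    mkdir -p "$stage_dir" && unzip -q "$zipfile" -d "$stage_dir"
}

# Example (illustrative paths):
# ORACLE_HOME=/oracle/product/10.2.0/db_1
# stage_patchset /tmp/p6810189_10204_Linux-x86.zip /oracle/stage/10204
```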
2. Manage your data with Time Zone before upgrade
The 10.2.0.4 patch set includes an update to the Oracle time zone definitions to Version 4. This version of the time zone definitions includes the changes to daylight saving time in the USA in 2007 and other updates.
From 9i onwards, Oracle has two data types whose stored data can be affected by an update of the RDBMS DST (Daylight Saving Time) definitions: TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ) and TIMESTAMP WITH TIME ZONE (TSTZ).
If you have TZ data stored in your database, go through the following steps to ensure the integrity of your data during the database upgrade.
Check which TIMEZONE version file you are currently using.
SQL> select version from v$timezone_file;
   VERSION
----------
         2
If this returns 4, you may simply proceed with the upgrade even if you have TZ data.
If this returns a value higher than 4, see MetaLink Note 553812.1.
If this returns a value lower than 4, perform the following steps (according to MetaLink Note 553812.1):
Download utltzpv4.sql and run it.
SQL> @utltzpv4.sql   
DROP TABLE sys.sys_tzuv2_temptab CASCADE CONSTRAINTS
              *
ERROR at line 1:
ORA-00942: table or view does not exist
Table created.
DROP TABLE sys.sys_tzuv2_affected_regions CASCADE CONSTRAINTS
               *
ERROR at line 1:
ORA-00942: table or view does not exist
Table created.
Your current timezone version is 2!
Do a select * from sys.sys_tzuv2_temptab; to see if any TIMEZONE data is affected by version 4 transition rules.
Any table with YES in the nested_tab column (last column) needs a manual check as these are nested tables.
PL/SQL procedure successfully completed.
Commit complete.
/* Once the script finishes successfully execute the following query */
column table_owner format a4
column column_name format a18
SQL> select * from sys.sys_tzuv2_temptab;
TABL TABLE_NAME                 COLUMN_NAME        ROWCOUNT NES
---- -------------------------- ------------------ -------- ---
SYS  SCHEDULER$_JOB             LAST_ENABLED_TIME         3
SYS  SCHEDULER$_JOB             LAST_END_DATE             1
SYS  SCHEDULER$_JOB             LAST_START_DATE           1
SYS  SCHEDULER$_JOB             NEXT_RUN_DATE             1
SYS  SCHEDULER$_JOB             START_DATE                1
SYS  SCHEDULER$_JOB_RUN_DETAILS REQ_START_DATE            1
SYS  SCHEDULER$_JOB_RUN_DETAILS START_DATE                1
SYS  SCHEDULER$_WINDOW          LAST_START_DATE           2
SYS  SCHEDULER$_WINDOW          NEXT_START_DATE           2
9 rows selected.
If it returns no rows, there is nothing that needs to be done. Just proceed with the upgrade.
If it returns details of columns that contain TZ data which may be affected by the upgrade, see MetaLink Note 553812.1.
As I understand it, MetaLink Note 553812.1 states that if you see only SYS-owned SCHEDULER objects here, you can ignore them and proceed. If user data or user-created jobs appear, take a backup of the data before the upgrade and restore it afterwards, and remove any user-created jobs and re-create them after the upgrade.
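The three-way decision above can be summarised in a small helper. A sketch, assuming (as stated) that the patch set's bundled time zone file is version 4:

```shell
#!/bin/sh
# Map the VERSION value from v$timezone_file to the pre-upgrade action
# described above; the 10.2.0.4 patch set ships time zone file version 4.
tz_action() {
    ver=$1; target=4
    if [ "$ver" -eq "$target" ]; then
        echo "proceed"              # same version: upgrade directly
    elif [ "$ver" -gt "$target" ]; then
        echo "see-note-553812.1"    # database is newer than the patch set file
    else
        echo "run-utltzpv4.sql"     # older: check for affected TZ data first
    fi
}
```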
Clusterware Patchset Installation:
From Oracle 9i onwards, two upgrade methods are available: rolling and non-rolling:
·         Rolling Upgrade (No Downtime)
·         Non Rolling Upgrade (Complete Downtime)

3. Rolling Upgrade (No Downtime) for Clusterware:
By default, Oracle supports rolling upgrade for Oracle Clusterware.
·         Shutdown The Oracle Instance on Node 1
·         Apply the patch to the Oracle Instance on Node 1
·         Start the Oracle Instance on Node 1
·         Shutdown the Oracle Instance on Node 2
·         Apply the patch to the Oracle Instance on Node 2
·         Start the Oracle Instance on Node 2
Perform the following steps on one node only:
In my case, I am going to upgrade the master node (Node 1, RAC1) first.
Shut down all processes in the Oracle home on the node that might be accessing a database, for example Oracle Enterprise Manager Database Control or iSQL*Plus:
$ emctl stop dbconsole
$ isqlplusctl stop
Note:
Before you shut down all processes that are monitored by Oracle Enterprise Manager Grid Control, set a blackout in Grid Control for the processes that you intend to shut down. This is necessary so that the availability records for these processes indicate that the shutdown was planned downtime rather than an unplanned system outage.
a.      Shut down all services in the Oracle home on the node that might be accessing a database:
$ srvctl stop service -d db_name [-s service_name_list [-i inst_name]]
Ex:  $ srvctl stop service -d racdb 
b.      Shut down all Oracle RAC instances on the node on which you intend to perform the rolling upgrade. To shut down Oracle RAC instances on individual nodes in the database, enter the following command where db_name is the name of the database:
$ srvctl stop instance -d db_name -i inst_name
Ex: $ srvctl stop instance -d racdb -i racdb1
c.       If an Automatic Storage Management instance is present, then shut down the Automatic Storage Management instance on the node on which you intend to perform the rolling upgrade. To shut down an Automatic Storage Management instance, enter the following command where node is the name of the node where the Automatic Storage Management instance is running:
$ srvctl stop asm -n node
Ex: $ srvctl stop asm -n rac1 
d.      Stop all node applications on the node on which you intend to perform the rolling upgrade. To stop node applications running on a node, log in as the root user and enter the following command, where node is the name of the node where the applications are running:
# srvctl stop nodeapps -n node
Ex: $ srvctl stop nodeapps -n rac1
e.       If the database is running on instance 1, relocate the database to the second instance:
# ./crs_relocate ora.racdb.db
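The per-node shutdown steps a-d above can be collected into one script. This sketch only echoes the commands so the order can be reviewed before running anything (the database, instance, and node names are this article's examples, and the nodeapps step must be run as root):

```shell
#!/bin/sh
# Print, in order, the pre-patch shutdown commands for one node
# (steps a-d above). Commands are echoed, not executed, so the
# sequence can be checked first.
node_shutdown_plan() {
    db=$1; inst=$2; node=$3
    echo "srvctl stop service -d $db"
    echo "srvctl stop instance -d $db -i $inst"
    echo "srvctl stop asm -n $node"
    echo "srvctl stop nodeapps -n $node"   # run this one as root
}

node_shutdown_plan racdb racdb1 rac1
```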

What files to Backup in Oracle Clusterware patch setup?

As per Note 754369.1:

Oracle recommends that you consider using operating system (OS) level backup tools and strategies whenever possible to back up the whole node, for faster restore and recovery of the Oracle installation to a previous consistent state.

When an OS-level backup of the whole node is not an option due to time or space constraints, you can back up the following list of files.

Scripts ‘init.*’ (init.cssd, init.crsd, etc.)

These files are used to start the daemons (CRS stack). On Linux they are located under the /etc/init.d directory.

‘inittab’ file: here is where the ‘init.*’ scripts are registered; Oracle Clusterware is started through ‘inittab’.

Control files: these files are used to control some aspects of Oracle Clusterware, such as enabling/disabling the stack.

These files, also known as SCLS_SRC files, are located under /etc/oracle or /var/opt/oracle.

Example:

For Linux:

/etc/init.d/init.cssd

/etc/init.d/init.crs

/etc/init.d/init.crsd

/etc/init.d/init.evmd

/etc/oracle

/etc/inittab

Use cp/tar to back up the above files.

Oracle recommends that you create a backup of the Oracle Inventory, Oracle 10g home and Oracle 10g Database before you install the patch set. If you are planning to apply this patch to Oracle RAC, ensure that you create a backup of these components on all nodes before applying the patch.

Copy the Oracle homes on all nodes:

#   cd /oracle/product/10.2.0/
crs    asm  db_1  oraInventory
# cp -r oraInventory /oracle/old_oraInventory
# tar czf /oracle/OraCRSHomebkp.tar.gz        crs
# tar czf /oracle/OraASMHomebkp.tar.gz      asm
# tar czf /oracle/OraDBHomebkp.tar.gz         db_1
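The cp/tar backups above can be wrapped in a loop. A minimal sketch using the article's example home names; the backup destination and tarball naming are illustrative:

```shell
#!/bin/sh
# Back up each named Oracle home under $base into a tarball in $dest.
# Usage: backup_homes <base_dir> <backup_dir> <home>...
backup_homes() {
    base=$1; dest=$2; shift 2
    mkdir -p "$dest" || return 1
    for home in "$@"; do
        tar czf "$dest/${home}_bkp.tar.gz" -C "$base" "$home" || return 1
    done
}

# Example with the article's home names (paths illustrative):
# backup_homes /oracle/product/10.2.0 /oracle/bkp crs asm db_1
```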
i.  Back up the Clusterware components:
The Clusterware components are the OCR and the voting disks.
Use the ‘dd’ OS command to back up the voting/OCR disk files if they are placed on raw storage devices.
Ex:
# ocrconfig -export /oracle/ocrexpbkp.dump
# ocrconfig -showbackup   (shows the automatic backup location for the OCR)
# ocrcheck
Votedisk:
$ crsctl query css votedisk
$ dd if=votedisk_name  of=backup_votedisk_name  bs=4k
Eg: dd if=/dev/raw/raw2 of=/oracle/votebkp.bak bs=4k
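The dd copy above is worth verifying byte-for-byte right after it is taken. The sketch below runs against an ordinary file, since a raw device such as /dev/raw/raw2 exists only on the cluster node:

```shell
#!/bin/sh
# dd-copy a voting disk image and verify the copy with cmp.
# On the cluster node the input would be the raw device (/dev/raw/raw2);
# any ordinary file behaves the same way for demonstration.
backup_votedisk() {
    src=$1; dst=$2
    dd if="$src" of="$dst" bs=4k 2>/dev/null && cmp -s "$src" "$dst"
}
```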
ii. It is also good practice to take an RMAN full backup of the RAC database.

4. Installing the Clusterware patch set:
a.    Log in as the oracle user. If you are not installing the software on the local computer, then run the following command on the remote machine:
For Bourne, Bash, or Korn shell:
$ export DISPLAY=local_host:0.0
For C shell:
% setenv DISPLAY local_host:0.0
In this example, local_host is the host name or IP address of the computer that you want to use to display Oracle Universal Installer.
Now to enable X applications, run the following command on the machine that you want to use to display Oracle Universal Installer:
$ xhost + [fully_qualified_remote_host_name]
b.   Enter the following commands to start Oracle Universal Installer, where patchset_directory is the directory where you unpacked the patch set software:
Before running runInstaller, check once whether all the services are down:
[root@rac1 bin]# ./crs_stat -t
$ cd patchset_directory/Disk1
$ ./runInstaller
Note:
The following instructions are displayed on the Oracle Universal Installer screen:
To complete the installation of this patch set, perform the following tasks on each node:
On Node 1:
a.  Log in as the root user and enter the following command to shut down the Oracle Clusterware:
# CRS_home/bin/crsctl stop crs
b.  Run the root102.sh script to automatically start the Oracle Clusterware on the patched node:
# CRS_home/install/root102.sh
On Node 2:
a.  Log in as the root user and enter the following command to shut down the Oracle Clusterware:
# CRS_home/bin/crsctl stop crs
b.  Run the root102.sh script to automatically start the Oracle Clusterware on the patched node:
# CRS_home/install/root102.sh
After patching node 2, all resources are back in the ONLINE state on both nodes. You have successfully applied the 10.2.0.4 patch set for Oracle Clusterware in a rolling fashion.
Note: Rolling Upgrade is not supported on Shared Oracle Homes. 
Non-Rolling Upgrade (Complete Downtime) for Clusterware:
For a non-rolling upgrade, Oracle Clusterware needs to be shut down completely. With a shared Oracle home, we must choose complete downtime.
Complete the following steps:
Note:  You must perform these steps in the order listed.
1. Shut down all processes in the Oracle home on each node that might be accessing a database, for example Oracle Enterprise Manager Database Control or iSQL*Plus:
$ emctl stop dbconsole
$ isqlplusctl stop
Note: 
Before you shut down all processes that are monitored by Oracle Enterprise Manager Grid Control, set a blackout in Grid Control for the processes that you intend to shut down. This is necessary so that the availability records for these processes indicate that the shutdown was planned downtime rather than an unplanned system outage.
4. Shut down all services in the Oracle home on each node that might be accessing a database:
$ srvctl stop service -d db_name [-s service_name_list [-i inst_name]]
5. Shut down all Oracle RAC instances on all cluster nodes by entering the following command, where db_name is the name of the database:
$ srvctl stop database -d db_name
6. If an Automatic Storage Management instance is present, then shut down the Automatic Storage Management instances on all cluster nodes by entering the following command, where node is the name of the node where the instance is running:
$ srvctl stop asm -n node
7. Stop all node applications on all cluster nodes by entering the following command as the root user, where node is the name of the node where the applications are running:
# srvctl stop nodeapps -n node
8. Shut down the Oracle Clusterware processes by entering the following command on all nodes as the root user:
# CRS_home/bin/crsctl stop crs
 Note:  The following instructions are displayed on the Oracle Universal Installer screen:
To complete the installation of this patch set, perform the following tasks on each node:
1. Log in as the root user and enter the following command to shut down the Oracle Clusterware:
# CRS_home/bin/crsctl stop crs
2. Run the root102.sh script to automatically start the Oracle Clusterware on the patched node:
# CRS_home/install/root102.sh
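Steps 4-8 above can likewise be laid out per cluster. Another echo-only sketch for review before execution; CRS_home is the placeholder used in this article, running crsctl over ssh is an assumption, and the nodeapps/crsctl steps must run as root:

```shell
#!/bin/sh
# Print the complete non-rolling shutdown sequence (steps 4-8 above)
# for the whole cluster. Commands are echoed, not executed.
cluster_shutdown_plan() {
    db=$1; shift
    echo "srvctl stop service -d $db"
    echo "srvctl stop database -d $db"
    for node in "$@"; do
        echo "srvctl stop asm -n $node"
        echo "srvctl stop nodeapps -n $node"
        echo "ssh $node CRS_home/bin/crsctl stop crs"
    done
}

cluster_shutdown_plan racdb rac1 rac2
```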

5.Oracle RAC Patchset Installation:

Let’s find the version of the RAC database:
[oracle@rac2 OPatch]$ ./opatch lsinventory
Invoking OPatch 10.2.0.1.0
Oracle interim Patch Installer version 10.2.0.1.0
Copyright (c) 2005, Oracle Corporation.  All rights reserved..
Oracle Home       : /oracle/product/10.2.0/rdbms
Central Inventory : /oracle/product/10.2.0/oraInventory
from           : /oracle/product/10.2.0/rdbms/oraInst.loc
OPatch version    : 10.2.0.1.0
OUI version       : 10.2.0.1.0
OUI location      : /oracle/product/10.2.0/rdbms/oui
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 10g                       10.2.0.1.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
Rac system comprising of multiple nodes
Local node = rac2
Remote node = rac1
--------------------------------------------------------------------------------
OPatch succeeded.
We can also find it with the following query:
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
Note: For the RAC database patch set, we need to bring down the whole database on both nodes; there is no rolling upgrade for the RAC database patch set.
Shut down the following Oracle Database 10g processes before installing the patch set:
1. Shut down all processes in the Oracle home on each node that might be accessing a database, for example Oracle Enterprise Manager Database Control or iSQL*Plus:
$ emctl stop dbconsole
$ isqlplusctl stop
Note:
Before you shut down all processes that are monitored by Oracle Enterprise Manager Grid Control, set a blackout in Grid Control for the processes that you intend to shut down. This is necessary so that the availability records for these processes indicate that the shutdown was planned downtime rather than an unplanned system outage.
2.      Shut down all services in the Oracle home on each node that might be accessing a database:
$ srvctl stop service -d db_name [-s service_name_list [-i inst_name]]
$ srvctl stop service -d racdb -s racdb
3.      Shut down all Oracle RAC instances on the nodes, which run from the Oracle home on which you are going to apply the patch set. To shut down all Oracle RAC instances for a database, enter the following command where db_name is the name of the database:
$ srvctl stop database -d db_name
$ srvctl stop database -d racdb
4.  If Automatic Storage Management instance is present, then shut down all Automatic Storage Management instances on all nodes. To shut down an Automatic Storage Management instance, enter the following command where node is the name of the node where the Automatic Storage Management instance is running:
$ srvctl stop asm -n node
$ srvctl stop asm -n rac1
$ srvctl stop asm -n rac2
This needs to be done only when patching the home from which ASM runs. It is recommended that ASM run from a different ORACLE_HOME than the database.
5.      Stop any listeners that are running from the Oracle home that you are patching on all nodes. To stop the listener running on a node, enter the following command, where node is the name of the node where the listener is running:
$ srvctl stop listener -n node [-l listenername]
$ srvctl stop listener -n rac1
$ srvctl stop listener -n rac2
Note:
Using a logical standby database, you can upgrade Oracle Database software and patch sets with almost no downtime. Using Data Guard SQL Apply, you can perform a rolling upgrade of the Oracle Database software from release 10.2.0.1 to the current patch set release. During a rolling upgrade, you can run different releases of Oracle Database on the primary and logical standby databases, while you upgrade them, one at a time, incurring minimal downtime on the primary database.

RAC Patch Installation Tasks

Note: 
In my scenario I have separate ASM and RDBMS Oracle homes, so I need to patch the ASM home first and then the RDBMS home.
We can install the patch set either interactively or non-interactively; in this scenario I am going to install it interactively.
Note:
If you also plan to apply this patch to Oracle Clusterware, ensure that you apply the patch to Oracle Clusterware before applying it to Oracle Database. This is because Oracle Clusterware should always be at the same or later version as the database.

Installing the Oracle Database 10g Patch Set Interactively

To install the Oracle Database 10g patch set interactively:
Note:
If you attempt to install this patch set in an Oracle home directory that does not contain an Oracle Database 10g release 10.2.0.1 or higher installation, Oracle Universal Installer displays a warning dialog with the following error:
OUI-10091: There are no patches that need to be applied from the patch set Oracle Database 10g Release 2 Patch Set 3 10.2.0.4
The Oracle Universal Installer does not allow the installation to proceed. Click OK, then click Cancel to end the installation.
1. Log in as the oracle user.
2. If you are not installing the software on the local computer, then run the following command on the remote machine:
For Bourne, Bash, or Korn shell:
$ export DISPLAY=local_host:0.0
For C shell:
% setenv DISPLAY local_host:0.0
In this example, local_host is the host name or IP address of the computer that you want to use to display Oracle Universal Installer, for example:
$ export DISPLAY=192.68.1.100:0.0
Now to enable X applications, run the following command on the machine that you want to use to display Oracle Universal Installer:
$ xhost +  [fully_qualified_remote_host_name]
3.  Enter the following commands to start Oracle Universal Installer, where patchset_directory is the directory where you unpacked the patch set software:
$ cd patchset_directory/Disk1
$ ./runInstaller
On the Welcome screen, click Next.

 4. On the Specify Home Details screen, select the name of the Oracle home that you want to update, or select the Path that you want to update from the list, and then click Next.
5.  If you are installing the patch set on an Oracle RAC cluster, click Next when the Selected Nodes screen appears.
6.   On the Product-specific Prerequisite Checks screen, correct any reported errors, and click Next.
 7.  Enter details regarding the CSI Number, OracleMetaLink Account user name and Country code on the Oracle Configuration Manager Registration screen and click Next.
  1. The registration and configuration can also be done manually after patchset installation.
  2. The OCM registration page appears only when applying the 10.2.0.4 patch over an existing database installation.
8.  On the Summary screen, click Install.
This screen lists all of the patches available for installation.

9.  When prompted, run the $ORACLE_HOME/root.sh script as the root user. If you are applying the patch set to an Oracle RAC installation, run the root.sh script on each node of the cluster.
On node1:
Log in as “root” user
#/oracle/product/10.2.0/asm/root.sh
On node2:
Log in as “root” user
#/oracle/product/10.2.0/asm/root.sh
10.  On the End of Installation screen, click Exit, then click Yes to exit from Oracle Universal Installer.
Now check the patch version:
[oracle@rac2 OPatch]$ ./opatch version
Invoking OPatch 10.2.0.4.2
OPatch Version: 10.2.0.4.2
OPatch succeeded.
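To confirm that the reported level is at least the target, a small dotted-version comparison helps. A sketch using sort, assuming plain numeric dotted versions such as those above:

```shell
#!/bin/sh
# Succeed if dotted version $1 is at least $2 (numeric fields),
# e.g. to sanity-check the level reported by 'opatch version'.
version_at_least() {
    highest=$(printf '%s\n%s\n' "$2" "$1" |
        sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | tail -1)
    [ "$highest" = "$1" ]
}

# Example (illustrative):
# version_at_least 10.2.0.4 10.2.0.1 && echo "patched to at least 10.2.0.4"
```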
6. Patch set installation on the RDBMS home:
For the RDBMS home, follow the same steps as in the ASM patch set installation above.
7. Manually Upgrading a RAC Database
After you install the patch set, you must perform the following steps on every database associated with the upgraded Oracle home:
Note:
If you do not run the catupgrd.sql script as described in this section and you start up a database for normal operation, then "ORA-01092: ORACLE instance terminated. Disconnection forced" errors will occur, and the error "ORA-39700: database must be opened with UPGRADE option" will appear in the alert log.
1. Log in as the Oracle software owner user.
2. For Oracle RAC installations, start the listener on each node of the cluster as follows:
$ srvctl start listener -n RAC1
3. If you are using Automatic Storage Management, start the Automatic Storage Management instance:
$ srvctl start asm -n RAC1
4. For Oracle RAC installations:
a. Use SQL*Plus to log in to the database as the SYS user with SYSDBA privileges:
$ sqlplus /nolog
SQL> CONNECT SYS AS SYSDBA
Enter password: SYS_password
SQL> STARTUP NOMOUNT
b. Set the CLUSTER_DATABASE initialization parameter to FALSE:
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=spfile;
c. Shut down the database:
SQL> SHUTDOWN
5. Enter the following SQL*Plus commands to start the RAC database in upgrade mode:
SQL> STARTUP UPGRADE
SQL> SPOOL patch.log
SQL> @?/rdbms/admin/catupgrd.sql
SQL> SPOOL OFF
6. Review the patch.log file for errors and inspect the list of components that is displayed at the end of the catupgrd.sql script. This list provides the version and status of each SERVER component in the database.
7. If necessary, rerun the catupgrd.sql script after correcting any problems.
8. Restart the database:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
9. Run the utlrp.sql script to recompile all invalid PL/SQL packages now, instead of when the packages are accessed for the first time. This step is optional but recommended.
SQL> @?/rdbms/admin/utlrp.sql
Note:
When the 10.2.0.4 patch set is applied to an Oracle Database 10g Standard Edition database, there may be 54 invalid objects after the utlrp.sql script runs. These objects belong to the unsupported components and do not affect the database operation.
Ignore any messages indicating that the database contains invalid recycle bin objects similar to the following:
BIN$4lzljWIt9gfgMFeM2hVSoA==$0
10.  Run the following command to check the status of all the components after the upgrade:
SQL> select comp_name, version, status from sys.dba_registry;
In the output of the preceding command, the status of all the components should be VALID for a successful upgrade.
11.  If you are using the Oracle Recovery Manager catalog, enter the following command:
$ rman catalog username/password@alias
12.  For the RAC database:
a. Set the CLUSTER_DATABASE initialization parameter to TRUE:
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=spfile;
b. Restart the database:
SQL> SHUTDOWN
SQL> STARTUP
c. Start any database services that you want to use:
$ srvctl start service -d db_name -s service_name
13.  To configure and secure Enterprise Manager, follow these steps:
In the case of Oracle Real Application Clusters (RAC), execute:
      $ emca -upgrade db -cluster
Finally, verify the RAC database upgrade.
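The review of the patch.log spool in step 6 can be partially automated with grep. This is only a rough filter and an assumption on my part: some ORA- messages in an upgrade spool are expected (for example ORA-00942 from DROP statements, as seen in the utltzpv4.sql output earlier), so any hits still need manual review:

```shell
#!/bin/sh
# Flag ORA-/PLS- errors in a catupgrd.sql spool file.
# Returns non-zero if any are found so the check can gate a rerun.
check_upgrade_log() {
    if grep -E 'ORA-[0-9]+|PLS-[0-9]+' "$1" >/dev/null; then
        echo "errors found in $1 -- review, fix, and rerun catupgrd.sql" >&2
        return 1
    fi
    echo "no ORA-/PLS- errors found in $1"
}

# Example: check_upgrade_log patch.log
```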