Sunday, 21 January 2018

Oracle 12c RAC DATABASE OCT 2017 (p26635815) PSU STEP BY STEP

Oracle 12c RAC DATABASE OCT2017 PSU

Summary


1) Download and unzip the latest OPatch to all cluster nodes
2) Stop the OEM agent and OSWatcher on both nodes if they are running
3) Stop the database instance on node1
4) Take a backup of the Oracle RAC home on node1 (first delete unnecessary files such as old trace and audit logs from the RAC home, otherwise the backup takes longer)
5) Start the database instance on node1
Since opatchauto stops the instance automatically while patching, the database instance should be up and running.
Verify:
select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version,  description from dba_registry_sqlpatch  order by action_time;
Check invalid objects and components on the database:
set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
col action_time format a13;
col action format a10;
col namespace format a10;
col comments format a25;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.instance_name,r.* from v$instance i, registry$history r; 

 EXECUTE dbms_stats.gather_dictionary_stats;

6) Perform validation of the Oracle home inventory on both nodes
 <ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>

Verify the patches on both the GI and RDBMS homes:
$ORACLE_HOME/OPatch/opatch lsinventory -oh $RDBMS_HOME
$ORACLE_HOME/OPatch/opatch lsinventory -oh $RDBMS_HOME | grep <patch no>

7) Check for conflicts in the 12.1 RAC home as the root user
#<ORACLE_RAC_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/26635815 -analyze -oh <ORACLE_RAC_HOME>
8) Upgrade the OPatch version to 12.1.0.1.7 or later.

9) Apply the patch 26635815 using the opatchauto utility as the root user
#<ORACLE_RAC_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/26635815 -oh <ORACLE_RAC_HOME>
10) Start the database instances on node1 if they are not started automatically
11) Stop the database instance on node2
12) Take a backup of the Oracle RAC home on node2
13) Start the database instance on node2
Since opatchauto stops the instance automatically while patching, the database instance should be up and running.
14) Repeat steps 6-10 on node2
15) Start the database instances on node2 if they are not started automatically
16) Verify: select * from dba_registry_sqlpatch where PATCH_ID=26635815;
17) Validate objects and components and verify that the patch is installed properly
18) Start the application and release all related jobs which were on hold
19) The application team checks that the application is working as before


Backout steps :


Roll back the applied patches using the steps below.


1) Stop the OEM agent and OSWatcher on both nodes
2) Stop the database instance on node1 and validate the Oracle inventory
3) On the 1st node, analyze the rollback first. As the root user, run:
#<GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -analyze
4) On the 1st node, roll back the patch from the GI Home using the opatchauto command. As the root user, run:
# <GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -oh <GI_HOME>
5) On node2, analyze the rollback:
#<GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -analyze
6) Start all database instances on node1 and stop all instances on node2
7) On the 2nd node, roll back the patch from the GI Home using the opatchauto command. As the root user, run:
# <GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -oh <GI_HOME>
8) Verify the clusterware components, confirm the patch was rolled back properly, and review the patch logs
9) Check the following log files in $ORACLE_HOME/sqlpatch/26713565/ for errors:
26713565_rollback_<database SID>_<CDB name>_<timestamp>.log

10) Start the database instances on node2
11) Check for any corruption in the inventory
12) Roll back the SQL changes in the database if required: datapatch -rollback all -force
13) Start the OEM agent and OSWatcher on both nodes
14) Inform the application team to start the application
15) Application verification and check of business objects
16) Release all related jobs which were on hold
17) Start DB monitoring on the database



The Opatch utility from 11g onward has automated the patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes when run with root privileges. It must be executed on each node in the cluster if the GI home or Oracle RAC database home is in non-shared storage. The utility should not be run in parallel on the cluster nodes.
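As a minimal sketch of that per-node flow (the paths are the examples used later in this post), run the analyze first and then the apply, as root, one node at a time:

# run as root on each node, never in parallel across nodes
export PATH=$PATH:/u02/oracle/12.1.0/grid/OPatch
opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze -oh /u01/oracle/database/product/12.1.0.2
opatchauto apply /staging/clusterware_12cpatch/26635815 -oh /u01/oracle/database/product/12.1.0.2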



The automated patch application process will automatically shut down and restart all database processes on the node. However, we do not want the automatic restart because we are applying two PSUs (one for the grid home and one for the database home). Disable the instance auto-start for node collabn1 and manually shut down the instance before patch application.


Note: this is a rolling patch, meaning it is applied on one node at a time while the other node remains available to the application.


    On a production system, all active connections would need to be migrated to the other instance before doing this (for example, with services).
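For example, a sketch of relocating a service to the surviving instance before stopping RAC1 (the service name is a placeholder; syntax as in 11.2/12.1 srvctl):

collabn1$ srvctl status service -d rac
collabn1$ srvctl relocate service -d rac -s <service_name> -i RAC1 -t RAC2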


collabn1$ srvctl disable instance -d rac -i RAC1

collabn1$ srvctl stop instance -d rac -i RAC1



Note: the Oracle home being patched should have sufficient free space (approximately 20 GB). There is no need to stop the database instances or clusterware services manually when the opatchauto utility is used, as opatchauto stops and starts them automatically. Also check the hidden .patch_storage directory. Clusterware patching usually takes 4 to 6 hours to complete.
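A quick way to check both, using the AIX-style df/du seen elsewhere in this post (the paths are examples):

df -g /u02                                        # free space on the filesystem holding the home
du -sg /u02/oracle/12.1.0/grid/.patch_storage     # size of the hidden .patch_storage directory
du -sg /u01/oracle/database/product/12.1.0.2/.patch_storage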

Detailed steps

Prerequisite


1) Check for sufficient space on the grid home and /tmp, and check cluster health on both nodes.
    On the grid home, a minimum of 20 GB should be available.
To check space on the Oracle home and /tmp:
    df -g
2) Run the commands below:

1) ps -ef | grep pmon            -- to see database status
2) ps -ef | grep tns             -- to see listener status
3) crsctl status resource -t     -- to see resource status
4) crs_stat -t                   -- to see resource status
5) crsctl check crs

from Grid user


 srvctl status database -d <Database Name>
srvctl config database -d <Database Name>

 ps -ef | grep d.bin


server1[/u02/oracle/12.1.0/grid]$ ps -ef | grep d.bin

  oracrs  7340258        1   3 11:12:06      -  5:26 /u02/oracle/12.1.0/grid/bin/gipcd.bin

    root  7471224        1   6 11:11:26      -  7:59 /u02/oracle/12.1.0/grid/bin/ohasd.bin reboot

  oracrs  8323106 12714060   1 11:12:07      -  1:48 /u02/oracle/12.1.0/grid/bin/evmlogger.bin -o /u02/oracle/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u02/oracle/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.log

  oracrs  8454172        1   5 11:12:03      -  7:16 /u02/oracle/12.1.0/grid/bin/oraagent.bin

    root  9568452        1   3 11:12:42      -  5:26 /u02/oracle/12.1.0/grid/bin/octssd.bin reboot

    root  9961542        1   9 11:14:18      - 13:06 /u02/oracle/12.1.0/grid/bin/crsd.bin reboot

  oracrs 10027170        1   2 11:12:04      -  1:51 /u02/oracle/12.1.0/grid/bin/mdnsd.bin

  oracrs 11075602        1   1 11:12:05      -  1:58 /u02/oracle/12.1.0/grid/bin/gpnpd.bin

    root 11206846        1   0 11:12:16      -  0:00 /bin/sh /u02/oracle/12.1.0/grid/bin/ocssd

    root 11534474        1  12 12:36:00      - 15:15 /u02/oracle/12.1.0/grid/bin/ologgerd -M -d /u02/oracle/12.1.0/grid/crf/db/server1

  oracrs 11600026        1   0 11:15:01      -  0:28 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit

  oracrs 11730952 11206846   2 11:12:16      -  7:02 /u02/oracle/12.1.0/grid/bin/ocssd.bin

    root 11993100        1   3 11:11:48      -  4:30 /u02/oracle/12.1.0/grid/bin/orarootagent.bin

    root 12058762        1   1 11:12:15      -  2:40 /u02/oracle/12.1.0/grid/bin/cssdagent

    root 12189862        1   2 11:12:15      -  2:36 /u02/oracle/12.1.0/grid/bin/cssdmonitor

    root 12255442        1  19 11:14:17      - 28:39 /u02/oracle/12.1.0/grid/bin/osysmond.bin

  oracrs 12714060        1   4 11:12:04      -  7:52 /u02/oracle/12.1.0/grid/bin/evmd.bin

    root 13435058        1  28 11:14:35      - 34:22 /u02/oracle/12.1.0/grid/bin/orarootagent.bin

  oracrs 13500578        1  21 11:14:34      - 11:59 /u02/oracle/12.1.0/grid/bin/oraagent.bin

   fsdatabase 13828286        1   5 11:15:09      -  7:31 /u02/oracle/12.1.0/grid/bin/oraagent.bin

  oracrs 26083376        1   0 12:32:54      -  0:11 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER_SCAN2 -inherit

fuo44ppt 27984070        1   5 11:36:17      -  7:39 /u02/oracle/12.1.0/grid/bin/oraagent.bin

fug44ppt 39583962        1   3 11:39:05      -  7:18 /u02/oracle/12.1.0/grid/bin/oraagent.bin

  oracrs 43909340        1   2 12:32:58      -  2:14 /u02/oracle/12.1.0/grid/bin/scriptagent.bin

fub44ppt 51249192        1   5 11:40:43      -  7:32 /u02/oracle/12.1.0/grid/bin/oraagent.bin

  oracrs 61014038        1   0 12:32:54      -  0:13 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER_SCAN3 -inherit

  oracrs 34144900        1   0 12:32:48      -  0:01 /u02/oracle/12.1.0/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit

  oracrs 36438724 38666928   2 06:09:03  pts/0  0:00 grep d.bin


 crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online


 crsctl status resource -t
 crs_stat -t

server1[/u02/oracle/12.1.0/grid]$  crs_stat -t

Name           Type           Target    State     Host

------------------------------------------------------------

diagsnap       local_resource OFFLINE   OFFLINE

ora.DATA1.dg   ora....up.type ONLINE    ONLINE    server1

ora....FRA1.dg ora....up.type ONLINE    ONLINE    server1

ora....ATA1.dg ora....up.type ONLINE    ONLINE    server1

ora....FRS1.dg ora....up.type ONLINE    ONLINE    server1

ora....ATA1.dg ora....up.type ONLINE    ONLINE    server1

ora....FRS1.dg ora....up.type ONLINE    ONLINE    server1

ora....ER.lsnr ora....er.type ONLINE    ONLINE    server1

ora....N1.lsnr ora....er.type ONLINE    ONLINE    server2

ora....N2.lsnr ora....er.type ONLINE    ONLINE    server1

ora....N3.lsnr ora....er.type ONLINE    ONLINE    server1

ora.MGMTLSNR   ora....nr.type ONLINE    ONLINE    server1

ora.VOTING.dg  ora....up.type ONLINE    ONLINE    server1

ora.asm        ora.asm.type   ONLINE    ONLINE    server1

ora.cvu        ora.cvu.type   ONLINE    ONLINE    server1

ora.fsdatabase.db   ora....se.type ONLINE    ONLINE    server2

ora....4ppt.db ora....se.type ONLINE    ONLINE    server2

ora....4ppt.db ora....se.type ONLINE    ONLINE    server1

ora....4ppt.db ora....se.type ONLINE    ONLINE    server1

ora.mgmtdb     ora....db.type ONLINE    ONLINE    server1

ora....network ora....rk.type ONLINE    ONLINE    server1

ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    server1

ora.ons        ora.ons.type   ONLINE    ONLINE    server1

ora....SM1.asm application    ONLINE    ONLINE    server1

ora....ON.lsnr application    ONLINE    ONLINE    server1

ora....ton.ons application    ONLINE    ONLINE    server1

ora....ton.vip ora....t1.type ONLINE    ONLINE    server1

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    server2

ora.scan2.vip  ora....ip.type ONLINE    ONLINE    server1

ora.scan3.vip  ora....ip.type ONLINE    ONLINE    server1

ora....SM2.asm application    ONLINE    ONLINE    server2

ora....ON.lsnr application    ONLINE    ONLINE    server2

ora.server2.ons  application    ONLINE    ONLINE    server2

ora.server2.vip  ora....t1.type ONLINE    ONLINE    server2

server1[/u02/oracle/12.1.0/grid]$


d) Log in to each database running on the server and check the session count

  

    select name,open_mode from v$database;

    select count(1),inst_id from gv$session group by inst_id

   alter system archive log current; ---  switch log on database.



e) Check invalid objects and stale stats.

spool pre_patch_invalid_objects_${ORACLE_SID}.log;
set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
select comp_id, VERSION , STATUS from dba_registry;
col TABLE_NAME for a30
col PARTITION_NAME for a20
col SUBPARTITION_NAME for a20
select OWNER,TABLE_NAME,PARTITION_NAME,SUBPARTITION_NAME,NUM_ROWS,LAST_ANALYZED from dba_TAB_STATISTICS where STALE_STATS='YES';
select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version,  description from dba_registry_sqlpatch  order by action_time;
spool off
If there are stale stats then run the following :
EXECUTE dbms_stats.gather_dictionary_stats;



col action_time format a13;
col action format a10;
col namespace format a10;
col comments format a25;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.instance_name,r.* from v$instance i, registry$history r;

3) Validate the Grid Home inventory and upgrade the OPatch version to 12.1.0.1.7 or later if required

<GRID_HOME>/OPatch/opatch lsinventory -detail -oh <GRID_HOME>


<GRID_HOME>/OPatch/opatch lsinventory

$ opatch lsinv -detail

4) Check the OPatch version. For each home that is being patched, run the following command as the home owner:

$ <ORACLE_HOME>/OPatch/opatch version

server1[/u02/oracle/12.1.0/grid]$ OPatch/opatch version

OPatch Version: 12.2.0.1.8


OPatch succeeded.

server1[/u02/oracle/12.1.0/grid]$

The version output of the previous command should be 12.1.0.1.7 or later.
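If an upgrade is needed, the usual approach is to download the latest OPatch (patch 6880880) and unzip it into each home as the home owner; the zip file name below is an example for AIX and may differ for your platform and version:

$ cd /u02/oracle/12.1.0/grid
$ mv OPatch OPatch.old                       # keep the existing OPatch as a backup
$ unzip -o /staging/p6880880_121010_AIX64-5L.zip -d /u02/oracle/12.1.0/grid
$ OPatch/opatch version

Repeat the same for the RDBMS home.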


5) Unzip the downloaded OCT 2017 PSU as the Grid user

$ cd <UNZIPPED_PATCH_LOCATION>

Check that the directory is empty.

$ ls

Unzip the patch as grid home owner.

$ unzip p26635815_121020_<platform>.zip


If required, change the permissions, e.g.:


chmod 775 p26635815_121020_AIX64-5L.zip


From root user

chown oracrs:dba p26635815_121020_AIX64-5L.zip


from oracrs


unzip p26635815_121020_AIX64-5L.zip

[node1:database:OPatch] ls -ltr /staging/12ccluster_patch/26635815
total 176
drwxr-xr-x    5 oracrs   dba             256 Oct  3 12:01 26392164
drwxr-xr-x    5 oracrs   dba             256 Oct  3 12:04 26392192
drwxrwxr-x   15 oracrs   dba            4096 Oct  3 12:04 26713565
drwxr-xr-x    6 oracrs   dba             256 Oct  3 12:04 21436941
drwxr-xr-x    2 oracrs   dba            4096 Oct  3 12:04 automation
-rw-r--r--    1 oracrs   dba               0 Oct  3 12:05 README.txt
-rw-rw-r--    1 oracrs   dba            4987 Oct  3 14:36 bundle.xml
-rw-rw-r--    1 oracrs   dba           73591 Oct 16 16:13 README.html
[heysham:fusppt:OPatch]



6) Check for conflicts in  the 12.1 GI home (this is new in 12c)

a) This checks all available GRID and ORACLE homes. Run as the root user:

#GRID_HOME/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/26635815 -analyze

b) To analyze a specific home only, run as the root user:

./opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze -oh /u01/oracle/database/product/12.1.0.2

uid=0(root) gid=0(system) groups=208(tivlogs)

[server1:root:/staging/clusterware_12cpatch:] /staging/clusterware_12cpatch/26635815

ksh: /staging/clusterware_12cpatch/26635815: cannot execute

[server1:root:/staging/clusterware_12cpatch:] ls -ltr /staging/clusterware_12cpatch/26635815

total 176

drwxr-xr-x    5 oracrs   dba             256 Oct  3 12:01 26392164

drwxr-xr-x    5 oracrs   dba             256 Oct  3 12:04 26392192

drwxrwxr-x   15 oracrs   dba            4096 Oct  3 12:04 26713565

drwxr-xr-x    6 oracrs   dba             256 Oct  3 12:04 21436941

drwxr-xr-x    2 oracrs   dba            4096 Oct  3 12:04 automation

-rw-r--r--    1 oracrs   dba               0 Oct  3 12:05 README.txt

-rw-rw-r--    1 oracrs   dba            4987 Oct  3 14:36 bundle.xml

-rw-rw-r--    1 oracrs   dba           73591 Oct 16 16:13 README.html

[server1:root:/staging/clusterware_12cpatch:] cd /u02/oracle/12.1.0/grid/OPatch

[server1:root:/u02/oracle/12.1.0/grid/OPatch:] id

uid=0(root) gid=0(system) groups=208(tivlogs)

[server1:root:/u02/oracle/12.1.0/grid/OPatch:] df -g .

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/u02         139.00     27.37   81%   766737    11% /u02

[server1:root:/u02/oracle/12.1.0/grid/OPatch:]

7) Check for conflicts in the 12.1 Oracle RAC home

Run it as the root user; opatchauto must be run from one of the homes specified.


[node1:root:/home/root:] cd /u01/oracle/database/product/12.1.0.2
[node1:root:/u01/oracle/database/product/12.1.0.2:] cd OPatch
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] ./opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze -oh /u01/oracle/database/product/12.1.0.2

OPatchauto session is initiated at Sun Jan 21 12:15:08 2018

System initialization log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchautodb/systemconfig2018-01-21_12-15-20PM.log.

Session log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/opatchauto2018-01-21_12-17-13PM.log
The id for this session is NWNS

Executing OPatch prereq operations to verify patch applicability on home /u01/oracle/database/product/12.1.0.2
Patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2


Verifying patch inventory on home /u01/oracle/database/product/12.1.0.2
Patch inventory verified successfully on home /u01/oracle/database/product/12.1.0.2


Verifying SQL patch applicablity on home /u01/oracle/database/product/12.1.0.2

Following step failed during analysis:
 /bin/sh -c 'ORACLE_HOME=/u01/oracle/database/product/12.1.0.2 ORACLE_SID=FSdatabase1 /u01/oracle/database/product/12.1.0.2/OPatch/datapatch -prereq'

SQL patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:node1
RAC Home:/u01/oracle/database/product/12.1.0.2


==Following patches were SKIPPED:

Patch: /staging/clusterware_12cpatch/26635815/21436941
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /staging/clusterware_12cpatch/26635815/26392164
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /staging/clusterware_12cpatch/26635815/26392192
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-17-30PM_1.log

Patch: /staging/clusterware_12cpatch/26635815/26713565
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-17-30PM_1.log



OPatchauto session completed at Sun Jan 21 12:22:36 2018
Time taken to complete the session 7 minutes, 35 seconds
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:]





8) Take a backup of the Oracle RAC home

cd $ORACLE_HOME
pwd
tar -cvf /backup/home_database.tar .
ls -ltr /backup/home_database.tar

du -sg /backup/home_database.tar
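Before taking the tar backup you can purge old trace and audit files, as noted in the summary; the retention and paths below are examples only, so adjust them to your diagnostic_dest and audit_file_dest:

find $ORACLE_BASE/diag -name "*.trc" -mtime +30 -type f -exec rm -f {} \;
find $ORACLE_HOME/rdbms/audit -name "*.aud" -mtime +30 -type f -exec rm -f {} \;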

9) Apply the patch using the opatchauto command. As the root user:

export PATH=$PATH:<GI_HOME>/OPatch

e.g.


export PATH=$PATH:/u02/oracle/12.1.0/grid/OPatch

ls -ltr /u02/oracle/12.1.0/grid/OPatch

ls -ltr /u02/oracle/12.1.0/grid

ls -ltr /patch/clusterware_12cpatch/26635815


# <GRID_HOME>/OPatch/opatchauto apply <PATH_TO_PATCH_DIRECTORY> -oh <GRID_HOME>


echo $PATH


# export PATH=$PATH:/u01/oracle/database/product/12.1.0.2/OPatch

# opatchauto apply /staging/clusterware_12cpatch/26635815 -oh /u01/oracle/database/product/12.1.0.2

echo $PATH

e.g.

[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] export PATH=$PATH:/u01/oracle/database/product/12.1.0.2/OPatch
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] ./opatchauto apply /staging/clusterware_12cpatch/26635815  -oh /u01/oracle/database/product/12.1.0.2

OPatchauto session is initiated at Sun Jan 21 12:25:45 2018

System initialization log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchautodb/systemconfig2018-01-21_12-25-52PM.log.

Session log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/opatchauto2018-01-21_12-26-39PM.log
The id for this session is XDTU

Executing OPatch prereq operations to verify patch applicability on home /u01/oracle/database/product/12.1.0.2
Patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2


Verifying patch inventory on home /u01/oracle/database/product/12.1.0.2
Patch inventory verified successfully on home /u01/oracle/database/product/12.1.0.2


Verifying SQL patch applicablity on home /u01/oracle/database/product/12.1.0.2
SQL patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2


Preparing to bring down database service on home /u01/oracle/database/product/12.1.0.2
Successfully prepared home /u01/oracle/database/product/12.1.0.2 to bring down database service


Bringing down database service on home /u01/oracle/database/product/12.1.0.2
Following database(s) and/or service(s) are stopped and will be restarted later during the session: fsdatabase
Database service successfully brought down on home /u01/oracle/database/product/12.1.0.2


Performing prepatch operation on home /u01/oracle/database/product/12.1.0.2
Perpatch operation completed successfully on home /u01/oracle/database/product/12.1.0.2


Start applying binary patch on home /u01/oracle/database/product/12.1.0.2
Successfully executed command: /usr/sbin/slibclean

Binary patch applied successfully on home /u01/oracle/database/product/12.1.0.2


Performing postpatch operation on home /u01/oracle/database/product/12.1.0.2
Postpatch operation completed successfully on home /u01/oracle/database/product/12.1.0.2


Starting database service on home /u01/oracle/database/product/12.1.0.2
Database service successfully started on home /u01/oracle/database/product/12.1.0.2


Preparing home /u01/oracle/database/product/12.1.0.2 after database service restarted
No step execution required.........
Prepared home /u01/oracle/database/product/12.1.0.2 successfully after database service restarted


Trying to apply SQL patch on home /u01/oracle/database/product/12.1.0.2
SQL patch applied successfully on home /u01/oracle/database/product/12.1.0.2


Verifying patches applied on home /u01/oracle/database/product/12.1.0.2
Patch verification completed with warning on home /u01/oracle/database/product/12.1.0.2

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:node1
RAC Home:/u01/oracle/database/product/12.1.0.2
Summary:

==Following patches were SKIPPED:

Patch: /staging/clusterware_12cpatch/26635815/21436941
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /staging/clusterware_12cpatch/26635815/26392164
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /staging/clusterware_12cpatch/26635815/26392192
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-33-00PM_1.log

Patch: /staging/clusterware_12cpatch/26635815/26713565
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-33-00PM_1.log



OPatchauto session completed at Sun Jan 21 12:40:58 2018
Time taken to complete the session 15 minutes, 15 seconds
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] ps -ef|grep pmon




10) Once the patch has completed on the first node, the clusterware and all Oracle database instances need to be started if they were not started automatically.




We will follow the same steps on the other node.


Verification steps after clusterware and database instance startup

1) ps -ef | grep pmon            -- to see database status

2) ps -ef | grep tns             -- to see listener status

3) crsctl status resource -t     -- to see resource status

4) crs_stat -t                   -- to see resource status




5) Check the applied SQL patches and registry components:

SQL> select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version,  description from dba_registry_sqlpatch  order by action_time;

ACTION_TIME_2          PATCH_ID  PATCH_UID ACTION          VERSION
-------------------- ---------- ---------- --------------- --------------------
DESCRIPTION
--------------------------------------------------------------------------------
25-MAY-2016            22291127   19983562 APPLY           12.1.0.2
Database Patch Set Update : 12.1.0.2.160419 (22291127)

28-DEC-2016            24006101   20648640 APPLY           12.1.0.2
Database Patch Set Update : 12.1.0.2.161018 (24006101)

21-JAN-2018            26713565   21606134 APPLY           12.1.0.2
DATABASE PATCH SET UPDATE 12.1.0.2.171017

select comp_name, version, status from dba_registry order by comp_name;
SQL> exit

6) Check the registry history:

set lines 200 pages 200
col action_time format a40;
col action format a30;
col namespace format a10;
col comments format a30;
col version format a13;
set lin 200 head on feed on pagesize 100;

select i.INST_ID,i.instance_name,r.* from gv$instance i, registry$history r;

7) Check the alert logs for ASM and all database instances.

8) Check whether the sessions are balanced:

select name,open_mode from v$database;

select count(1),inst_id from gv$session group by inst_id;

9) Validate the inventory after the patch


<GRID_HOME>/OPatch/opatch lsinventory
 EXECUTE dbms_stats.gather_dictionary_stats;

select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version,  description from dba_registry_sqlpatch  order by action_time;
Check invalid objects and components on the database:

set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
col action_time format a13;



If the database has somehow not been updated with the latest SQL patch, you can run datapatch again from the last node after setting ORACLE_HOME and ORACLE_SID.
Shut down all instances except the one on the last node,

and run the command below:
$ORACLE_HOME/OPatch/datapatch
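A minimal sketch of that rerun, assuming the home and SID used earlier in this post:

$ export ORACLE_HOME=/u01/oracle/database/product/12.1.0.2
$ export ORACLE_SID=FSdatabase1
$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose

Then confirm the result in the database:

SQL> select patch_id, action, status from dba_registry_sqlpatch where patch_id = 26713565;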
 




============= Startup and stop steps ==================




srvctl stop instance -d <database name> -i <database instance> -o immediate


From Root User

cd <grid_home>/bin

crsctl stop crs


startup process



From Root User


cd <grid_home>/bin

crsctl start crs


From GRID HOME


srvctl start instance -d <database name> -i <database instance>





Advanced Option for OPatch

Here are some of the advanced options of OPatch utility.

UpdateRemoteNodes

It is used to propagate/remove files/directories to/from remote nodes using files under ORACLE_HOME/.patch_storage/<ID>/rac/*.

The directories listed in copy_dirs.txt will be copied to remote nodes.

The files listed in copy_files.txt will be copied to remote nodes.

The directories listed in remove_dirs.txt will be deleted from remote nodes.

The files listed in remove_files.txt will be deleted from remote nodes.

This is used where RAC setup is present.

Cleanup

It is used to clean up ‘restore.sh, make.txt’ files and ‘rac, scratch, backup’ directories in the ORACLE_HOME/.patch_storage directory. If -ps option is used, then it cleans the above specified areas only for that patch, else for all patches under ORACLE_HOME/.patch_storage. You will be still able to rollback patches after this cleanup.

Example: opatch util cleanup -ps 6121183_Jun_21_2007_04_19_42

CopyListedFiles

It is used to copy all files listed in ORACLE_HOME/.patch_storage/<ID>/rac/copy_files.txt to remote nodes. If -fp option is used, then one can specify the path of the file containing the list of files to be copied. The files mentioned in this file will be copied to the remote nodes.

Example: opatch util copylistedfiles -fp a -remote_nodes ceintcb-a5

This is used where RAC setup is present.

CopyListedFilesTest

It is used to copy a single file to remote nodes. The usage remains the same as CopyListedFiles.

Example: opatch util copylistedfilestest -fp /home/oracle/a -remote_nodes ceintcb-a5

This is used where RAC setup is present.

CopyListedDirs

It is used to recursively copy all directories listed in ORACLE_HOME/.patch_storage/<ID>/rac/copy_dirs.txt to remote nodes. If -dp option is used, then one can specify the path of the file containing the list of directories to be copied. The directories mentioned in this file will be copied to the remote nodes.

This is used where RAC setup is present.

CopyListedDirsTest

It is used to copy a single file to remote nodes. The usage remains the same as CopyListedDirs.

This is used where RAC setup is present.

RemoveListedFiles

It is used to remove files listed in ORACLE_HOME/.patch_storage/<ID>/rac/remove_files.txt on remote nodes. If -fr option is used, then one can specify the path of the file containing the list of files to be removed. The files mentioned in this file will be removed from the remote nodes.

This is used where RAC setup is present.

RemoveListedFilesTest

It is used to remove a single file from remote nodes. The usage remains the same as RemoveListedFiles.

This is used where RAC setup is present.

RemoveListedDirs

It is used to recursively remove directories listed in ORACLE_HOME/.patch_storage/<ID>/rac/remove_dirs.txt from remote nodes. If -dr option is used, then one can specify the path of the file containing the list of directories to be removed. The directories mentioned in this file will be removed from the remote nodes.

This is used where RAC setup is present.

RemoveListedDirsTest

It is used to remove a single directory from remote nodes. The usage remains the same as RemoveListedDirs.

This is used where RAC setup is present.

RunLocalMake

It is used to invoke re-link on the local node. The make commands are stored in ORACLE_HOME/.patch_storage/<ID>/make.txt. You need to use the -ps option to specify the Patch ID with timestamp. A directory by this name will be present under ORACLE_HOME/.patch_storage. The make.txt file present under ORACLE_HOME/.patch_storage/<Patch ID with timestamp>/ will be used to perform the local make operation. This command cannot be run if you have already run Cleanup as it would have removed these make.txt files.

Example: opatch util runlocalmake -ps 6121250_Jun_21_2007_04_16_11

RunRemoteMake

It is used to invoke re-link on remote nodes. The make commands are stored in

ORACLE_HOME/.patch_storage/<ID>/rac/makes_cmd.txt. The usage remains the same as RunLocalMake.

This is used where RAC setup is present.

RunAnyCommand

It is used to run any command on remote nodes. The command should be specified using the -cmd option.

Example: opatch util runanycommand -remote_nodes ceintcb-a5 -cmd ls

This is used where RAC setup is present.

LoadXML

It is used to check the validity of an XML file. The -xmlInput option can be used to specify the path of the xml file.

@ Support can use this utility to verify the integrity of XML files contained in the local and central inventory.

Example: opatch util loadxml -xmlInput $ORACLE_HOME/inventory/ContentsXML/comps.xml

Verify

It is used to run the patch verification process to ensure that the patch was applied to the ORACLE_HOME. It uses the defined ORACLE_HOME and the given patch location via -ph, to run the check.

@ Support can use this utility to re-run the OPatch verification stage, if required for patch diagnosis. The patch staging area has to be present or created.

Example: opatch util verify -ph ~/6646853/6121183

Troubleshooting

Sometimes the inventory gets corrupted because of some issue. In that case you need to repair the inventory. Following are the methods which I know can be used to repair the inventory.

1) Development has released a utility (checkinv) which can be used to repair the inventory if it becomes corrupted for some reason.

Please note that this version of checkinv works only for the 9.2 and 10.1 releases of the Oracle database. It does not work for the 10.2 releases of the Oracle database.

You can download the checkinv utility from Patch 4121946.

To clean up the components, determine the unwanted or dangling components and then specify them using the "-remcomp" option to clean up the component list.

The utility detects and asks user’s permission to repair a bad inventory.  In case you want to repair the inventory, the utility will first back up the files before any modification.

However, it is better to back up the inventory yourself before running the tool.
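A simple way to take that backup, assuming the central inventory location is recorded in /etc/oraInst.loc (as on AIX/Linux; the backup path is an example):

# cat /etc/oraInst.loc                        # inventory_loc points to the central inventory
# tar -cvf /backup/oraInventory_`hostname`.tar <inventory_loc>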


Issue after applying OCT 2017 PSU (26635815) on 12c clusterware 12.1.0.2.0 through the opatchauto utility: databases were not getting started through the srvctl command.

Issue after applying OCT 2017 PSU (26635815) on clusterware through the opatchauto utility

Issue description:

After applying the PSU on the 12c clusterware (12.1.0.2.0), databases were not getting started through the srvctl command; they showed the error below. We finally started the database through SQL*Plus, and we also got the following error when running the analyze command against the Oracle RAC home:


[node1:root:/u02/oracle/12.1.0/grid:] cd OPatch
[node1:root:/u02/oracle/12.1.0/grid/OPatch:] ./opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze -oh /u01/oracle/NFT/product/12.1.0.2

opatchauto must run from one of the homes specified
opatchauto returns with error code = 2
[node1:root:/u02/oracle/12.1.0/grid/OPatch:]  ./opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze

OPatchauto session is initiated at Sun Jan 21 13:38:10 2018

System initialization log file is /u02/oracle/12.1.0/grid/cfgtoollogs/opatchautodb/systemconfig2018-01-21_01-38-20PM.log.

Failed:
ERROR:
Oracle Grid Infrastructure not configured.
You cannot run '/u02/oracle/12.1.0/grid/bin/cluvfy' without the Oracle Grid Infrastructure.

The result of cluvfy command does not contain OVERALL_STATUS String.
oracle.dbsysmodel.driver.sdk.productdriver.ProductDriverException: Unable to determine if "/u01/oracle/NFT/product/12.1.0.2" is a shared oracle home.
Failed:
ERROR:
Oracle Grid Infrastructure not configured.
You cannot run '/u02/oracle/12.1.0/grid/bin/cluvfy' without the Oracle Grid Infrastructure.

The result of cluvfy command does not contain OVERALL_STATUS String.


Solution

After applying the OCT 2017 PSU on the Grid Home through opatchauto, I believe the permissions of the files below somehow changed, which prevented databases registered with the grid home from being started through srvctl.

Take a backup of the Oracle RAC home


cd $ORACLE_HOME
pwd
tar -cvf /oswatcher/CH21417/fsnft_home_database.tar .
ls -ltr /oswatcher/CH21417/fsnft_home_database.tar

Note details

 Doc ID 1989922.1. 

 Check and set permissions to 755 on the files below in the grid home on all nodes

 File Name
 libclntsh.so, libclntshcore.so, libocci.so and libttsh12.so

 node1[/u02/oracle/12.1.0/grid]$ cd lib
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr libclntsh.so
-rwx------    1 oracrs   dba        61823455 Dec 30 07:42 libclntsh.so
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr  libclntshcore.so
-rwx------    1 oracrs   dba         4189584 Dec 30 07:42 libclntshcore.so
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr  libocci.so
-rwx------    1 oracrs   dba         4996222 Dec 30 07:42 libocci.so
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr  libttsh12.so
-rwx------    1 oracrs   dba        83317612 Dec 30 07:42 libttsh12.so
node1[/u02/oracle/12.1.0/grid/lib]$
node1[/u02/oracle/12.1.0/grid/lib]$ chmod 755 libclntsh.so
node1[/u02/oracle/12.1.0/grid/lib]$ chmod 755  libclntshcore.so
node1[/u02/oracle/12.1.0/grid/lib]$ chmod 755  libocci.so
node1[/u02/oracle/12.1.0/grid/lib]$ chmod 755   libttsh12.so
node1[/u02/oracle/12.1.0/grid/lib]$
node1[/u02/oracle/12.1.0/grid/lib]$chmod 775 libclntshcore.so
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr libclntsh.so
-rwxr-xr-x    1 oracrs   dba        61823455 Dec 30 07:42 libclntsh.so
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr  libclntshcore.so
-rwxr-xr-x    1 oracrs   dba         4189584 Dec 30 07:42 libclntshcore.so

node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr  libocci.so
-rwxr-xr-x    1 oracrs   dba         4996222 Dec 30 07:42 libocci.so
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr  libttsh12.so
-rwxr-xr-x    1 oracrs   dba        83317612 Dec 30 07:42 libttsh12.so

Stop and start the database through the srvctl command

srvctl status instance -d <database> -i <instance1>

srvctl start instance -d <database> -i <instance1>
srvctl status instance -d <database> -i <instance1>
srvctl stop instance -d <database> -i <instance1>

Please repeat the same steps on the other node.

Saturday, 20 January 2018

Step by Step Manual Apply Rolling PSU Patch on Grid Home And Oracle Database 11g RAC Environment

 Step by Step Manual Apply Rolling PSU Patch on Grid Home And Oracle Database 11g RAC Environment

Apply Patch 22191577 latest GI PSU on RAC and DB homes using Manual steps

Patch 22191577: GRID INFRASTRUCTURE PATCH SET UPDATE 11.2.0.4.160119 (JAN2016)

1) Unzip the patch 22191577
2) Unzip latest Opatch Version in oracle home and crs home

Once this completes successfully, make sure the OPatch folders in both the CRS and ORACLE homes are owned by oracle:dba, as shown below.
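If the ownership needs correcting, something like the following should do it (run as root; owner and group as stated above, paths taken from this post):

# chown -R oracle:dba /u01/app/oracle/product/11.2.0/dbhome_2/OPatch
# chown -R oracle:dba /u01/app/11204/OPatch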

Opatch Version check: ./opatch version

Opatch Inventory check: ./opatch lsinventory

Opatch conflict checking: ./opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir /u02/patch/22191577

After this, create an OCM response file. You can create it in either the grid OPatch folder or the DB home OPatch folder. It will be used if you are using opatch auto.

In my case I created the file in both

/u01/app/oracle/product/11.2.0/dbhome_2/OPatch/ocm/bin/emocmrsp
/u01/app/11204/OPatch/ocm/bin/emocmrsp

Skip the email ID and select Y. A file named ocm.rsp will be created.
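For reference, a typical interactive run looks roughly like this, matching the description above:

$ cd /u01/app/oracle/product/11.2.0/dbhome_2/OPatch/ocm/bin
$ ./emocmrsp
# press Enter to skip the email address prompt, then answer Y to remain uninformed;
# ocm.rsp is written to the current directory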

We have main patch folder 22191577

In that folder we have 3 patch folders

21948348 - DB and Grid Home
21948347 - DB and Grid Home
21948355 - Only Grid Home

You can find which patch applies to which home in the patch README document

In my case

ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_2/
GRID_HOME=/u01/app/11204/


************************************************************************

If using opatch auto, the user must be root. You don't need to shut down anything; OPatch does it automatically.

opatch auto
Root user
#./opatch auto /u02/patch/22191577 -ocmrf /u01/app/oracle/product/11.2.0/dbhome_2/OPatch/ocm/bin/ocm.rsp

************************************************************************

If you choose to do it manually, you need to shut down the instance and CRS yourself

Shutdown instances on node1 as oracle user
srvctl stop instance -d prod -i prod1

Shutdown crs on node1 as root user
crsctl stop crs

Root user

cd $GRID_HOME/crs/install

./rootcrs.pl -unlock   

(If you don't execute this, you get permission errors related to patch storage and OPatch complains that it cannot read and write files.)


Oracle user
Grid home

/u01/app/11204/OPatch/opatch napply -oh /u01/app/11204 -local /u02/patch/22191577/21948348

/u01/app/11204/OPatch/opatch napply -oh /u01/app/11204 -local /u02/patch/22191577/21948355

/u01/app/11204/OPatch/opatch apply -oh /u01/app/11204 -local /u02/patch/22191577/21948347

Oracle user
DB home

/u02/patch/22191577/21948348/custom/server/21948348/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_2

/u01/app/oracle/product/11.2.0/dbhome_2/OPatch/opatch napply -oh /u01/app/oracle/product/11.2.0/dbhome_2/. -local /u02/patch/22191577/21948348/custom/server/21948348

/u01/app/oracle/product/11.2.0/dbhome_2/OPatch/opatch apply -oh /u01/app/oracle/product/11.2.0/dbhome_2/ -local /u02/patch/22191577/21948347

/u02/patch/22191577/21948348/custom/server/21948348/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_2

After completion

Root user

# /u01/app/oracle/product/11.2.0/dbhome_2/rdbms/install/rootadd_rdbms.sh
# /u01/app/11204/crs/install/rootcrs.pl -patch


Once you are done patching the first node (node1):
Start the instance
srvctl start instance -d prod -i prod1

***********************************************************************

Once 1st instance is started repeat the same steps on node2

***********************************************************************

After the two instances started up

Refer to the README file for the post-patch instructions. For this PSU, I had to do the following.

We should do this on only one instance.

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT

After the above command completes, it will show the log file location. You can grep it for errors.

You can also run @utlrp.sql and compile invalid objects in the database.
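A hedged example of those post-checks (the catbundle log directory can be $ORACLE_BASE/cfgtoollogs/catbundle or $ORACLE_HOME/cfgtoollogs/catbundle depending on your setup, and the file name pattern is indicative):

$ grep -Ei "ORA-|error" $ORACLE_BASE/cfgtoollogs/catbundle/catbundle_PSU_*_APPLY_*.log

SQL> @?/rdbms/admin/utlrp.sql
SQL> select count(*) from dba_objects where status = 'INVALID';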

************************************************************************


Rollback instructions

If using opatch auto

# opatch auto /u02/patch/22191577/ -rollback -ocmrf /u01/app/oracle/product/11.2.0/dbhome_2/OPatch/ocm/bin/ocm.rsp

If doing manually..shutdown instance and CRS manually

Shutdown instances on node1 as oracle user
srvctl stop instance -d prod -i prod1

Shutdown crs on node1 as root user
crsctl stop crs

Root user

cd $GRID_HOME/crs/install

./rootcrs.pl -unlock   

(If you don't execute this, you get permission errors related to patch storage and OPatch complains that it cannot read and write files.)


Oracle user
Grid home

/u01/app/11204/OPatch/opatch rollback -local -id 21948348 -oh /u01/app/11204

/u01/app/11204/OPatch/opatch rollback -local -id 21948355 -oh /u01/app/11204

/u01/app/11204/OPatch/opatch rollback -local -id 21948347 -oh /u01/app/11204

Oracle user
DB home

/u02/patch/22191577/21948348/custom/server/21948348/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_2

/u01/app/oracle/product/11.2.0/dbhome_2/OPatch/opatch rollback -local -id 21948348 -oh /u01/app/oracle/product/11.2.0/dbhome_2

/u01/app/oracle/product/11.2.0/dbhome_2/OPatch/opatch rollback -local -id 21948347  -oh /u01/app/oracle/product/11.2.0/dbhome_2

/u02/patch/22191577/21948348/custom/server/21948348/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_2

After completion
Root user

# /u01/app/oracle/product/11.2.0/dbhome_2/rdbms/install/rootadd_rdbms.sh
# /u01/app/11204/crs/install/rootcrs.pl -patch


Once you are done patching the first node (node1):

Start the instance
srvctl start instance -d prod -i prod1

Repeat same steps on Node2

After both instances are up, run the following on only one instance:

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT

Also run @utlrp.sql

After everything is completed, you can verify that the patches were applied properly to both homes.

Go to Opatch folder

cd /u01/app/oracle/product/11.2.0/dbhome_2/OPatch

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_2/

./opatch lsinventory

The above lists the patches applied to the Oracle home.

Now change ORACLE_HOME path to grid home

export ORACLE_HOME=/u01/app/11204

./opatch lsinventory

Now it will list patches applied to grid home.

To further check in the DB, run the following SQL statements:

select comments, version, bundle_series
from sys.registry$history
where bundle_series = 'PSU'
order by action_time;

select comments Last_patch from dba_registry_history order by action_time desc;

Step by Step Manually Apply and Roll Back a Rolling PSU Patch in an Oracle Database 11g RAC Environment


PSU patch Apply on ORACLE RAC 11g Home

OPatch is an Oracle-supplied utility that assists you with the process of applying interim patches to Oracle's software and rolling back interim patches from Oracle's software.


1. Binaries backup

Log in as the Grid owner and stop the clusterware/Grid.

With root access (use sudo if sudo access is available for this command):

sudo /grid_home/11.2.0/grid/bin/crsctl stop crs

Go to the backup location
cd /backup

tar -cvf OraHome_CBAFPACSDBOP03_230715.tar /oracle_home/app/orahome
tar -cvf - /grid_home/11.2.0/grid | ssh oradb@172.23.3.227 "cd  /med/appl; cat > grid_172.23.1.82_230715.tar"

With root access (use sudo if available):


Start the Clusterware/Grid after the backup:

sudo /grid_home/11.2.0/grid/bin/crsctl start crs

 2. Stop the CRS-managed resources running from the DB homes.

      Database Home Owner Execute:

      $ srvctl stop home -o /oracle_home/app/orahome -s /backup/patch/grid_patch.log -n CBAFPACSDBOP03

  3. Run the pre-root script.

      With root access (use sudo if sudo access is available for this command):

Patching Oracle Clusterware has special considerations: Part of the software is owned by root and part is owned by the account used to install the product. Before you can apply a
patch or make other modifications to the software files in the Grid home, you must first unlock the Grid home. 

To unlock the Grid home, perform the following steps:

 # /grid_home/11.2.0/grid/crs/install/rootcrs.pl -unlock

When you run the rootcrs.pl script with the -unlock flag, it stops the Oracle Clusterware stack and unlocks the files in the Grid home so they can be modified.

    # /usr/sbin/slibclean

   4. Apply the CRS patch.

      As the GI home owner execute:

      $  /grid_home/11.2.0/grid/OPatch/opatch napply -oh /grid_home/11.2.0/grid -local /backup/patch/17592127 OPatch.SKIP_VERIFY_SPACE=true

      As the GI home owner execute:

      $  /grid_home/11.2.0/grid/OPatch/opatch apply -oh  /grid_home/11.2.0/grid -local /backup/patch/20299017 OPatch.SKIP_VERIFY_SPACE=true

   5. Run the pre-script for the DB component of the patch.

Save important configuration settings to prevent them from being overwritten by the patch by using the following command:

      As the database Home Owner execute:

      $ /backup/patch/17592127/custom/server/17592127/custom/scripts/prepatch.sh -dbhome /oracle_home/app/orahome

   6. Apply the DB patch.

      As the database home owner execute:

      $ /oracle_home/app/orahome/OPatch/opatch napply -oh /oracle_home/app/orahome -local /backup/patch/17592127/custom/server/17592127
      $ /oracle_home/app/orahome/OPatch/opatch apply -oh /oracle_home/app/orahome -local /backup/patch/20299017

   7. Run the post script for DB component of the patch.

Apply the configuration settings that were saved in step 5 to the files that have been overwritten by the patch by running postpatch.sh:

 As the database home owner execute:

      $ /backup/patch/17592127/custom/server/17592127/custom/scripts/postpatch.sh -dbhome /oracle_home/app/orahome

  8. Run the post script.

       with root access(Use sudo if sudo access available for this command)

      # /grid_home/11.2.0/grid/rdbms/install/rootadd_rdbms.sh

      with root access(Use sudo if sudo access available for this command)

After you have finished modifying the Grid home, lock it again as the root user using commands similar to the following:
The rootcrs.pl script with the -patch flag locks the Grid home again and restarts the Oracle Clusterware stack.

      # /grid_home/11.2.0/grid/crs/install/rootcrs.pl -patch


   9. Start the CRS managed resources that were earlier running from DB homes.

      If this is a GI Home environment, as the database home owner execute:

      $ srvctl start home -o /oracle_home/app/orahome -s /backup/patch/grid_patch.log -n CBAFPACSDBOP03

10. Follow steps 1-9 on the second node.

11. Log in as the DB owner

SQL> @?/rdbms/admin/catbundle.sql psu apply



======================######################=========================

Manually roll back a PSU Patch


DB HOME:

opatch nrollback -local -id 24006101 -oh $ORACLE_HOME

cd /u01/software/psuoct2016/12cGI/24412235/23854735/custom/scripts

./prepatch.sh

opatch nrollback -local -id 23854735 -oh $ORACLE_HOME

./postpatch.sh

GRID HOME:

$GRID_HOME/crs/install/rootcrs.pl -prepatch -nonrolling

opatch nrollback -local -id 24006101 -oh $GRID_HOME

opatch nrollback -local -id 23854735 -oh $GRID_HOME

$GRID_HOME/crs/install/rootcrs.pl -postpatch -nonrolling

After the final step, your GI will start all cluster services and the database instances. Please perform your usual checks for each instance.



Wednesday, 17 January 2018

Restore RMAN backup to Different Server with Same Database Name in 12c Database


Restore RMAN backup to Different Server with Same Database Name


RMAN Restoration to New Server with Same Database Name and Same Directory

Pre-requisite:

On the target host, the same version of the 12c database binaries should be installed.

You may need to restore a database to a new server due to the following reasons:

1) confirming your disaster recovery strategy
2) moving the database to a new server

The steps can be summarized as:

1) Take a backup of the database on the existing host
2) Create a parameter file, transfer it to the destination, and make the necessary changes in the parameter file for the target host
3) On the new host, restore the controlfile, datafiles and tempfiles (see the sketch below)
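This post only shows the backup and transfer, so here is a minimal sketch of step 3 on the new host, assuming the backup pieces, pfile and controlfile copy were placed in /u01/backup and the pfile has already been edited for host02 (same database name and directory structure, so no SET NEWNAME is needed):

$ export ORACLE_SID=CDB3
$ rman target /

RMAN> startup nomount pfile='/u01/backup/initCDB3_A.ora';
RMAN> restore controlfile from '/u01/backup/controlfile_restore.ctl';
RMAN> alter database mount;
RMAN> catalog start with '/u01/backup/' noprompt;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;

Recovery stops at the last available archived log and may report that the next log is missing; that is expected before the OPEN RESETLOGS.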

The source database CDB3 resides on the host01 server and needs to be restored on host02.

login as: oracle
oracle@9.118.255.254's password:
Last login: Mon Feb 29 08:34:12 2016 from 9.124.210.138

Take an RMAN full backup of the source database, including the archive logs, to the location below.

[oracle@host01 db01]$ cd backup
[oracle@host01 backup]$ pwd
/u01/ora/db01/backup
[oracle@host01 backup]$
[oracle@host01 backup]$
[oracle@host01 backup]$ ps -ef|grep pmon
oracle   18424 18153  0 08:49 pts/1    00:00:00 grep pmon
oracle   32412     1  0 Feb28 ?        00:00:01 ora_pmon_CDB3
[oracle@host01 ~]$ export ORACLE_SID=CDB3
[oracle@host01 ~]$
[oracle@host01 backup]$ echo $ORACLE_SID
CDB3
[oracle@host01 backup]$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Feb 29 08:50:05 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target

connected to target database: CDB3 (DBID=602323566)

RMAN> run
2> {
3> allocate channel c1 type disk format '/u01/ora/db01/backup/df_%d_%s_%p_%T.bus';
4> backup database tag LAB2_CDB3_feb29;
5> release channel c1;
6> }

or

RMAN> run
 {
allocate channel ch1 device type disk;
crosscheck archivelog all;
 backup as compressed backupset database format '/u02/backup/rman/Fullback_%T_%U'
 plus archivelog format '/u02/backup/rman/Archive_%T_%U';
 backup current controlfile format '/u02/backup/rman/Controlback_%T_%U';
 release channel ch1;
}

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=257 device type=DISK

Starting backup at 29-FEB-16
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/CDB3/2CD1CF93F0327B86E053FEFF760995F4/datafile/o1_mf_usertbs_cf5fztxb_.dbf
input datafile file number=00008 name=/u01/CDB3/2CD1CF93F0327B86E053FEFF760995F4/datafile/o1_mf_sysaux_cf5fztx9_.dbf
input datafile file number=00010 name=/u01/CDB3/DATAFILE/tbs_emp01.dbf
input datafile file number=00007 name=/u01/CDB3/2CD1CF93F0327B86E053FEFF760995F4/datafile/o1_mf_system_cf5fztwq_.dbf
channel c1: starting piece 1 at 29-FEB-16
channel c1: finished piece 1 at 29-FEB-16
piece handle=/u01/ora/db01/backup/df_CDB3_18_1_20160229.bus tag=LAB2_CDB3_FEB29 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:45
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/CDB3/DATAFILE/pdbseed/usertbs01.dbf
input datafile file number=00004 name=/u01/CDB3/DATAFILE/pdbseed/sysaux01.dbf
input datafile file number=00002 name=/u01/CDB3/DATAFILE/pdbseed/system01.dbf
channel c1: starting piece 1 at 29-FEB-16
channel c1: finished piece 1 at 29-FEB-16
piece handle=/u01/ora/db01/backup/df_CDB3_19_1_20160229.bus tag=LAB2_CDB3_FEB29 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:45
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/CDB3/DATAFILE/undotbs01.dbf
input datafile file number=00001 name=/u01/CDB3/DATAFILE/system01.dbf
input datafile file number=00003 name=/u01/CDB3/DATAFILE/sysaux01.dbf
channel c1: starting piece 1 at 29-FEB-16
channel c1: finished piece 1 at 29-FEB-16
piece handle=/u01/ora/db01/backup/df_CDB3_20_1_20160229.bus tag=LAB2_CDB3_FEB29 comment=NONE
channel c1: backup set complete, elapsed time: 00:01:05
Finished backup at 29-FEB-16

Starting Control File and SPFILE Autobackup at 29-FEB-16
piece handle=/u01/CDB3/autobackup/2016_02_29/o1_mf_s_905072224_cf7gn9bk_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 29-FEB-16

released channel: c1

Archive backup

RMAN> run
2> {
3> allocate channel c1 type disk format '/u01/ora/db01/backup/arch_%d_%p.arc';
4> backup archivelog all delete all input;
5> release channel c1;
6> }

allocated channel: c1
channel c1: SID=257 device type=DISK

Starting backup at 29-FEB-16
current log archived
channel c1: starting archived log backup set
channel c1: specifying archived log(s) in backup set
input archived log thread=1 sequence=142 RECID=1 STAMP=905021941
input archived log thread=1 sequence=143 RECID=2 STAMP=905021961
input archived log thread=1 sequence=144 RECID=3 STAMP=905022610
input archived log thread=1 sequence=145 RECID=4 STAMP=905023399
input archived log thread=1 sequence=146 RECID=5 STAMP=905045431
input archived log thread=1 sequence=147 RECID=6 STAMP=905072281
input archived log thread=1 sequence=148 RECID=7 STAMP=905072283
input archived log thread=1 sequence=149 RECID=8 STAMP=905072284
input archived log thread=1 sequence=150 RECID=9 STAMP=905072286
input archived log thread=1 sequence=151 RECID=10 STAMP=905072829
channel c1: starting piece 1 at 29-FEB-16
channel c1: finished piece 1 at 29-FEB-16
piece handle=/u01/ora/db01/backup/arch_CDB3_1.arc tag=TAG20160229T090709 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:03
channel c1: deleting archived log(s)
archived log file name=/u01/CDB3/archivelog/2016_02_28/o1_mf_1_142_cf5xjwk1_.arc RECID=1 STAMP=905021941
archived log file name=/u01/CDB3/archivelog/2016_02_28/o1_mf_1_143_cf5xkk9g_.arc RECID=2 STAMP=905021961
archived log file name=/u01/CDB3/archivelog/2016_02_28/o1_mf_1_144_cf5y5t7x_.arc RECID=3 STAMP=905022610
archived log file name=/u01/CDB3/archivelog/2016_02_28/o1_mf_1_145_cf5yyh1x_.arc RECID=4 STAMP=905023399
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_146_cf6ngzfj_.arc RECID=5 STAMP=905045431
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_147_cf7gp0vz_.arc RECID=6 STAMP=905072281
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_148_cf7gp35h_.arc RECID=7 STAMP=905072283
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_149_cf7gp4wc_.arc RECID=8 STAMP=905072284
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_150_cf7gp62f_.arc RECID=9 STAMP=905072286
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_151_cf7h75tn_.arc RECID=10 STAMP=905072829
Finished backup at 29-FEB-16

Starting Control File and SPFILE Autobackup at 29-FEB-16
piece handle=/u01/CDB3/autobackup/2016_02_29/o1_mf_s_905072833_cf7h79vc_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 29-FEB-16

released channel: c1

RMAN> exit


Recovery Manager complete.



RMAN Backup completed for Source Database and created below files


[oracle@host01 backup]$ ls -ltr
total 2725284
-rw-r----- 1 oracle oinstall  716947456 Feb 29 08:55 df_CDB3_18_1_20160229.bus
-rw-r----- 1 oracle oinstall  709591040 Feb 29 08:55 df_CDB3_19_1_20160229.bus
-rw-r----- 1 oracle oinstall 1255571456 Feb 29 08:57 df_CDB3_20_1_20160229.bus
-rw-r----- 1 oracle oinstall  105830400 Feb 29 09:07 arch_CDB3_1.arc
[oracle@host01 backup]$ pwd
/u01/ora/db01/backup

Create the parameter file and take a controlfile backup on the source host, then transfer them to the target host.

[oracle@host01 backup]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 29 09:08:43 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create pfile='/u01/ora/db01/backup/initCDB3_A.ora' from spfile;

File created.

SQL> alter database backup controlfile to '/u01/ora/db01/backup/controlfile_restore.ctl';

Database altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@host01 backup]$
[oracle@host01 backup]$
[oracle@host01 backup]$ ls -ltr
total 2742864
-rw-r----- 1 oracle oinstall  716947456 Feb 29 08:55 df_CDB3_18_1_20160229.bus
-rw-r----- 1 oracle oinstall  709591040 Feb 29 08:55 df_CDB3_19_1_20160229.bus
-rw-r----- 1 oracle oinstall 1255571456 Feb 29 08:57 df_CDB3_20_1_20160229.bus
-rw-r----- 1 oracle oinstall  105830400 Feb 29 09:07 arch_CDB3_1.arc
-rw-r--r-- 1 oracle oinstall        883 Feb 29 09:10 initCDB3_A.ora
-rw-r----- 1 oracle oinstall   17973248 Feb 29 09:12 controlfile_restore.ctl

Transfer the RMAN full backup, archive log backup, parameter file and controlfile from the source server to the target server


[oracle@host01 backup]$ scp -r * oracle@9.118.255.239:/u01/backup/
oracle@9.118.255.239's password:

arch_CDB3_1.arc                0%    0     0.0KB/s   --:-- ETA
arch_CDB3_1.arc               41%   42MB  41.9MB/s   00:01 ETA
arch_CDB3_1.arc               83%   84MB  41.9MB/s   00:00 ETA
arch_CDB3_1.arc              100%  101MB  50.5MB/s   00:02

controlfile_restore.ctl        0%    0     0.0KB/s   --:-- ETA
controlfile_restore.ctl      100%   17MB  17.1MB/s   00:00

df_CDB3_18_1_20160229.bus      0%    0     0.0KB/s   --:-- ETA
df_CDB3_18_1_20160229.bus      5%   37MB  37.1MB/s   00:17 ETA
df_CDB3_18_1_20160229.bus     11%   76MB  37.3MB/s   00:16 ETA
df_CDB3_18_1_20160229.bus


All files have been transferred to the target server (host02). Make the necessary changes in the parameter file (control file location, dump file locations) and make sure that the same Oracle Home binary version is installed on the target server.
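As a rough sketch of what typically needs editing in the transferred initCDB3_A.ora (the parameter names are standard, but the values below are only illustrative and must match the target layout):

*.control_files='/u01/CDB3/control01.ctl','/u01/app/oracle/fast_recovery_area/CDB3/control02.ctl'   # illustrative paths
*.audit_file_dest='/u01/app/oracle/admin/CDB3/adump'
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
*.log_archive_dest_1='LOCATION=/u01/archive'

The directories referenced here are the ones created on the target in a later step.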

Backup Location 

[oracle@host02 backup]$ ls -ltr
total 2742904
-rw-r----- 1 oracle oinstall  105830400 Feb 29 09:58 arch_CDB3_1.arc
-rw-r----- 1 oracle oinstall   17973248 Feb 29 09:58 controlfile_restore.ctl
-rw-r----- 1 oracle oinstall  716947456 Feb 29 09:58 df_CDB3_18_1_20160229.bus
-rw-r----- 1 oracle oinstall  709591040 Feb 29 09:59 df_CDB3_19_1_20160229.bus
-rw-r--r-- 1 oracle oinstall        889 Feb 29 10:00 initCDB3_A.ora
-rw-r----- 1 oracle oinstall 1255571456 Feb 29 10:00 df_CDB3_20_1_20160229.bus
drwxr-xr-x 2 oracle oinstall       4096 Feb 29 10:21 775
-rw-r--r-- 1 oracle oinstall        887 Feb 29 10:35 initCDB3.ora


 Create the same directory structure on the target server.


[oracle@NEW-SERVER ~]$ mkdir -p /u01/app/oracle/admin/CDB3/adump/
[oracle@NEW-SERVER ~]$ mkdir -p /u01/CDB3/DATAFILE/
[oracle@NEW-SERVER ~]$ mkdir -p /u01/app/oracle/fast_recovery_area/CDB3/
[oracle@NEW-SERVER ~]$ mkdir -p /u01/archive/


[oracle@host02 backup]$ export ORACLE_SID=CDB3
[oracle@host02 backup]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0.2/db_1
[oracle@host02 backup]$ echo $ORACLE_SID
CDB3
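
Before mounting, the transferred controlfile backup has to be in place at the locations named by the control_files parameter in the edited pfile (alternatively it can be restored with RMAN's RESTORE CONTROLFILE FROM '<backup file>' while the instance is in NOMOUNT). A minimal sketch, reusing the illustrative control_files locations shown earlier:

[oracle@host02 backup]$ cp /u01/backup/controlfile_restore.ctl /u01/CDB3/control01.ctl
[oracle@host02 backup]$ cp /u01/backup/controlfile_restore.ctl /u01/app/oracle/fast_recovery_area/CDB3/control02.ctl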

[oracle@host02 backup]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 29 10:57:49 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup mount pfile=initCDB3_A.ora
ORACLE instance started.

Total System Global Area  629145600 bytes
Fixed Size    2927528 bytes
Variable Size  327156824 bytes
Database Buffers  293601280 bytes
Redo Buffers    5459968 bytes
Database mounted.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@host02 backup]$
[oracle@host02 backup]$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Feb 29 11:26:32 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target

connected to target database: CDB3 (DBID=602323566, not open)

RMAN>

RMAN> run
2> {
3> allocate channel ch01 type disk;
4> allocate channel ch02 type disk;
5> restore database;
6> recover database;
7> release channel ch01;
8> release channel ch02;
9> }

using target database control file instead of recovery catalog
allocated channel: ch01
channel ch01: SID=12 device type=DISK

allocated channel: ch02
channel ch02: SID=253 device type=DISK

Starting restore at 29-FEB-16

skipping datafile 1; already restored to file /u01/CDB3/DATAFILE/system01.dbf
skipping datafile 3; already restored to file /u01/CDB3/DATAFILE/sysaux01.dbf
skipping datafile 5; already restored to file /u01/CDB3/DATAFILE/undotbs01.dbf
skipping datafile 7; already restored to file /u01/CDB3/datafile/o1_mf_system_cf7otxxm_.dbf
skipping datafile 8; already restored to file /u01/CDB3/datafile/o1_mf_sysaux_cf7otxs4_.dbf
skipping datafile 9; already restored to file /u01/CDB3/datafile/o1_mf_usertbs_cf7otxrd_.dbf
skipping datafile 10; already restored to file /u01/CDB3/DATAFILE/tbs_emp01.dbf
channel ch01: starting datafile backup set restore
channel ch01: specifying datafile(s) to restore from backup set
channel ch01: restoring datafile 00002 to /u01/CDB3/DATAFILE/pdbseed/system01.dbf
channel ch01: restoring datafile 00004 to /u01/CDB3/DATAFILE/pdbseed/sysaux01.dbf
channel ch01: restoring datafile 00006 to /u01/CDB3/DATAFILE/pdbseed/usertbs01.dbf
channel ch01: reading from backup piece /u01/ora/db01/backup/df_CDB3_19_1_20160229.bus
channel ch01: piece handle=/u01/ora/db01/backup/df_CDB3_19_1_20160229.bus tag=LAB2_CDB3_FEB29
channel ch01: restored backup piece 1
channel ch01: restore complete, elapsed time: 00:01:05
Finished restore at 29-FEB-16

Starting recover at 29-FEB-16

starting media recovery

channel ch01: starting archived log restore to default destination
channel ch01: restoring archived log
archived log thread=1 sequence=147
channel ch01: restoring archived log
archived log thread=1 sequence=148
channel ch01: restoring archived log
archived log thread=1 sequence=149
channel ch01: restoring archived log
archived log thread=1 sequence=150
channel ch01: restoring archived log
archived log thread=1 sequence=151
channel ch01: reading from backup piece /u01/ora/db01/backup/arch_CDB3_1.arc
channel ch01: piece handle=/u01/ora/db01/backup/arch_CDB3_1.arc tag=TAG20160229T090709
channel ch01: restored backup piece 1
channel ch01: restore complete, elapsed time: 00:00:03
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_147_cf7qktht_.arc thread=1 sequence=147
channel default: deleting archived log(s)
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_147_cf7qktht_.arc RECID=15 STAMP=905081364
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_148_cf7qktmn_.arc thread=1 sequence=148
channel default: deleting archived log(s)
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_148_cf7qktmn_.arc RECID=13 STAMP=905081362
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_149_cf7qktjj_.arc thread=1 sequence=149
channel default: deleting archived log(s)
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_149_cf7qktjj_.arc RECID=12 STAMP=905081362
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_150_cf7qkto9_.arc thread=1 sequence=150
channel default: deleting archived log(s)
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_150_cf7qkto9_.arc RECID=14 STAMP=905081362
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_151_cf7qktj0_.arc thread=1 sequence=151
channel default: deleting archived log(s)
archived log file name=/u01/CDB3/archivelog/2016_02_29/o1_mf_1_151_cf7qktj0_.arc RECID=11 STAMP=905081362
media recovery complete, elapsed time: 00:02:01
Finished recover at 29-FEB-16
released channel: ch01
released channel: ch02

RMAN> sql 'alter database open resetlogs';

sql statement: alter database open resetlogs



[oracle@host02 backup]$

Finally, the database has been recovered on the target server, as shown below


[oracle@host02 backup]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 29 11:35:37 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select name,open_mode from v$database;

NAME  OPEN_MODE
--------- --------------------
CDB3  READ WRITE

SQL> select instance_name ,host_name from v$instance;

INSTANCE_NAME HOST_NAME
--------------------------------------------
CDB3 host02


SQL>





If the backup pieces reside in a different location on the new host, you will have to catalog them; otherwise cataloging is not needed.

For example, if the production backup location is '/u02/backup/rman/' and the new server's backup location is '/u02/backup/', you can use the command below to catalog the pieces.
In this particular case, all backup files reside on the new server in the same location as on the production server.

RMAN> catalog start with '/u02/backup/';
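
After cataloging, it is worth confirming that RMAN now sees the transferred pieces; a quick sanity check with standard RMAN commands (the output will vary):

RMAN> crosscheck backup;
RMAN> list backup summary;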

Restore the Database :

RMAN> run
 {
 allocate channel ch1 device type disk;
  restore database;
 switch datafile all;
 switch tempfile all;
 release channel ch1;
 }

Recover the Database :

RMAN> list backup of archivelog all;

RMAN> run
 {
 allocate channel ch1 device type disk;
 recover database until sequence 25;
 release channel ch1;
 }
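
Note that UNTIL SEQUENCE is exclusive: recovery stops just before the specified sequence, so it is normally set to the last backed-up archive log sequence plus one, taken from the LIST BACKUP OF ARCHIVELOG ALL output above. As a sketch, if the listing shows thread 1 archive logs up to sequence 24 (the numbers here are only illustrative):

RMAN> run
 {
 allocate channel ch1 device type disk;
 recover database until sequence 25 thread 1;
 release channel ch1;
 }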

RMAN> alter database open RESETLOGS;

Statement processed


Check Datafiles and Tempfiles :

 select DBID, NAME, DB_UNIQUE_NAME, DATABASE_ROLE, OPEN_MODE from v$database;

 select FILE_ID, FILE_NAME, TABLESPACE_NAME, BYTES/1024/1024/1024 "Size In GB" from dba_data_files;

select FILE_ID, FILE_NAME, TABLESPACE_NAME, BYTES/1024/1024/1024 "Size In GB" from dba_temp_files;

col MEMBER for a60

select * from v$logfile;


Monday, 15 January 2018

golden gate

Configuring Oracle GoldenGate OGG 11gR2 downstream integrated capture


In this article you will look at an example configuration of an OGG downstream integrated capture extract in a RAC environment (a RAC database is used for the source, downstream and target). You will briefly review some new OGG 11gR2 concepts, go over the prerequisites and steps to configure a simple (one extract, one remote trail and one replicat) OGG replication using downstream integrated capture, and see how I resolved the problems that I encountered during the OGG configuration.

Introduced in OGG 11gR2, the integrated extract uses the Oracle Streams API to integrate with an Oracle logmining server and capture data changes as logical change records (LCR). The transaction log capture used in OGG versions prior to 11gR2 was renamed the CLASSIC extract. Downstream integrated capture requires a dedicated downstream database, offloading the source database, and deploys a redo shipping mechanism, conceptually similar to the redo transport in Oracle Data Guard, for transferring the redo changes from the source database to the downstream database. The downstream database is responsible for running the logmining server that mines the redo received from the source database. Consequently, the source and downstream mining databases must be on the same platform; for example, if the source database is running on Linux 64-bit, the downstream database must also be on the Linux 64-bit platform.

Data flows as follows:

  1. The extract captures data changes against the source database and writes them to the trail file
  2. The replicat applies the changes from the trail file
Zooming into the integration of the source database, the downstream database and the OGG extract, we have the additional links:

  • The source database is configured with redo shipment to the downstream database.
  • The downstream database receives and archives the redo from the source database.
  • The primary extract connects to the source database for table definition lookups and/or fetches.
  • The primary extract connects to the downstream database to receive the data changes presented as LCRs by the logmining server.
The following table summarizes the environment used.



                      Source database     Downstream database   Target database
DB name               RACD                RACDB                 RACDB
DB instances          RACD1/RACD2         RACDB1/RACDB2         RACDB1/RACDB2
Archive log mode      Yes                 Yes                   Yes

Local archive destination on all instances:
  Source:      log_archive_dest_1='LOCATION=+DATADG/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACD'
  Downstream:  LOG_ARCHIVE_DEST_1='LOCATION=+DATADG/ VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES)'

Remote archive destination to the downstream database (LOG_ARCHIVE_DEST_n on all source instances):
  Source:      LOG_ARCHIVE_DEST_4='SERVICE=RACDB ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) REOPEN=10 DB_UNIQUE_NAME=RACDB'

LOG_ARCHIVE_CONFIG:
  Source:      'DG_CONFIG=(RACD,RACDSTB,RACDB)'
  Downstream:  'DG_CONFIG=(RACD,RACDSTB,RACDB)'

Archive log destination on the downstream database for foreign logs received from the source database:
  Downstream:  LOG_ARCHIVE_DEST_3='LOCATION=+DATA/ VALID_FOR=(STANDBY_LOGFILE,ALL_ROLES)'

Standby redo logs on the downstream database:  YES

Extract user (source database)         OGG_EXTRACT
Logmining user (downstream database)   OGG_EXTRACT
Replicat user (target database)        OGG_REPLICAT
Replicated table (source and target)   test4.seat_booking
OGG extract                            extdown
OGG replicat                           repdown
OGG remote trail                       ./dirdat/3x

Where nothing is specified, the setting is not relevant for that database.

The downstream database must be 11.2.0.3 with the patch described in MOS Doc ID 1411356.1. The source database can be any Oracle database supported by OGG 11gR2, but the source database version cannot be higher than the downstream database version. Look at the certification matrix in MOS for detailed information on certified combinations of OS and database versions.
I will deploy a real-time downstream integrated extract.
The configuration of a downstream integrated capture involves steps performed on the source database and steps performed on the downstream database.
On the source database site perform the following configurations:
  1. Make sure that a tnsnames.ora alias exists

RACDB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RACDB)
)
)
  2. Set redo shipment from the source database to the downstream database on all instances
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_4='SERVICE=RACDB ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) REOPEN=10 DB_UNIQUE_NAME=RACDB' scope=both sid='*';
  3. Enable the destination on the source database.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_4=ENABLE scope=both sid='*';
  4. Include the names of the downstream database and source database in a Data Guard configuration for redo shipment. Note that RACD is the source database and RACDB is the downstream database; RACDSTB is an existing standby database. A quick way to verify that the destination is working is sketched after this list.
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(RACD,RACDSTB,RACDB)' scope=both sid='*';
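
As a quick verification sketch (not part of the original steps): force a log switch on the source and check that the new destination is VALID; V$ARCHIVE_DEST_STATUS is a standard view and DEST_ID 4 matches the destination configured above.

ALTER SYSTEM ARCHIVE LOG CURRENT;

SELECT dest_id, status, error
FROM   v$archive_dest_status
WHERE  dest_id = 4;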
The article assumes that OGG 11gR2 is already installed and configured and that dedicated database users exist on both the source database (for the extract) and the target database (for the replicat). An example of how to install OGG in an Oracle RAC 11.2.0.3 cluster environment is covered elsewhere.

On the downstream database site perform the following configurations:

  1. Make sure that the downstream database has local archiving enabled. Explicitly specify the location if the FRA is used.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=+DATADG/ VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES)' scope=both sid='*';

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE scope=both sid='*';
  2. Configure the downstream database to receive the redo from the source database. Note that log files from all source databases must be kept separate from the local mining database log files and from each other, and the FRA should not be used for any source log files.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='LOCATION=+DATA/ VALID_FOR=(STANDBY_LOGFILE,ALL_ROLES)' scope=both sid='*';
  3. Enable the log destination
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=ENABLE scope=both sid='*';
  4. Add standby redo logs on the downstream database. There are two things to consider: the size of the standby redo logs should be the same as or larger than the size of the redo logs on the source database(s), and you should add one more standby redo log group (n+1) per thread than the number of existing redo log groups (n) on the source database.
    There are two redo log groups per thread on the source, as shown below.

    SQL> select * from v$log;

    GROUP# THREAD# SEQUENCE#    BYTES BLOCKSIZE MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
    ------ ------- --------- -------- --------- ------- --- -------- ------------- --------- ------------ ---------
         1       1       352 52428800       512       2 NO  CURRENT        5374181 28-OCT-12   2.8147E+14
         2       1       351 52428800       512       2 YES INACTIVE       5374017 28-OCT-12      5374181 28-OCT-12
         3       2       112 52428800       512       2 NO  CURRENT        5374474 28-OCT-12   2.8147E+14
         4       2       111 52428800       512       2 YES ACTIVE         5347443 28-OCT-12      5374474 28-OCT-12

    SQL>
    I created 3 standby redo log groups per thread on the downstream database, with the same size as the source redo logs.

    alter database add standby logfile thread 1 group 5 size 52428800;
    alter database add standby logfile thread 1 group 6 size 52428800;
    alter database add standby logfile thread 1 group 7 size 52428800;
    alter database add standby logfile thread 2 group 8 size 52428800;
    alter database add standby logfile thread 2 group 9 size 52428800;
    alter database add standby logfile thread 2 group 10 size 52428800;
    SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

    GROUP# THREAD# SEQUENCE# ARC STATUS
    ------ ------- --------- --- ----------
         5       1         0 YES UNASSIGNED
         6       1         0 YES UNASSIGNED
         7       1         0 YES UNASSIGNED
         8       2         0 YES UNASSIGNED
         9       2         0 YES UNASSIGNED
        10       2         0 YES UNASSIGNED

    6 rows selected.
  5. Define a streams pool on the downstream database. The logmining server uses memory from the streams pool for each integrated extract; by default each integrated extract uses 1GB of streams pool memory. When using multiple integrated extracts, the streams pool should account for the memory requirements of each integrated extract and the number of extracts running. Sizing information can be found in the Oracle documents listed in the reference section, the Oracle Installation and Setup Guide, and MOS Note 1485620.1.
    alter system set streams_pool_size=1G scope=spfile sid='*';
    alter system set memory_max_target=2G scope=spfile sid='*';
    alter system set memory_target=2G scope=spfile sid='*';
  6. Grant privileges to the mining user in the downstream database
    SQL> exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (grantee=>'ogg_extract', privilege_type=>'capture', grant_select_privileges=>true, do_grants=>TRUE);
    PL/SQL procedure successfully completed.
    SQL>
  7. Make sure that the downstream database is 11.2.0.3 with the patch specified in MOS Doc ID 1411356.1 and COMPATIBLE='11.2.0.3'.
  8. On all nodes running the source and downstream databases, create a password file for authentication in $ORACLE_HOME/dbs. Make sure that you use the ignorecase=y option.
    orapwd file=orapwRACDB1 password=sys1 ignorecase=y
    orapwd file=orapwRACDB2 password=sys1 ignorecase=y
  9. Done. A quick check that redo from the source is reaching the downstream database is sketched below.
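
A rough verification sketch (not part of the original steps): switch a log on the source and watch the standby redo logs being used and archived on the downstream database; the RFS messages in the Appendix alert-log excerpt are another sign that redo is arriving.

-- On the source database
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- On the downstream database
SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;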
On the source OGG instance, perform the following configuration steps.

  1. Create a test table for the replication on the source and target database.
    create user test4 identified by test4 ;
    grant connect, resource, dba to test4;
    create table test4.seat_booking (
    id             number(10) primary key,
    passenger_name         varchar2(50),
    timestamp        date,
    flight_no        number(10),
    seat_no            varchar2(19),
    flight_time        date);
  2. Add supplemental logging for the table
    GGSCI (raclinux1.gj.com) 3> dblogin userid ogg_extract, password ogg_extract
    Successfully logged into database.
    GGSCI (raclinux1.gj.com) 4> add trandata test4.seat_booking
    Logging of supplemental redo log data is already enabled for table TEST4.SEAT_BOOKING.
    GGSCI (raclinux1.gj.com) 5>
  3. Register the extract with the downstream database
    GGSCI (raclinux1.gj.com) 5> dblogin userid ogg_extract, password ogg_extract
    Successfully logged into database.
    GGSCI (raclinux1.gj.com) 6> miningdblogin userid ogg_extract@racdb, password ogg_extract
    Successfully logged into mining database.
    GGSCI (raclinux1.gj.com) 7> register extract extdown database
    2012-10-28 16:46:33 INFO OGG-02003 Extract EXTDOWN successfully registered with database at SCN 5442542.
    GGSCI (raclinux1.gj.com) 8>
    GGSCI (raclinux1.gj.com) 8>
  4. Create an extract parameter file for extract extdown
    GGSCI (raclinux1.gj.com) 83> view params extdown

    extract extdown
    SETENV (ORACLE_SID = "RACD1")
    --TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 700, parallelism 3)
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
    userid ogg_extract, password ogg_extract
    TRANLOGOPTIONS MININGUSER ogg_extract@racdb MININGPASSWORD ogg_extract
    rmthost raclinux1, mgrport 7809
    rmttrail ./dirdat/3x
    table test4.seat_booking
    ;
    GGSCI (raclinux1.gj.com) 84>
  5. Add an integrated extract
    GGSCI (raclinux1.gj.com) 11> add extract extdown, integrated tranlog, begin now
    EXTRACT added.
    GGSCI (raclinux1.gj.com) 12>
    GGSCI (raclinux1.gj.com) 12>
  6. Add a remote trail
    GGSCI (raclinux1.gj.com) 12> add rmttrail ./dirdat/3x, extract extdown megabytes 20
    RMTTRAIL added.
    GGSCI (raclinux1.gj.com) 13>
  7. Create a replicat parameter file
    GGSCI (raclinux1.gj.com) 2> view params repdown
    replicat repdown
    --reperror (default, ignore)
    SETENV (ORACLE_SID = "RACDB1")
    userid ogg_replicat, password ogg_replicat
    assumetargetdefs
    discardfile ./dirrpt/repdown.dsc, purge
    map test4.seat_booking, target test4.seat_booking;
    GGSCI (raclinux1.gj.com) 3>
  8. Add a Replicat
    GGSCI (raclinux1.gj.com) 3> add replicat repdown, exttrail ./dirdat/3x
    REPLICAT added.
    GGSCI (raclinux1.gj.com) 4>
  9. Start the extract and replicat
    GGSCI (raclinux1.gj.com) 18> start extract extdown
    Sending START request to MANAGER …
    EXTRACT EXTDOWN starting
    GGSCI (raclinux1.gj.com) 19> info extract extdown
    EXTRACT EXTDOWN Initialized 2012-10-28 16:50 Status RUNNING
    Checkpoint Lag 00:00:00 (updated 00:05:53 ago)
    Log Read Checkpoint Oracle Integrated Redo Logs
    2012-10-28 16:50:29
    SCN 0.0 (0)
    GGSCI (raclinux1.gj.com) 20>
    GGSCI (raclinux1.gj.com) 4> start replicat repdown
    Sending START request to MANAGER …
    REPLICAT REPDOWN starting
    GGSCI (raclinux1.gj.com) 5> info replicat repdown
    REPLICAT REPDOWN Last Started 2012-10-28 16:58 Status RUNNING
    Checkpoint Lag 00:00:00 (updated 00:00:01 ago)
    Log Read Checkpoint File ./dirdat/3x000000
    First Record RBA 0
    GGSCI (raclinux1.gj.com) 6>
  10. Testing the OGG configuration
    Insert the following records:

    On RACD1

    insert into test4.seat_booking values(1,'',sysdate,200,'B1',sysdate);
    insert into test4.seat_booking values(2,'',sysdate,200,'B2',sysdate);
    insert into test4.seat_booking values(3,'',sysdate,200,'B3',sysdate);
    commit;
    On RACD2

    insert into test4.seat_booking values(4,'',sysdate,200,'B4',sysdate);
    insert into test4.seat_booking values(5,'',sysdate,200,'B5',sysdate);
    insert into test4.seat_booking values(6,'',sysdate,200,'B6',sysdate);
    commit;
    Verify the result on the target

    SQL> select * from test4.seat_booking;

            ID PASSENGER_NAME       TIMESTAMP  FLIGHT_NO SEAT_NO FLIGHT_TI
    ---------- -------------------- --------- ---------- ------- ---------
             1                      28-OCT-12        200 B1      28-OCT-12
             2                      28-OCT-12        200 B2      28-OCT-12
             3                      28-OCT-12        200 B3      28-OCT-12
             4                      28-OCT-12        200 B4      28-OCT-12
             5                      28-OCT-12        200 B5      28-OCT-12
             6                      28-OCT-12        200 B6      28-OCT-12

    6 rows selected.
    SQL>
    Issue update on the source.

    update test4.seat_booking set passenger_name='John Smith' where seat_no='B4';
    update test4.seat_booking set passenger_name='Alejandro García' where seat_no='B5';
    update test4.seat_booking set passenger_name='Frank Meyer' where seat_no='B6';
    update test4.seat_booking set passenger_name='Scott Tiger' where seat_no='B3';
    commit;
    Verify the result on the target

    SQL> select * from test4.seat_booking;

            ID PASSENGER_NAME       TIMESTAMP  FLIGHT_NO SEAT_NO FLIGHT_TI
    ---------- -------------------- --------- ---------- ------- ---------
             5 Alejandro Garc??a    28-OCT-12        200 B5      28-OCT-12
             6 Frank Meyer          28-OCT-12        200 B6      28-OCT-12
             1                      28-OCT-12        200 B1      28-OCT-12
             2                      28-OCT-12        200 B2      28-OCT-12
             3 Scott Tiger          28-OCT-12        200 B3      28-OCT-12
             4 John Smith           28-OCT-12        200 B4      28-OCT-12

    6 rows selected.
    Issue delete on the source.

    SQL> delete from test4.seat_booking;
    6 rows deleted.
    SQL> commit;
    Commit complete.
    SQL>
    Verify the result on the target

    SQL> select * from test4.seat_booking;
    no rows selected
    SQL>
    The test confirms that OGG is configured successfully. Look at the Appendix for a snippet from the downstream database alert log reflecting OGG mining.
  11. Troubleshooting tips and tricks
    Make sure that redo is successfully shipped to the downstream database and archived there. Pay attention to the locations, the service definition and the password file authentication. A useful way to troubleshoot redo transport is to tail the alert log with adrci: adrci> show alert -tail -f

    I encountered the following issues:

    1. Issue 1
    2012-10-28 17:18:51 ERROR OGG-02028 Failed to attach to logmining server OGG$EXTDOWN error 26926 - ORA-26926: Unable to attach because capture "OGG$CAP_EXTDOWN" is missing dictionary redo logs.
    2012-10-28 17:18:51 ERROR OGG-01668 PROCESS ABENDING.
    Further investigation revealed redo shipment errors as follows in the source database alert log file.

    Error 1031 received logging on to the standby
    PING[ARC2]: Heartbeat failed to connect to standby 'RACDB'. Error is 1031.
    2012-10-28 17:38:54.477000 +02:00
    Error 1031 received logging on to the standby
    PING[ARC2]: Heartbeat failed to connect to standby 'RACDB'. Error is 1031.
    2012-10-28 17:39:55.407000 +02:00
    Error 1031 received logging on to the standby
    PING[ARC2]: Heartbeat failed to connect to standby 'RACDB'. Error is 1031.
    Look at password file authentication. Make sure that ignorecase=y option is used when creating the password files.

    1. Issue 2
      2012-10-28 16:57:43 ERROR OGG-02050 Not enough database memory to honor requested MAX_SGA_SIZE of 300.

      2012-10-28 16:57:43 ERROR OGG-01668 PROCESS ABENDING

      Somehow, omitting an explicit specification of max_sga_size and parallelism resolved the issue. I commented out the following parameter in the extract parameter file:

      --TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 700, parallelism 3)
    2. Issue 3
      LOGMINER: End mining logfile for session 2 thread 1 sequence 369, +DATA/racd/onlinelog/group_2.1143.787360855

      2012-10-28 17:45:41.009000 +02:00

      FAL[server, ARCt]: Error 16047 creating remote archivelog file 'RACDB'

      FAL[server, ARCt]: FAL archive failed, see trace file.

      ARCH: FAL archive failed. Archiver continuing

      ORACLE Instance RACD1 – Archival Error. Archiver continuing.

      LOGMINER: Begin mining logfile for session 2 thread 1 sequence 370, +DATA/racd/onlinelog/group_1.1142.787360841

      2012-10-28 17:46:00.723000 +02:00

      PING[ARC2]: Heartbeat failed to connect to standby 'RACDB'. Error is 16047.

      Set on the downstream database the following

      SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(RACD,RACDSTB,RACDB)' scope=both sid='*';
      System altered.
      SQL>
    3. Issue 4
      2012-10-28 19:00:01 ERROR OGG-02032 Failed to set TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine, Y)
      Do not split TRANLOGOPTIONS INTEGRATEDPARAMS across two lines; use a single line. I commented out the parameter for explicit sizing and ran with the default size.

      --TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 700, parallelism 3)

      TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
    4. Issue 5
      BEGIN DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (grantee=>'ogg_extract', privilege_type=>'capture', grant_select_privileges=>true, do_grants=>TRUE); END;
      *
      ERROR at line 1:
      ORA-04031: unable to allocate 45664 bytes of shared memory ("shared
      pool","GGS_DDL_TRIGGER_BEFORE","KKTSR^81131d18","trigger source")
      ORA-06512: at "SYS.DBMS_XSTREAM_ADM_INTERNAL", line 2268
      ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 24
      ORA-06512: at line 1
      SQL>
      Make sure that you have proper memory allocation at the database level and at the OS level (/dev/shm) if OEL 5.x is used.
    5. Issue 6
      Errors in file /u01/app/oracle/diag/rdbms/racd/RACD1/trace/RACD1_m000_10405.trc:
      ORA-04031: unable to allocate 56 bytes of shared memory ("shared pool","unknown object","KKSSP^98","kglseshtSegs")
      Process m000 died, see its trace file
      2012-10-28 14:32:16.520000 +02:00
      Dumping diagnostic data in directory=[cdmp_20121028143222], requested by (instance=2, osid=20338 (M000)), summary=[incident=88665].
      2012-10-28 14:32:42.376000 +02:00
      Errors in file /u01/app/oracle/diag/rdbms/racd/RACD1/trace/RACD1_mmon_9935.trc:
      ORA-04031: unable to allocate 1072 bytes of shared memory ("shared pool","select o.owner#,o.name,o.nam...","sga heap(1,1)","KQR L PO")
      2012-10-28 14:32:49.260000 +02:00
      Errors in file /u01/app/oracle/diag/rdbms/racd/RACD1/trace/RACD1_mmon_9935.trc:
      ORA-04031: unable to allocate 1072 bytes of shared memory ("shared pool","select i.obj#,i.ts#,i.file#,...","sga heap(1,1)","KQR L PO")
      2012-10-28 14:33:08.989000 +02:00
      Errors in file /u01/app/oracle/diag/rdbms/racd/RACD1/trace/RACD1_m001_10479.trc:
      ORA-04031: unable to allocate 760 bytes of shared memory ("shared pool","unknown object","KKSSP^29","kglss")
      Process m001 died, see its trace file
      2012-10-28 14:33:23.739000 +02:00
      Starting background process CJQ0
      CJQ0 started with pid=29, OS id=10494
      2012-10-28 14:33:30.072000 +02:00
      Restarting dead background process CJQ0
      CJQ0 started with pid=29, OS id=10518
      2012-10-28 14:33:33.473000 +02:00
      Starting ORACLE instance (normal)
      2012-10-28 14:33:37.766000 +02:00
      Errors in file /u01/app/oracle/diag/rdbms/racd/RACD1/trace/RACD1_smon_9927.trc:
      ORA-00604: error occurred at recursive SQL level 2
      ORA-04031: unable to allocate 1072 bytes of shared memory ("shared pool","select i.obj#,i.ts#,i.file#,...","sga heap(1,1)","KQR L PO")
      Make sure that you have proper memory allocation at the database level and at the OS level (/dev/shm) if OEL 5.x is used.
Summary

You looked at an example of configuring OGG downstream integrated extract and how to troubleshoot some issues that can be encountered during the configuration.

Appendix

GoldenGate downstream capture: turning on downstream_real_time_mine

GoldenGate process OGG$CAP_EXTDOWN altered (6) on instance 2

2012-10-28 19:35:08.447000 +02:00

GoldenGate CAPTURE CP01 for OGG$CAP_EXTDOWN started with pid=35, OS id=10691

2012-10-28 19:35:10.243000 +02:00

GoldenGate CAPTURE CP01 for OGG$CAP_EXTDOWN with pid=35, OS id=10691 is in combined capture and apply mode.

Capture OGG$CAP_EXTDOWN is handling 1 applies.

— capture is running in apply-state checkpoint mode.

GoldenGate downstream capture OGG$CAP_EXTDOWN uses downstream_real_time_mine: TRUE

Starting persistent Logminer Session with sid = 1 for GoldenGate Capture OGG$CAP_EXTDOWN

2012-10-28 19:35:12.407000 +02:00

LOGMINER: Parameters summary for session# = 1

LOGMINER: Number of processes = 4, Transaction Chunk Size = 1

LOGMINER: Memory Size = 512M, Checkpoint interval = 1000M

LOGMINER: SpillScn 0, ResetLogScn 1184647

LOGMINER: summary for session# = 1

LOGMINER: StartScn: 5511601 (0x0000.005419b1)

LOGMINER: EndScn: 0

LOGMINER: HighConsumedScn: 5511601 (0x0000.005419b1)

LOGMINER: session_flag: 0xf1

LOGMINER: DDL CKPT is on.

LOGMINER: Read buffers: 64

LOGMINER: Memory LWM limit: 10M, 97%

LOGMINER: Memory Release Limit: 15000

LOGMINER: LowCkptScn: 5510405 (0x0000.00541505)

LOGMINER: HighCkptScn: 0 (0x0000.00000000)

LOGMINER: SkipScn: 5510405 (0x0000.00541505)

2012-10-28 19:35:14.805000 +02:00

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), reader MS00 pid=37 OS id=10696 sid=34 started

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), builder MS01 pid=67 OS id=10698 sid=96 started

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), preparer MS02 pid=68 OS id=10700 sid=94 started

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), preparer MS03 pid=69 OS id=10702 sid=88 started

2012-10-28 19:35:15.988000 +02:00

LOGMINER: Begin mining logfile for session 1 thread 1 sequence 383, +DATA/racdb/archivelog/2012_10_28/thread_1_seq_383.1939.797887839

LOGMINER: Begin mining logfile for session 1 thread 2 sequence 139, +DATA/racdb/archivelog/2012_10_28/thread_2_seq_139.1938.797887875

LOGMINER: End mining logfile for session 1 thread 1 sequence 383, +DATA/racdb/archivelog/2012_10_28/thread_1_seq_383.1939.797887839

2012-10-28 19:35:17.352000 +02:00

LOGMINER: Begin mining logfile for session 1 thread 1 sequence 384, +DATA/racdb/archivelog/2012_10_28/thread_1_seq_384.1943.797887949

LOGMINER: End mining logfile for session 1 thread 2 sequence 139, +DATA/racdb/archivelog/2012_10_28/thread_2_seq_139.1938.797887875

LOGMINER: Begin mining logfile for session 1 thread 2 sequence 140, +DATA/racdb/archivelog/2012_10_28/thread_2_seq_140.1942.797887985

LOGMINER: End mining logfile for session 1 thread 2 sequence 140, +DATA/racdb/archivelog/2012_10_28/thread_2_seq_140.1942.797887985

LOGMINER: Begin mining logfile for session 1 thread 2 sequence 141, +DATA/racdb/archivelog/2012_10_28/thread_2_seq_141.1944.797888041

LOGMINER: End mining logfile for session 1 thread 1 sequence 384, +DATA/racdb/archivelog/2012_10_28/thread_1_seq_384.1943.797887949

LOGMINER: Begin mining logfile for session 1 thread 1 sequence 385, +DATA/racdb/archivelog/2012_10_28/thread_1_seq_385.1947.797888017

LOGMINER: End mining logfile for session 1 thread 2 sequence 141, +DATA/racdb/archivelog/2012_10_28/thread_2_seq_141.1944.797888041

2012-10-28 19:35:20.490000 +02:00

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), builder MS01 pid=67 OS id=10698 sid=96 stopped

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), preparer MS02 pid=68 OS id=10700 sid=94 stopped

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), preparer MS03 pid=69 OS id=10702 sid=88 stopped

LOGMINER: session#=1 (OGG$CAP_EXTDOWN), reader MS00 pid=37 OS id=10696 sid=34 stopped

GoldenGate CAPTURE CP01 for OGG$CAP_EXTDOWN with pid=35, OS id=10691 stopped

2012-10-28 19:36:25.480000 +02:00

RFS[7]: Assigned to RFS process 10770

RFS[7]: Selected log 8 for thread 2 sequence 143 dbid 1643830466 branch 787454697

2012-10-28 19:36:38.259000 +02:00

RFS[8]: Assigned to RFS process 10790

RFS[8]: Selected log 9 for thread 2 sequence 142 dbid 1643830466 branch 787454697

2012-10-28 19:36:41.927000 +02:00

RFS LogMiner: Registered logfile [+DATA/racdb/archivelog/2012_10_28/thread_2_seq_142.1950.797888201] to LogMiner session id [1]