Saturday, November 23, 2013

Goldengate Discard Monitoring and Alert.


This script monitors discards. Schedule it to run at minute 59 of each hour.

It sends an email if any discards are found for any process. Please customize it to your needs.

It is still under development and a bit raw, but it does the job.
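Assuming the script is saved at a path like /home/oracle/scripts/gg_discard_check.ksh (a hypothetical location), the crontab entry would be:

```
59 * * * * /home/oracle/scripts/gg_discard_check.ksh > /tmp/gg_discard_check.out 2>&1
```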

#!/usr/bin/ksh
##############################################################
# To get Accurate Result schedule it at each Hour like 59 * * * * from Cron
##############################################################
# Export variables
##############################################################
export ORACLE_HOME=/usr/local/opt/oracle/11r2
export GG_HOME=/usr/local/opt/oracle/ggs/112106
export PATH=$ORACLE_HOME/bin:$GG_HOME:$PATH
##############################################################
# Customized "info all" command to get Replicat Processes
##############################################################
# Get all Extract/Replicat group names from "info all"
X=$(
${GG_HOME}/ggsci <<EOF | grep -E 'EXTRACT|REPLICAT' | awk '{print $3}'
info all
exit
EOF
)
for i in $X
do
# Collect hourly stats (including discards) for each group
${GG_HOME}/ggsci <<! > /tmp/info.log
stats $i, hourly
exit
!
echo "______________________________"
##############################################################
# Check and send Email if any discard Found. 
##############################################################
Y=`grep discards /tmp/info.log |tr "." " "|awk '{print $3}'`
echo $Y
for c in $Y
do
if [ $c -gt 0 ];then
echo "send email for process $i"
mailx -s "Discards found in $i" emailadd@company.com < /tmp/info.log
cat /tmp/info.log
else
echo "don't send email"
fi
done
done
###############################################################
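The grep/tr/awk extraction above can be sketched standalone; the sample line mimics typical ggsci stats discard output, which is an assumption about your GoldenGate version's exact wording:

```shell
#!/bin/sh
# Sample line shaped like ggsci "stats" discard output (format assumed).
sample="Total discards                               3.00"
# Same pipeline as the monitoring script: split on "." and take field 3.
count=$(printf '%s\n' "$sample" | tr "." " " | awk '{print $3}')
echo "$count"   # prints the integer discard count
```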

Tuesday, October 8, 2013

Identify and Diagnostic of Hardware Failure using ILOM on Exadata

To identify faulty hardware on Exadata, use the Sun ILOM.

Below are some steps to identify and diagnose hardware failures on an Exadata machine using the ILOM command line.

You can take a snapshot and upload it to the SR; the ILOM snapshot will help Oracle diagnose the fault and provide further support.

To take a snapshot, log in to the ILOM.

Oracle(R) Integrated Lights Out Manager

Version 3.1.2.10 r74387

Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.

-> help
The help command is used to view information about commands and targets

Usage: help [-format wrap|nowrap] [-o|-output terse|verbose]
[<command>|legal|targets|<target>|<target> <property>]

Special characters used in the help command are
[]   encloses optional keywords or options
<>   encloses a description of the keyword
     (If <> is not present, an actual keyword is indicated)
|    indicates a choice of keywords or options

help <target>              displays description of this target and its
                           properties
help <target> <property>   displays description of this property of this target
help targets               displays a list of targets
help legal                 displays the product legal notice

Commands are:
cd
create
delete
dump
exit
help
load
reset
set
show
start
stop
version
--- Choose which type of snapshot you want to take.

-> set /SP/diag/snapshot dataset=data      (valid values: normal|full|fruid|data)
-> set /SP/diag/snapshot dump_uri=sftp://username:pwd@host_ip_address/directory
   (ftp:// URIs are also accepted)
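Once the dump_uri is set, the snapshot starts collecting. You can watch its state from the same session (this is the standard ILOM show command; verify the property output against your firmware's documentation):

```
-> show /SP/diag/snapshot
```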


Identify Hardware Failure.

Method 1:
-> show /SP/faultmgmt

 /SP/faultmgmt
    Targets:
        shell
        0 (/SYS/MB/P0/D7)

Method 2, which is much more detailed:

-> show -o table -level all /SP/faultmgmt


Target              | Property               | Value
--------------------+------------------------+---------------------------------
/SP/faultmgmt/0     | fru                    | /SYS/MB/P0/D7
/SP/faultmgmt/0/    | class                  | fault.memory.intel.sb.dimm_ce
 faults/0           |                        |
/SP/faultmgmt/0/    | sunw-msg-id            | SPX86-8004-CE
 faults/0           |                        |
/SP/faultmgmt/0/    | component              | /SYS/MB/P0/D7
 faults/0           |                        |
/SP/faultmgmt/0/    | uuid                   | 34d4bfaa-dummy-ebc8-f95a-dummy-
 faults/0           |                        | d17a
/SP/faultmgmt/0/    | timestamp              | 2013-10-05/23:13:06
 faults/0           |                        |
/SP/faultmgmt/0/    | fru_part_number        | 001-0003
 faults/0           |                        |
/SP/faultmgmt/0/    | fru_dash_level         | 01
 faults/0           |                        |
/SP/faultmgmt/0/    | fru_rev_level          | 50
 faults/0           |                        |
/SP/faultmgmt/0/    | fru_serial_number      | 0000dummy00000dummy
 faults/0           |                        |
/SP/faultmgmt/0/    | fru_manufacturer       | Hynix Semiconductor Inc.
 faults/0           |                        |
/SP/faultmgmt/0/    | fru_name               | 8192MB DDR3 SDRAM DIMM
 faults/0           |                        |
/SP/faultmgmt/0/    | system_manufacturer    | Oracle Corporation
 faults/0           |                        |
/SP/faultmgmt/0/    | system_name            | Exadata X3-2
 faults/0           |                        |
/SP/faultmgmt/0/    | system_part_number     | Exadata X3-2
 faults/0           |                        |
/SP/faultmgmt/0/    | system_serial_number   | AK00122916
 faults/0           |                        |
/SP/faultmgmt/0/    | chassis_manufacturer   | Oracle Corporation
 faults/0           |                        |
/SP/faultmgmt/0/    | chassis_name           | SUN FIRE X4270 M3
 faults/0           |                        |
/SP/faultmgmt/0/    | chassis_part_number    | 700000000
 faults/0           |                        |
/SP/faultmgmt/0/    | chassis_serial_number  | 1323XXXXX03F
 faults/0           |                        |
/SP/faultmgmt/0/    | system_component_manuf | Oracle Corporation
 faults/0           | acturer                |
/SP/faultmgmt/0/    | system_component_name  | SUN FIRE X4270 M3
 faults/0           |                        |
/SP/faultmgmt/0/    | system_component_part_ | 70000000
 faults/0           | number                 |
/SP/faultmgmt/0/    | system_component_seria | 1323XXXXX03F
 faults/0           | l_number               |
/SP/faultmgmt/0/    | serd_count             | 0x7b
 faults/0           |                        |
/SP/faultmgmt/0/    | _list_idx              | 0
 faults/0           |                        |
/SP/faultmgmt/0/    | _list_sz               | 1
 faults/0           |                        | 
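After the faulty DIMM (here /SYS/MB/P0/D7) is replaced, the fault can typically be cleared from the same ILOM session; the `clear_fault_action` property is standard ILOM, but verify it against your firmware version's documentation:

```
-> set /SYS/MB/P0/D7 clear_fault_action=true
```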

Sunday, September 29, 2013

Rman Backup Shell script

This is a backup script to schedule in crontab.
Create three RMAN command files first.
These command files are called by the shell script below to kick off whichever type of backup you want.

1) Full backup
2) Incremental backup
3) Archivelog backup

To run the backup script, provide arguments as:

./script_name ORACLE_SID FULL|INCR|ARCH
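For reference, a minimal sketch of what the archivelog command file (the script's /u01/test_disk_arch.rcv) might contain; the channel name, backup destination, and format string are illustrative, not from the original post:

```
run {
  allocate channel d1 device type disk;
  backup archivelog all format '/u01/backup/arch_%d_%s_%p.bkp' delete input;
  release channel d1;
}
```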
Backup script

#!/bin/ksh
###########################################################
#######CHECK ARGUMENTS
#
if [ $# -lt 2 ] ;
then
echo "Usage: script_name ORACLE_SID FULL|INCR|ARCH"
exit 1
else
echo "Enough Argument Passed"
fi
###########################################################
export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI'

export ORACLE_SID=$1
echo $ORACLE_SID
if [[ $( grep -iwc "$ORACLE_SID" /etc/oratab ) -eq 1 ]]
then
echo "ORACLE SID Found"
echo "$ORACLE_SID is instance"
ORACLE_HOME=`grep -iw $ORACLE_SID /etc/oratab | cut -d ":" -f2`
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=$ORACLE_HOME/bin:$PATH
RMAN=$ORACLE_HOME/bin/rman
echo "Current Oracle Instance :-------------: $ORACLE_SID"
echo "Current Oracle Home :-----------------: $ORACLE_HOME"
echo "Current TNS admin :-------------------: $TNS_ADMIN"
echo "Current RMAN :------------------------: $RMAN"
else
echo "$ORACLE_SID not found in oratab file."
exit 1
fi
###########################################################
export BKTYPE=$2
echo $BKTYPE
case $BKTYPE in
FULL) print "\n You selected FULL DB BACKUP"
export CMD=/u01/test_disk_bkp.rcv
echo $CMD
;;
INCR) print "\n You selected Incremental Backup"
export CMD=/u01/test_disk_incr_bkp.rcv
echo $CMD
;;
ARCH) print "\n You selected Archivelog backup"
export CMD=/u01/test_disk_arch.rcv
echo $CMD
;;
*) print "\n Invalid Argument"
print " choose FULL or INCR or ARCH"
sleep 0
exit
;;
esac
###########################################################

rman <<EOF
connect target /
connect catalog rman/dummy@RMAN_CATALOG_STRING
@$CMD
exit
EOF
####################################################################################
RETURN_STATUS=$?
if [ $RETURN_STATUS -eq 0 ]; then
  echo "$ORACLE_SID Backup successful" | mail -s "$ORACLE_SID backup Completed" emailadd@company.com
else
  echo "$ORACLE_SID Backup not successful" | mail -s "$ORACLE_SID backup failed" emailadd@company.com
fi
####################################################################################
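The oratab lookup used by this script can be exercised standalone; the sketch below writes a sample oratab to a temp file (the real script reads /etc/oratab):

```shell
#!/bin/sh
# Demonstrate the grep/cut oratab parsing against an inline sample file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
ORCL:/u01/app/oracle/product/11.2.0/dbhome_1:Y
TEST:/u01/app/oracle/product/12.1.0/dbhome_1:N
EOF
ORACLE_SID=ORCL
# -w matches the SID as a whole word; -c counts matching lines.
if [ "$(grep -wc "$ORACLE_SID" "$tmp")" -eq 1 ]; then
  ORACLE_HOME=$(grep -w "$ORACLE_SID" "$tmp" | cut -d ":" -f2)
  echo "$ORACLE_HOME"
fi
rm -f "$tmp"
```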

Tuesday, September 24, 2013

OGG-01028 Formatting error

An Extract can abend if parameters are not set correctly.
Our Extract abended with the error below; after searching Metalink I found it is considered a database bug, fixed in 11.2.0.4.
But there is a workaround.

Error
2013-09-24 04:08:30  ERROR   OGG-01028  Formatting error on: table name DUMMY.DUMMY_INQ, rowid AADYhHAHVAAJLIcAAA, XID 8.26.543865, 
position (Seqno 7314, RBA 212298768). Error converting timestamp with timezone from Oracle to ASCII format for column DUMMY_UPDATE.

Metalink Suggestion
GoldenGate extract abends: OGG-01028 Formatting error on: table name xxxx.xxxx rowid , XID nnnn.nn.nnnn, position (Seqno n, RBA n). (Doc ID 1558791.1)

Oracle GoldenGate - Version 11.2.1.0.1 and later
Information in this document applies to any platform.

This is related to database bug 15947884 (base bug 14053498).

In-house workaround
--Add the SETENV parameter before USERID in the Extract
SETENV (NLS_LANG="AMERICAN_AMERICA.US7ASCII")
--Add TRANLOGOPTIONS in the Extract
TRANLOGOPTIONS INCLUDEREGIONID
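Placed in context, the two workaround lines sit in the Extract parameter file roughly like this (the group, user, trail, and table names are placeholders, not from the original post):

```
EXTRACT ext1
SETENV (NLS_LANG="AMERICAN_AMERICA.US7ASCII")
USERID ggadmin, PASSWORD ********
TRANLOGOPTIONS INCLUDEREGIONID
EXTTRAIL ./dirdat/aa
TABLE DUMMY.*;
```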

Wednesday, June 26, 2013

Find Transaction SQL from RBA

logdump> open <trail file>
logdump> GHDR ON
logdump> DETAIL DATA
logdump> HEADERTOKEN
logdump> GGSTOKEN detail
logdump> pos <rba>
TokenID x47 'G' Record Header    Info x01  Length 1180
TokenID x48 'H' GHDR             Info x00  Length   55
 4504 0041 0417 0fff 02f2 052c 5ca7 4670 0000 0000 | E..A.......,\.Fp....
 05c2 b45c 0001 83f6 0352 0000 0001 4e53 5044 4241 | ...\.....R....dummy
 2e4e 5350 5f49 4e56 454e 544f 5259 00             | .DUMMY_INVENTORY.
TokenID x44 'D' Data             Info x00  Length 1047
TokenID x54 'T' GGS Tokens       Info x00  Length   58
TokenID x5a 'Z' Record Trailer   Info x01  Length 1180
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :  1047  (x0417)   IO Time    : 2013/06/12 14:08:51.000.944
IOType     :    15  (x0f)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
---------------------------------------------------------------
---------------------------------------------------------------
-- AuditRBA is the redo sequence number; AuditPos is the SCN
AuditRBA   :      99318       AuditPos   : 96646236   
---------------------------------------------------------------
---------------------------------------------------------------
Continued  :     N  (x00)     RecCount   :     1  (x01)

2013/06/12 14:08:51.000.944 FieldComp            Len  1047 RBA 433498317
Name: DUMMYDBA.DUMMY_INVENTORY
After  Image:                                             Partition 4   G  s
 0000 000e 0000 000a 4d37 3266 3833 6464 3034 0001 | ........M72f83dd04..
 0003 0000 4e00 0200 0c00 0000 0847 5244 4e43 4130 | ....N........GRDNCA0
 3100 0300 0900 0000 0531 3256 5450 0004 0009 0000 | 1........12VTP......
 0005 3131 5437 5600 0500 0a00 0000 0633 3130 3231 | ..11T7V........31021
 3700 0600 0a00 0000 0634 3130 3630 3600 0700 0300 | 7........410606.....
 004e 0008 0003 0000 4e00 0900 0300 004e 000a 0008 | .N......N......N....
 0000 0004 4654 544e 000b 0015 ffff 3139 3030 2d30 | ....FTTN......1900-0
Column     0 (x0000), Len    14 (x000e)
 0000 000a 4d37 3266 3833 6464 3034                | ....M72f83dd04
Column     1 (x0001), Len     3 (x0003)
 0000 4e                                           | ..N
Column     2 (x0002), Len    12 (x000c)
 0000 0008 4752 444e 4341 3031                     | ....GRDNCA01
Column     3 (x0003), Len     9 (x0009)
 0000 0005 3132 5654 50                            | ....12VTP
Column     4 (x0004), Len     9 (x0009)
 0000 0005 3131 5437 56                            | ....11T7V
Column     5 (x0005), Len    10 (x000a)
 0000 0006 3331 3032 3137                          | ....310217
Column     6 (x0006), Len    10 (x000a)
 0000 0006 3431 3036 3036                          | ....410606
Column    57 (x0039), Len     4 (x0004)
 ffff 3139 3030 2d30 312d 3031 3a30 303a 3030 3a30 | ..1900-01-01:00:00:0
 30                                                | 0
Column    78 (x004e), Len    21 (x0015)
 ffff 3139 3030 2d30 312d 3031 3a30 303a 3030 3a30 | ..1900-01-01:00:00:0
 30                                                | 0
Column    79 (x004f), Len     4 (x0004)
 ffff 0000                                         | ....
Column    80 (x0050), Len     5 (x0005)
 0000 0001 59                                      | ....Y

GGS tokens:
TokenID x52 'R' ORAROWID         Info x00  Length   20
 4141 4338 6c39 4143 6d41 4141 7a68 3141 4148 0001 | AAC8l9ACmAAAzh1AAH..
TokenID x4c 'L' LOGCSN           Info x00  Length   14
 3133 3332 3539 3536 3332 3432 3337                | 13325956324237
------XID Transactions. 
TokenID x36 '6' TRANID           Info x00  Length   12
 3332 2e32 382e 3831 3139 3133                     | 32.28.811913 


The transaction XID appears at the end of the logdump record format.


This way we can find the XID, redo log sequence, and SCN of the record.

Now, how do we find the original transaction?

Go to the source database.

Find the location of the archived log whose sequence matches AuditRBA; in our case it is 99318.
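A hedged sketch of finding that path on the source, using the standard v$archived_log view (adjust thread# for RAC):

```sql
SELECT name
FROM   v$archived_log
WHERE  sequence# = 99318
AND    thread#   = 1;
```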

spool output.txt
ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
set numformat 9999999999999999999999
set long 999999
set lines 190
set pages 555
exec dbms_logmnr.add_logfile(' complete path of arc seq 99318 '); 
exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);
select username, os_username, DATA_OBJ#, rbasqn, RBABLK, rbabyte, RS_ID, row_id, rollback, seg_name,seg_owner,operation, timestamp, sql_redo, scn, cscn from v$logmnr_contents where xidusn || '.' || xidslt || '.' || xidsqn='32.28.811913';
EXECUTE DBMS_LOGMNR.END_LOGMNR;
spool off;

Wednesday, June 19, 2013

Goldengate User Creation.

This is a SQL script to create the GG admin user. The script runs DBMS packages and grants privileges according to the Oracle database version.

It also creates a dedicated tablespace for the GoldenGate user.

prompt "Enter Goldengate Tablespace Name"
def GGATE_ADMIN_TABLESPACE_NAME=&1
prompt "Enter Goldengate Tablespace Location"
def GG_ADMIN_TABLESPACE_LOCATION=&2
prompt "Enter Goldengate Admin UserName"
def GOLDENGATE_ADMIN_USER=&3
prompt "Enter Goldengate username password"
def GOLDENGATE_ADMIN_USER_PWD=&4
set serveroutput on;
prompt "-------------------------------------------------------------------------"
prompt "This script first drops the old tablespace and GoldenGate user if the same names exist"
prompt "Provide DISKGROUP NAME LIKE +DATA if Tablespace location is on ASM  "
PROMPT "OR"
prompt "provide filesystem with datafile name like /u01/app/datafile/GGADMIN.dbf"
prompt "-------------------------------------------------------------------------"
spool GG_ADMIN_USER_CREATION.LOG
prompt "Goldengate User must have its own Tablespace; it can not be shared"
Prompt "with any other user or the ddl_setup script will fail"
--prompt "=======first read above instruction clearly============"
prompt "-------------------------------------------------------------------------"
prompt "------------------------Disable DDL--------------------------------------"
@ddl_disable.sql
SET VERIFY OFF;
prompt "-----------------------Dropping any existing Tablespace------------------"
DROP TABLESPACE &GGATE_ADMIN_TABLESPACE_NAME INCLUDING CONTENTS AND DATAFILES;
prompt "-----------------------Dropping any existing User-------------------------"
DROP USER &GOLDENGATE_ADMIN_USER CASCADE;
Prompt "-----------------------Creating new Tablespace --------------------------"
CREATE TABLESPACE &GGATE_ADMIN_TABLESPACE_NAME  datafile '&GG_ADMIN_TABLESPACE_LOCATION' SIZE 1500M autoextend on;
--PROMPT "ENTER PASSWORD FOR NEW GOLDENGATE ADMIN USER"
prompt "-----------------------creating new Goldengate Admin User----------------"
CREATE USER &GOLDENGATE_ADMIN_USER IDENTIFIED BY &GOLDENGATE_ADMIN_USER_PWD DEFAULT TABLESPACE &GGATE_ADMIN_TABLESPACE_NAME TEMPORARY TABLESPACE TEMP;
GRANT CONNECT,alter session,CREATE SESSION,RESOURCE,CREATE TABLE TO &GOLDENGATE_ADMIN_USER;
GRANT FLASHBACK ANY TABLE TO &GOLDENGATE_ADMIN_USER;
grant SELECT ANY TRANSACTION to &GOLDENGATE_ADMIN_USER;
grant SELECT ANY TABLE to &GOLDENGATE_ADMIN_USER;
grant SELECT ANY DICTIONARY to &GOLDENGATE_ADMIN_USER;
grant CREATE TABLE to &GOLDENGATE_ADMIN_USER;
grant ALTER SESSION to &GOLDENGATE_ADMIN_USER;
grant UNLIMITED TABLESPACE  to &GOLDENGATE_ADMIN_USER;
grant EXECUTE ON utl_file to &GOLDENGATE_ADMIN_USER;
grant select on v_$instance to &GOLDENGATE_ADMIN_USER;
grant select on v_$database to &GOLDENGATE_ADMIN_USER;
grant LOCK ANY TABLE to &GOLDENGATE_ADMIN_USER;
PROMPT "----------DBA PRIVILEGES WILL BE GRANTED TO GOLDENGATE_ADMIN_USER--------"
GRANT DBA TO &GOLDENGATE_ADMIN_USER;
Prompt "-------------------------------------------------------------------------"
Prompt "-------------------------------------------------------------------------"
Prompt " Running PLSQL BLOCK to execute procedure and grants according to Oracle version"
Prompt "-------------------------------------------------------------------------"
Prompt "-------------------------------------------------------------------------"
Prompt
Prompt


declare
v1 VARCHAR2(200);
v2 varchar2(200);
V3 varchar2(1000);
l_sql varchar2(1000);
begin
select version into v1 from DBA_REGISTRY WHERE COMP_NAME LIKE '%Catalog Views%';
 CASE
      WHEN v1 like '10.2%' THEN
DBMS_OUTPUT.PUT_LINE('it is Oracle version 10.2');
execute immediate 'BEGIN dbms_streams_auth.grant_admin_privilege(''&GOLDENGATE_ADMIN_USER''); END;';
EXECUTE IMMEDIATE 'grant insert on system.logmnr_restart_ckpt$ to &GOLDENGATE_ADMIN_USER';
EXECUTE IMMEDIATE 'grant update on sys.streams$_capture_process to &GOLDENGATE_ADMIN_USER';
EXECUTE IMMEDIATE 'grant become user to &GOLDENGATE_ADMIN_USER';
dbms_output.put_line('Executing dbms_streams_auth.grant_admin_privilege');
WHEN v1 like '11.1%' THEN
DBMS_OUTPUT.PUT_LINE('it is Oracle version 11.1');
execute immediate 'BEGIN dbms_streams_auth.grant_admin_privilege(''&GOLDENGATE_ADMIN_USER''); END;';
EXECUTE IMMEDIATE 'grant become user to &GOLDENGATE_ADMIN_USER';
dbms_output.put_line('Executing dbms_streams_auth.grant_admin_privilege');
WHEN v1 like '11.2.0.3%' THEN
DBMS_OUTPUT.PUT_LINE('it is Oracle version 11.2.0.3');
EXECUTE IMMEDIATE 'begin dbms_goldengate_auth.grant_admin_privilege(''&GOLDENGATE_ADMIN_USER''); END;';
dbms_output.put_line('Executing dbms_goldengate_auth.grant_admin_privilege');
    END CASE;
exception when others
then
dbms_output.put_line('ERROR IS ' || SQLCODE || SQLERRM);
end;
/
prompt "---------------------script end-------------------------------------------"
prompt
prompt
prompt "~~~~~~~~please verify GG_ADMIN_USER_CREATION.LOG for Errors ~~~~~~~~~~~~~~"
prompt
prompt
prompt
spool OFF;

Friday, June 7, 2013

OGG-01930 Errors In Datastore

A few days back we got the error below on a few GG instances.

2013-06-07 01:25:06 WARNING OGG-01930 Oracle GoldenGate Delivery for Oracle, dummy1.prm: Datastore error in 'dirbdb': BDB0113 Thread/process 17080/1125251392 failed: BDB1507 Thread died in Berkeley DB library.


We later found that one of our teammates was trying to configure monitoring using the OEM plugin for OGG.

If your version of GoldenGate is between 11.2.0.0.0 and 11.2.1.0.4, you must upgrade to OGG 11.2.1.0.5_02 or later and re-create the datastore.

You can find more details in:
GoldenGate Monitor: BDB0060 BDB1581 BDB1582 BDB1507 BDB0118 BDB2027 BDB0113 BDB0087 Errors In Datastore [ID 1495998.1]

Wednesday, May 8, 2013

Goldengate Implementation Automated - I

This is the first script in the series: it installs DDL/sequence support and creates the heartbeat tables.

It also creates the GoldenGate parameter file for the heartbeat tables and updates the GLOBALS file for the GG schema.

Pardon my editing; HTML does not like ">".

###########################################################################
#!/bin/ksh


######################################

###functions.

checksid ()

{

ORACLE_SID=""

ORACLE_HOME=""

TNS_ADMIN=""

cat /etc/oratab | cut -d ":" -f1

#echo "SID must be in oratab file"

echo "Enter ORACLE_SID of Database from Above:\c "

read ORACLE_SID

while :

do

if [[ $( grep -wc "$ORACLE_SID" /etc/oratab ) -eq 1 ]]

then

echo "ORACLE SID Found"

echo "$ORACLE_SID is instance"

ORACLE_HOME=`grep -w $ORACLE_SID /etc/oratab | cut -d ":" -f2`

TNS_ADMIN=$ORACLE_HOME/network/admin

PATH=$ORACLE_HOME/bin:$PATH

GGHOME=$PWD

echo "Current Oracle Instance :-------------: $ORACLE_SID"

echo "Current Oracle Home :-----------------: $ORACLE_HOME"

echo "Current TNS admin :-------------------: $TNS_ADMIN"

echo "Current Goldengate Home :-------------: $GGHOME"

break;

#exit 1

else

echo "Enter valid ORACLE_SID or press ctl+c to cancel script:\c"

read ORACLE_SID

continue;

fi

done

}

###############################################################



checksid

echo "Enter Goldengate Home directory:\c"

read GGHOME

echo "$GGHOME"

echo "----Create Install directory-----"

mkdir -p $GGHOME/admin/etc

mkdir -p $GGHOME/admin/logs

mkdir -p $GGHOME/admin/scripts

mkdir -p $GGHOME/dirprm/parameterfiles

mkdir -p $GGHOME/admin/install

echo "----copy sqls to Install dir-----"

cp $GGHOME/*.sql $GGHOME/admin/install

export GGINSTALL=$GGHOME/admin/install

echo $GGINSTALL

echo "------start processing ----------"

for i in $GGINSTALL/ddl_remove.sql $GGINSTALL/marker_setup.sql $GGINSTALL/ddl_setup.sql

do

echo "----- $i file Processing Now --------"

sed -e '/new_value gg_user/d' $i > $GGINSTALL/temp1.sql

sed -e '/accept/d' $GGINSTALL/temp1.sql > $GGINSTALL/temp2.sql

mv $GGINSTALL/temp2.sql $i

done

echo "verify files"

echo "---------File processing is Done -------"

echo ""

echo ""

echo "----------------------------------------"

echo "Enter Goldengate Admin User for DDL and sequence Installation :\c"

read GOLDENGATE_ADMIN_USER

echo $GOLDENGATE_ADMIN_USER

echo "DISABLE DDL and remove if any Existing Installation"

$ORACLE_HOME/bin/sqlplus -s / as sysdba <<EOF
PROMPT "=========================DISABLE DDL TRIGGER =========================================="

@$GGINSTALL/ddl_disable.sql

PROMPT "===========================REMOVING DDL ==============================================="

@$GGINSTALL/ddl_remove.sql $GOLDENGATE_ADMIN_USER

EOF

echo "Now Installing Sequence and DDL"

$ORACLE_HOME/bin/sqlplus -s / as sysdba <<EOF
PROMPT "============================INSTALL SEQUENCE==========================================="

@$GGINSTALL/sequence.sql $GOLDENGATE_ADMIN_USER

PROMPT "============================MARKER SCRIPT=============================================="

@$GGINSTALL/marker_setup.sql $GOLDENGATE_ADMIN_USER

PROMPT "============================INSTALL DDL================================================"

@$GGINSTALL/ddl_setup.sql $GOLDENGATE_ADMIN_USER

PROMPT "============================ENABLE DDL================================================="

@$GGINSTALL/ddl_enable.sql

PROMPT "============================RUN DDL PIN ==============================================="

@$GGINSTALL/ddl_pin.sql $GOLDENGATE_ADMIN_USER

EOF

echo "-----------DDL and sequence installation completed for user $GOLDENGATE_ADMIN_USER--------"

echo ""

echo ""

sleep 2

echo "--------------------creating Heartbeat Table for $GOLDENGATE_ADMIN_USER-------------------"

$ORACLE_HOME/bin/sqlplus -s / as sysdba << EOF

spool GG_HEARTBEAT_SETUP.log

set VERIFY OFF;

drop table "$GOLDENGATE_ADMIN_USER".heartbeat;

-- Create table statement

prompt "--creating HEARTBEAT TABLE "

CREATE TABLE "$GOLDENGATE_ADMIN_USER".HEARTBEAT

( ID NUMBER ,

SRC_DB VARCHAR2(30),

EXTRACT_NAME varchar2(8),

SOURCE_COMMIT TIMESTAMP,

TARGET_COMMIT TIMESTAMP,

CAPTIME TIMESTAMP,

CAPLAG NUMBER,

PMPTIME TIMESTAMP,

PMPGROUP VARCHAR2(8 BYTE),

PMPLAG NUMBER,

DELTIME TIMESTAMP,

DELGROUP VARCHAR2(8 BYTE),

DELLAG NUMBER,

TOTALLAG NUMBER,

thread number,

update_timestamp timestamp,

EDDLDELTASTATS number,

EDMLDELTASTATS number,

RDDLDELTASTATS number,

RDMLDELTASTATS number,

CONSTRAINT HEARTBEAT_PK PRIMARY KEY (thread) ENABLE

)

/

prompt "--DROPPING SEQUENCE "

DROP SEQUENCE "$GOLDENGATE_ADMIN_USER".SEQ_GGS_HEARTBEAT_ID ;

CREATE SEQUENCE "$GOLDENGATE_ADMIN_USER".SEQ_GGS_HEARTBEAT_ID INCREMENT BY 1 START WITH 1 ORDER ;

DROP TABLE "$GOLDENGATE_ADMIN_USER".GGS_HEARTBEAT;

prompt "--Creating GGS_HEARTBEAT TABLE "

CREATE TABLE "$GOLDENGATE_ADMIN_USER".GGS_HEARTBEAT

(ID NUMBER ,

SRC_DB VARCHAR2(30),

EXTRACT_NAME varchar2(8),

SOURCE_COMMIT TIMESTAMP,

TARGET_COMMIT TIMESTAMP,

CAPTIME TIMESTAMP,

CAPLAG NUMBER,

PMPTIME TIMESTAMP,

PMPGROUP VARCHAR2(8 BYTE),

PMPLAG NUMBER,

DELTIME TIMESTAMP,

DELGROUP VARCHAR2(8 BYTE),

DELLAG NUMBER,

TOTALLAG NUMBER,

thread number,

update_timestamp timestamp,

EDDLDELTASTATS number,

EDMLDELTASTATS number,

RDDLDELTASTATS number,

RDMLDELTASTATS number,

CONSTRAINT GGS_HEARTBEAT_PK PRIMARY KEY (DELGROUP) ENABLE

);



prompt "--Creating GGS_HEARTBEAT_TRIG TRIGGER "



CREATE OR REPLACE TRIGGER "$GOLDENGATE_ADMIN_USER".GGS_HEARTBEAT_TRIG

BEFORE INSERT OR UPDATE ON "$GOLDENGATE_ADMIN_USER".GGS_HEARTBEAT

FOR EACH ROW

BEGIN

select seq_ggs_HEARTBEAT_id.nextval

into :NEW.ID

from dual;

select systimestamp

into :NEW.target_COMMIT

from dual;

select trunc(to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT ),1, instr(:NEW.CAPTIME - :NEW.SOURCE_COMMIT,' ')))) * 86400

+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600

+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60

+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+7,2))

+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000

into :NEW.CAPLAG

from dual;

select trunc(to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),1, instr(:NEW.PMPTIME - :NEW.CAPTIME,' ')))) * 86400

+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+1,2)) * 3600

+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+4,2) ) * 60

+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+7,2))

+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+10,6)) / 1000000

into :NEW.PMPLAG

from dual;

select trunc(to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),1, instr(:NEW.DELTIME - :NEW.PMPTIME,' ')))) * 86400

+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+1,2)) * 3600

+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+4,2) ) * 60

+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+7,2))

+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+10,6)) / 1000000

into :NEW.DELLAG

from dual;

select trunc(to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),1, instr(:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT,' ')))) * 86400

+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600

+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60

+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+7,2))

+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000

into :NEW.TOTALLAG

from dual;

end ;

/



prompt "--Enable GGS_HEARTBEAT_TRIG TRIGGER "

ALTER TRIGGER "$GOLDENGATE_ADMIN_USER".GGS_HEARTBEAT_TRIG ENABLE;



--

-- This is for the History heartbeat table

--

prompt "--Creating GGS_HEARTBEAT_HIST sequence "



DROP SEQUENCE "$GOLDENGATE_ADMIN_USER".SEQ_GGS_HEARTBEAT_HIST ;

CREATE SEQUENCE "$GOLDENGATE_ADMIN_USER".SEQ_GGS_HEARTBEAT_HIST INCREMENT BY 1 START WITH 1 ORDER ;

DROP TABLE "$GOLDENGATE_ADMIN_USER".GGS_HEARTBEAT_HISTORY;



prompt "--Creating GGS_HEARTBEAT_HIST TABLE "



CREATE TABLE "$GOLDENGATE_ADMIN_USER".GGS_HEARTBEAT_HISTORY

( ID NUMBER ,

SRC_DB VARCHAR2(30),

EXTRACT_NAME varchar2(8),

SOURCE_COMMIT TIMESTAMP,

TARGET_COMMIT TIMESTAMP,

CAPTIME TIMESTAMP,

CAPLAG NUMBER,

PMPTIME TIMESTAMP,

PMPGROUP VARCHAR2(8 BYTE),

PMPLAG NUMBER,

DELTIME TIMESTAMP,

DELGROUP VARCHAR2(8 BYTE),

DELLAG NUMBER,

TOTALLAG NUMBER,

thread number,

update_timestamp timestamp,

EDDLDELTASTATS number,

EDMLDELTASTATS number,

RDDLDELTASTATS number,

RDMLDELTASTATS number,

CONSTRAINT GGS_HEARTBEAT_HIST_PK PRIMARY KEY (ID) ENABLE

);



prompt "--Creating GGS_HEARTBEAT_TRIG_HIST Trigger. "



CREATE OR REPLACE TRIGGER "$GOLDENGATE_ADMIN_USER".GGS_HEARTBEAT_TRIG_HIST

BEFORE INSERT OR UPDATE ON "$GOLDENGATE_ADMIN_USER".GGS_HEARTBEAT_HISTORY

FOR EACH ROW

BEGIN

select seq_ggs_HEARTBEAT_HIST.nextval

into :NEW.ID

from dual;

select systimestamp

into :NEW.target_COMMIT

from dual;

select trunc(to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT ),1, instr(:NEW.CAPTIME - :NEW.SOURCE_COMMIT,' ')))) * 86400

+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600

+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60

+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+7,2))

+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000

into :NEW.CAPLAG

from dual;

select trunc(to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),1, instr(:NEW.PMPTIME - :NEW.CAPTIME,' ')))) * 86400

+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+1,2)) * 3600

+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+4,2) ) * 60

+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+7,2))

+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+10,6)) / 1000000

into :NEW.PMPLAG

from dual;

select trunc(to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),1, instr(:NEW.DELTIME - :NEW.PMPTIME,' ')))) * 86400

+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+1,2)) * 3600

+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+4,2) ) * 60

+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+7,2))

+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+10,6)) / 1000000

into :NEW.DELLAG

from dual;

select trunc(to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),1, instr(:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT,' ')))) * 86400

+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600

+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60

+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+7,2))

+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000

into :NEW.TOTALLAG

from dual;

end ;

/



prompt "--Enable GGS_HEARTBEAT_TRIG_HIST TRIGGER"



ALTER TRIGGER "$GOLDENGATE_ADMIN_USER".GGS_HEARTBEAT_TRIG_HIST ENABLE;





Prompt "--Setup DBMS_SCHEDULER for Heart Beat"



prompt "--grant select on v_$instance to GOLDENGATE_ADMIN_USER is already given. "

prompt "--grant select on v_$database to GOLDENGATE_ADMIN_USER is already given. "



PROMPT "--CREATE PROCEDURE TO UPDATE HEARTBEAT TABLE"

CREATE OR REPLACE PROCEDURE "$GOLDENGATE_ADMIN_USER".gg_update_hb_tab IS

v_thread_num NUMBER;

v_db_unique_name VARCHAR2 (128);

BEGIN

SELECT thread#, db_unique_name

INTO v_thread_num, v_db_unique_name

FROM v\$instance, v\$database;

UPDATE "$GOLDENGATE_ADMIN_USER".heartbeat

SET update_timestamp = SYSTIMESTAMP

,src_db = v_db_unique_name

WHERE thread = v_thread_num;

COMMIT;

END;

/

PROMPT "INSERT INTO HEARTBEAT TABLE"

truncate table "$GOLDENGATE_ADMIN_USER".HEARTBEAT;



INSERT INTO "$GOLDENGATE_ADMIN_USER".HEARTBEAT (thread) select thread# from GV\$instance;

commit;



---FOR 10G USE THIS



PROMPT "CREATE GOLDENGATE JOBS"



BEGIN

DBMS_SCHEDULER.CREATE_JOB (

job_name => '"$GOLDENGATE_ADMIN_USER".OGG_HB',

job_type => 'STORED_PROCEDURE',

job_action => '"$GOLDENGATE_ADMIN_USER".GG_UPDATE_HB_TAB',

number_of_arguments => 0,

repeat_interval => 'FREQ=MINUTELY',

job_class => 'SYS.DEFAULT_JOB_CLASS',

start_date => NULL,

end_date => NULL,

enabled => FALSE,

auto_drop => FALSE,

comments => 'GoldenGate'

);

END;

/





BEGIN

SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(

name => '"$GOLDENGATE_ADMIN_USER"."OGG_HB"',

attribute => 'restartable', value => TRUE);

END;

/



BEGIN

SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(

name => '"$GOLDENGATE_ADMIN_USER"."OGG_HB"',

attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF);

END;

/



BEGIN

SYS.DBMS_SCHEDULER.enable(

name => '"$GOLDENGATE_ADMIN_USER"."OGG_HB"');

END;

/



SPOOL OFF;

EOF

echo "-------------------------Heartbeat setup completed --------------------------------"

echo ""

echo ""



echo "----------creating parameterfile for $GOLDENGATE_ADMIN_USER FOR EXTRACT---------"

echo "TABLE GGADMIN.HEARTBEAT," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_EXTRACT.prm

echo "TOKENS (" >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_EXTRACT.prm

echo "CAPGROUP = @GETENV (\"GGENVIRONMENT\", \"GROUPNAME\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_EXTRACT.prm

echo "CAPTIME = @DATE (\"YYYY-MM-DD HH:MI:SS.FFFFFF\",\"JTS\",@GETENV (\"JULIANTIMESTAMP\"))," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_EXTRACT.prm

echo "EDDLDELTASTATS = @GETENV (\"DELTASTATS\", \"DDL\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_EXTRACT.prm

echo "EDMLDELTASTATS = @GETENV (\"DELTASTATS\", \"DML\")" >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_EXTRACT.prm

echo ");" >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_EXTRACT.prm

echo ""

echo ""

echo "----EXTRACT Heartbeat mapping parameter file for $GOLDENGATE_ADMIN_USER is ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_EXTRACT.prm--"

echo ""

echo ""

cat ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_EXTRACT.prm

echo "----------creating parameterfile for $GOLDENGATE_ADMIN_USER FOR PUMP---------"

echo "TABLE GGADMIN.heartbeat," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_PUMP.prm

echo "TOKENS (" >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_PUMP.prm

echo "PMPGROUP = @GETENV (\"GGENVIRONMENT\",\"GROUPNAME\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_PUMP.prm

echo "PMPTIME = @DATE (\"YYYY-MM-DD HH:MI:SS.FFFFFF\",\"JTS\",@GETENV (\"JULIANTIMESTAMP\")));" >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_PUMP.prm

echo ""

echo ""

echo "----PUMP Heartbeat mapping parameter file for $GOLDENGATE_ADMIN_USER is ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_PUMP.prm--"

echo ""

echo ""

cat ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_PUMP.prm

echo "----------creating parameterfile for $GOLDENGATE_ADMIN_USER FOR REPLICAT---------"

echo "MAP GGADMIN.HEARTBEAT, TARGET GGADMIN.GGS_HEARTBEAT," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "KEYCOLS (DELGROUP)," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "INSERTMISSINGUPDATES," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "COLMAP (USEDEFAULTS," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "ID = 0," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "SOURCE_COMMIT = @GETENV (\"GGHEADER\", \"COMMITTIMESTAMP\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "EXTRACT_NAME = @TOKEN (\"CAPGROUP\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "CAPTIME = @TOKEN (\"CAPTIME\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "PMPGROUP = @TOKEN (\"PMPGROUP\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "PMPTIME = @TOKEN (\"PMPTIME\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "DELGROUP = @GETENV (\"GGENVIRONMENT\", \"GROUPNAME\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "DELTIME = @DATE (\"YYYY-MM-DD HH:MI:SS.FFFFFF\",\"JTS\",@GETENV (\"JULIANTIMESTAMP\"))," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "EDDLDELTASTATS = @TOKEN (\"EDDLDELTASTATS\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "EDMLDELTASTATS = @TOKEN (\"EDMLDELTASTATS\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "RDDLDELTASTATS = @GETENV (\"DELTASTATS\", \"DDL\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "RDMLDELTASTATS = @GETENV (\"DELTASTATS\", \"DML\")" >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo ");" >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "----------Mapping into history table------------------" >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "MAP GGADMIN.HEARTBEAT, TARGET GGADMIN.GGS_HEARTBEAT_HISTORY," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "KEYCOLS (ID)," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "INSERTALLRECORDS," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "COLMAP (USEDEFAULTS," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "ID = 0," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "SOURCE_COMMIT = @GETENV (\"GGHEADER\", \"COMMITTIMESTAMP\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "EXTRACT_NAME = @TOKEN (\"CAPGROUP\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "CAPTIME = @TOKEN (\"CAPTIME\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "PMPGROUP = @TOKEN (\"PMPGROUP\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "PMPTIME = @TOKEN (\"PMPTIME\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "DELGROUP = @GETENV (\"GGENVIRONMENT\", \"GROUPNAME\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "DELTIME = @DATE (\"YYYY-MM-DD HH:MI:SS.FFFFFF\",\"JTS\",@GETENV (\"JULIANTIMESTAMP\"))," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "EDDLDELTASTATS = @TOKEN (\"EDDLDELTASTATS\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "EDMLDELTASTATS = @TOKEN (\"EDMLDELTASTATS\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "RDDLDELTASTATS = @GETENV (\"DELTASTATS\", \"DDL\")," >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo "RDMLDELTASTATS = @GETENV (\"DELTASTATS\", \"DML\")" >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo ");" >> ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

echo ""

echo ""

echo "----REPLICAT Heartbeat mapping parameter file for $GOLDENGATE_ADMIN_USER is ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm--"

echo ""

echo ""

cat ${GGHOME}/dirprm/${GOLDENGATE_ADMIN_USER}_HB_REPLICAT.prm

#########################################################################################################

echo "------------Adding Goldengate Admin schema $GOLDENGATE_ADMIN_USER to GLOBALS file ----------------"

echo "GGSCHEMA $GOLDENGATE_ADMIN_USER" >> ${GGHOME}/GLOBALS

echo "------------Creating Checkpoint Table for $GOLDENGATE_ADMIN_USER ---------------------------------"

echo "Enter New checkpoint table for $GOLDENGATE_ADMIN_USER :\c"

read CKPTABLE

echo "Enter Goldengate admin password :\c"

read GOLDENGATE_ADMIN_USER_PWD

$GGHOME/ggsci << EOF

DBLOGIN USERID $GOLDENGATE_ADMIN_USER,PASSWORD $GOLDENGATE_ADMIN_USER_PWD

ADD CHECKPOINTTABLE $GOLDENGATE_ADMIN_USER.$CKPTABLE

exit

EOF

echo ""

echo ""

echo "------------Append $GOLDENGATE_ADMIN_USER.$CKPTABLE to GLOBALS file -------------------------------"

echo "CHECKPOINTTABLE $GOLDENGATE_ADMIN_USER.$CKPTABLE" >> ${GGHOME}/GLOBALS

cat ${GGHOME}/GLOBALS

###############################################################################
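The substr/instr arithmetic in the heartbeat trigger above converts the text form of a timestamp difference (a day-to-second interval) into seconds. The same conversion can be sketched in shell as a sanity check, assuming Oracle's default interval text format `+DDDDDDDDD HH:MI:SS.FFFFFF`:

```shell
# Convert an Oracle day-to-second interval string to seconds,
# mirroring the substr/instr arithmetic in the heartbeat trigger.
# Assumes the format "+DDDDDDDDD HH:MI:SS.FFFFFF".
interval_to_seconds() {
  echo "$1" | awk '{
    split($2, t, "[:.]")   # t[1]=HH t[2]=MI t[3]=SS t[4]=FFFFFF
    printf "%.6f\n", ($1 + 0) * 86400 + t[1] * 3600 + t[2] * 60 + t[3] + t[4] / 1000000
  }'
}

interval_to_seconds "+000000000 00:01:23.456789"   # a lag of 1 min 23.456789 sec
```

This is only a check on the arithmetic; the trigger itself does the real work inside the database.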

Friday, April 26, 2013

Goldengate ArchiveLog delete issue when Extract is registered.

Links to this post
When you register an Extract on the source database with log retention, it marks the SCN from that point.

2013-02-07 21:53:51 INFO OGG-01749 Oracle GoldenGate Command Interpreter for Oracle: Successfully registered EXTRACT XCOMTE to start managing log retention at SCN 10855583972257.

After the Extract is registered, attempts to delete archived logs using RMAN fail with the error below.

RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process archived log file

This mostly occurs on versions 10.2.0.4 through 11.2.0.2; it can also be caused by orphan LogMiner sessions.

You may refer to the following MOS note:
Why is RMAN Not Purging Archive Logs After Backup Since OGG was Configured? [ID 1351352.1]

It is also recommended to apply the patch on the database.

Important: To support RMAN log retention on Oracle RAC, you must download and install the database patch that is provided in BUGFIX 11879974, before you add the Extract groups.

Wednesday, April 24, 2013

Incorrect Parallel degree calculated when Auto DOP used in 11.2.0.3 Exadata.

Links to this post

This may be a bug in 11.2.0.3 on Exadata: Auto DOP calculates an incorrect parallel degree when object-level parallel hints are used with Auto DOP.



SQL> explain plan for
SELECT /*+PARALLEL(E)*/ SITE_ID, ITEM_NUMBER, SERIAL_NUMBER, PORT_NUMBER, EQUIPMENT_ADDRESSABLE, TRIM(STATUS_DATE) 
STATUS_DATE, QUALITY_ASSURANCE_CODE, TRIM(QUALITY_ASSURANCE_DATE) QUALITY_ASSURANCE_DATE, EQUIPMENT_ADDRESS,
EQUIPMENT_OVERRIDE_ACTIVE, INITIALIZE_REQUIRED, PARENTAL_CODE, TEMP_ENABLED, TRIM(TRANSMISSION_DATE) TRANSMISSION_DATE, 
DNS_NAME, IP_ADDRESS, FQDN, LOCAL_STATUS, TRIM( LOCAL_STATUS_DATE) LOCAL_STATUS_DATE, SERVER_ID, SERVER_STATUS, 
TRIM(SERVER_STATUS_DATE) SERVER_STATUS_DATE, EQUIP_DTL_STATUS, PORT_CATEGORY_CODE, HEADEND, ACCOUNT_NUMBER, 
SUB_ACCOUNT_ID, VIDEO_RATING_CODE, PORT_TYPE, SERVICE_CATEGORY_CODE, SERVICE_OCCURRENCE, CREATED_USER_ID, TRIM(DATE_CREATED)
DATE_CREATED, LAST_CHANGE_USER_ID, TRIM(LAST_CHANGE_DATE) LAST_CHANGE_DATE, CABLE_CARD_ID, JOURNAL_DATE 
FROM CLE_DUMMY_DETAIL E 
WHERE JOURNAL_DATE >= (SELECT LAST_RUN_DATE FROM ETL_SITE_RUN_DATE@ODS.WORLD 
WHERE TABLE_NAME = 'xyz' AND SITE_NAME = 'CLE' ) 
AND JOURNAL_DATE <= (SELECT HEART_BEAT_DATE FROM ETL_SITE_RUN_DATE@ODS.WORLD 
WHERE TABLE_NAME = 'xyz' AND SITE_NAME = 'CLE' ) 
;

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 350686579

-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ/Ins |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1158 | 203K| 18 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 1158 | 203K| 16 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | PX BLOCK ITERATOR |  | 1158 | 203K| 16 (0)| 00:00:01 | Q1,00 | PCWC | |
|* 4 | TABLE ACCESS FULL| CLE_CDO1CPP | 1158 | 203K| 16 (0)| 00:00:01 | Q1,00 | PCWP | |
| 5 | REMOTE | ETL_SITE_RUN_DATE | 1 | 47 | 1 (0)| 00:00:01 | Q1,00 | PCWP | |
| 6 | REMOTE | ETL_SITE_RUN_DATE | 1 | 47 | 1 (0)| 00:00:01 | PODS_~ | R->S | |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("JOURNAL_DATE">= (SELECT "LAST_RUN_DATE" FROM "ETL_SITE_RUN_DATE" WHERE "SITE_NAME"='CLE' AND
"TABLE_NAME"='xyz') AND "JOURNAL_DATE"<= (SELECT "HEART_BEAT_DATE" FROM "ETL_SITE_RUN_DATE"
WHERE "SITE_NAME"='CLE' AND "TABLE_NAME"='xyz'))

Remote SQL Information (identified by operation id):
----------------------------------------------------

6 - SELECT "SITE_NAME","TABLE_NAME","HEART_BEAT_DATE" FROM "ETL_SITE_RUN_DATE" "ETL_SITE_RUN_DATE" WHERE
"SITE_NAME"='CLE' AND "TABLE_NAME"='xyz' (accessing 'PODS_ETL_CONTROL.WORLD' )


Note
-----
- Degree of Parallelism is 32767 because of hint

31 rows selected.

SQL> show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- 
parallel_adaptive_multi_user    FALSE
parallel_automatic_tuning      FALSE
parallel_degree_limit       8
parallel_degree_policy      AUTO
parallel_execution_message_size  16384
parallel_force_local       FALSE
parallel_io_cap_enabled      FALSE
parallel_max_servers     196
parallel_min_percent     0
parallel_min_servers     2
parallel_min_time_threshold     60
parallel_server           TRUE
parallel_server_instances    5
parallel_servers_target    90
parallel_threads_per_cpu    2
recovery_parallelism     0


Also tested without a DB link.

-- Testing this on 11.2.0.3

SQL> select /*+ Parallel(E) */ * from foo e;

Execution Plan
----------------------------------------------------------
Plan hash value: 3135133324

--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time| TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 18M| 1708M| 9922 (1)| 00:00:01| | | |
| 1 | PX COORDINATOR | | | | || | | |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 18M| 1708M| 9922 (1)| 00:00:01| Q1,00 | P->S | QC (RAND) |
| 3 | PX BLOCK ITERATOR | | 18M| 1708M| 9922 (1)| 00:00:01| Q1,00 | PCWC | |
| 4 | TABLE ACCESS FULL| FOO | 18M| 1708M| 9922 (1)| 00:00:01| Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------

Note
-----
- Degree of Parallelism is 32767 because of hint

SQL> connect / as sysdba
Connected.
SQL> select count(*) from foo;

COUNT(*)
----------
18472960

parallel_adaptive_multi_user     TRUE
parallel_automatic_tuning      FALSE
parallel_degree_limit       2
parallel_degree_policy      AUTO
parallel_execution_message_size    16384
parallel_force_local       FALSE
parallel_instance_group 
parallel_io_cap_enabled      FALSE
parallel_max_servers       135
parallel_min_percent       0
parallel_min_servers       0
parallel_min_time_threshold     1
parallel_server        FALSE
parallel_server_instances      1
parallel_servers_target      64
parallel_threads_per_cpu      2
recovery_parallelism       0



comparison on 11.2.0.2 and 11.2.0.3

----test case 
drop table foo;
alter session set parallel_degree_policy = auto;
alter session set parallel_degree_limit = 2;
alter session set parallel_min_time_threshold = 1;
create table foo as select * from all_objects where rownum < 16;
alter table foo parallel 6;
set autotrace traceonly explain
select /*+ Parallel(e) */ * from foo e;

--------
FOO has a default DOP of 6

11.2.0.3 -- select /*+ Parallel(a) */ * from foo a; --- Degree of Parallelism is 32767 because of hint
11.2.0.2 -- select /*+ Parallel(a) */ * from foo a; -- automatic DOP: Computed Degree of Parallelism is 1
11.2.0.3 select /*+ Parallel */ * from foo; -- - automatic DOP: Computed Degree of Parallelism is 2
11.2.0.2 select /*+ Parallel */ * from foo; -- - automatic DOP: Computed Degree of Parallelism is 2
11.2.0.3 select /*+ Parallel(20) */ * from foo; -- - Degree of Parallelism is 20 because of hint
11.2.0.2 select /*+ Parallel(20) */ * from foo; -- - Degree of Parallelism is 20 because of hint


Friday, April 5, 2013

Goldengate Mapping and Transformation for ETL

Links to this post
There are situations where you want to take advantage of GoldenGate's ability to map and transform data into the shape you want on the target.
We had one such situation: the source table has 90+ columns, about 5 million rows get replicated per day, and the target table was not partitioned.

So we partitioned the target table, and not just partitioned it: we sub-partitioned it.

The table is first range-partitioned on a date column using the new 11g interval partitioning, and
then sub-partitioned by site ID, which is in turn a subset of the account number. The account number column is a VARCHAR2 holding a roughly 20-digit number, so the site ID is parsed from the first 3 digits of the account number.
In a nutshell, we are decomposing the account number on the target.
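The decomposition itself is trivial; a one-line sketch in shell (the sample account number below is made up):

```shell
# Derive the site ID from the first 3 characters of the account number,
# as the GoldenGate mapping later does with @STREXT (ACC_NUM, 1, 3).
acc_num="214000123456789012"          # hypothetical 18-digit account number
site_id=$(printf '%.3s' "$acc_num")   # first 3 characters
echo "$site_id"
```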

In addition to this, we added three extra columns:

R_TGT_CREATED_DT = populated with the target's current timestamp when the record is first inserted on the target.
R_TGT_UPDATED_DT = populated with the target's current timestamp whenever the record is inserted or updated on the target.
R_ACTION_CD = the type of DML (insert/update/delete) is captured in this column.

In addition to this, we want to preserve the rows that get deleted in a separate partitioned archive table,
so we have an archive table called ${TABLE_NAME}_DEL (pardon the Unix-style variable as a placeholder for the table name).

Note: as the source and target DDL are not identical, you must use the defgen utility to generate source table definitions. The first time, you must load data using an initial load, so rows fall into the right partitions and the extra mapping columns are populated at the same time.
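As a sketch, a minimal defgen parameter file for this table might look like the following; the file names and credentials here are assumptions for illustration, not the ones we used:

```shell
# Write a minimal defgen parameter file (paths/credentials are hypothetical).
# defgen generates source table definitions that the replicat side
# references with SOURCEDEFS.
cat > /tmp/defgen_dummyevent.prm <<'EOF'
DEFSFILE ./dirdef/dummyevent.def
USERID ggadmin, PASSWORD ggadmin
TABLE ICBS_ADMIN.DUMMYEVENT;
EOF
# From the GoldenGate home you would then run:
#   ./defgen paramfile /tmp/defgen_dummyevent.prm
cat /tmp/defgen_dummyevent.prm
```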

Note: when you want to use INSERTDELETES on the target side, you must use NOCOMPRESSDELETES on the source.
DDL of Source table.

CREATE TABLE DUMMYEVENT
(
  ACC_NUM                    VARCHAR2(20 BYTE) NOT NULL,
  ORIGINAL_ACC_NUM           VARCHAR2(20 BYTE),
  RULE_NUMBER                    NUMBER(1),
  -----few columns Trim. 
  PREVIOUS_EVENT_REF             VARCHAR2(16 BYTE),
  PREVIOUS_EVENT_SEQ             NUMBER(9),
  PRODUCT_SEQ                    NUMBER(9),
  TWIN_EVENT_BOO                 VARCHAR2(1 BYTE)
  )

DDL of Target table.

CREATE TABLE DUMMYEVENT
( SITE_ID                        NUMBER(3),
  ACC_NUM                    VARCHAR2(20 BYTE) NOT NULL,
  ORIGINAL_ACC_NUM           VARCHAR2(20 BYTE),
  RULE_NUMBER                    NUMBER(1),
 -----few columns Trim. 
  PREVIOUS_EVENT_REF             VARCHAR2(16 BYTE),
  PREVIOUS_EVENT_SEQ             NUMBER(9),
  PRODUCT_SEQ                    NUMBER(9),
  TWIN_EVENT_BOO                 VARCHAR2(1 BYTE),
  R_TGT_CREATED_DT  DATE,
  R_TGT_UPDATED_DT  DATE,
  R_ACTION_CD       VARCHAR2(100 BYTE)
)
TABLESPACE DUMMY_ADMIN
PARTITION BY RANGE (ACC_NUM) 
INTERVAL(NUMTODSINTERVAL(1, 'DAY')) 
subpartition by list (SITE_ID)
(PARTITION DUMMYEVENT1 VALUES LESS THAN (TO_DATE('20-02-2013', 'DD-MM-YYYY')) 
(
SUBPARTITION CO_1 VALUES (1),
SUBPARTITION CO_126 VALUES (126),
SUBPARTITION CO_131 VALUES (131),
SUBPARTITION CO_132 VALUES (132),
SUBPARTITION CO_135 VALUES (135),
SUBPARTITION CO_182 VALUES (182),
SUBPARTITION CO_186 VALUES (186),
SUBPARTITION CO_214 VALUES (214),
SUBPARTITION CO_215 VALUES (215),
SUBPARTITION CO_216 VALUES (216),
SUBPARTITION CO_238 VALUES (238),
SUBPARTITION CO_239 VALUES (239),
SUBPARTITION CO_333 VALUES (333),
SUBPARTITION CO_334 VALUES (334),
SUBPARTITION CO_342 VALUES (342),
SUBPARTITION CO_436 VALUES (436),
SUBPARTITION CO_476 VALUES (476),
SUBPARTITION CO_477 VALUES (477),
SUBPARTITION CO_541 VALUES (541),
SUBPARTITION CO_580 VALUES (580),
SUBPARTITION CO_609 VALUES (609),
SUBPARTITION CO_UK VALUES (DEFAULT)
  )
);

---ARCHIVED DELETE TABLE TO PRESERVE DELETED ROWS.

CREATE TABLE IDENTITY.DUMMYEVENT_DEL
( SITE_ID                        NUMBER(3),
  ACC_NUM                    VARCHAR2(20 BYTE) NOT NULL,
  ORIGINAL_ACC_NUM           VARCHAR2(20 BYTE),
  RULE_NUMBER                    NUMBER(1),
  -----few columns Trim. 
  PREVIOUS_EVENT_REF             VARCHAR2(16 BYTE),
  PREVIOUS_EVENT_SEQ             NUMBER(9),
  PRODUCT_SEQ                    NUMBER(9),
  TWIN_EVENT_BOO                 VARCHAR2(1 BYTE),
  R_TGT_CREATED_DT  DATE,
  R_TGT_UPDATED_DT  DATE,
  R_ACTION_CD       VARCHAR2(100 BYTE)
)
TABLESPACE IDM_DATA
PARTITION BY RANGE (R_TGT_UPDATED_DT) 
INTERVAL(NUMTOYMINTERVAL(1, 'MONTH')) 
(
PARTITION costed1 VALUES LESS THAN (TO_DATE('20-02-2013', 'DD-MM-YYYY')) TABLESPACE IDM_DATA
);

---Goldengate Mapping On source

Please keep in mind that you must put NOCOMPRESSDELETES in the source parameter file.

TABLE ICBS_ADMIN.DUMMYEVENT,
KEYCOLS(ACC_NUM);

---Goldengate Mapping On Target
Please keep in mind that you need to put ALLOWDUPTARGETMAP in the target mapping file.

---------------------------------------------------------------------------------
---------------------------------------------------------------------------------
--THIS IS FOR INSERT AND DELETE RECORD ONLY. 
ALLOWDUPTARGETMAP
GETINSERTS
GETDELETES
IGNOREUPDATES
MAP ICBS_ADMIN.DUMMYEVENT, TARGET ICBS_ADMIN.DUMMYEVENT,
KEYCOLS(ACC_NUM),
INSERTMISSINGUPDATES,
COLMAP (USEDEFAULTS,
SITE_ID= @STREXT (ACC_NUM, 1, 3),
R_TGT_CREATED_DT= @DATENOW (),
R_TGT_UPDATED_DT= @DATENOW (),
R_ACTION_CD= @GETENV ("GGHEADER","OPTYPE")
);
--THIS IS FOR UPDATE RECORD ONLY.
GETUPDATES
IGNOREINSERTS
IGNOREDELETES
MAP ICBS_ADMIN.DUMMYEVENT, TARGET ICBS_ADMIN.DUMMYEVENT,
KEYCOLS(ACC_NUM),
INSERTMISSINGUPDATES,
COLMAP (USEDEFAULTS,
SITE_ID= @STREXT (ACC_NUM, 1, 3),
R_TGT_UPDATED_DT= @DATENOW (),
R_ACTION_CD= @GETENV ("GGHEADER","OPTYPE")
);
---------------------------------------------------------------------------------
--To preserve deletes. 
---------------------------------------------------------------------------------
GETDELETES
IGNOREINSERTS
IGNOREUPDATES
INSERTDELETES
MAP ICBS_ADMIN.DUMMYEVENT, TARGET ICBS_ADMIN.DUMMYEVENT_DEL,
KEYCOLS(ACC_NUM),
INSERTMISSINGUPDATES,
COLMAP (USEDEFAULTS,
SITE_ID= @STREXT (ACC_NUM, 1, 3),
R_TGT_CREATED_DT= @DATENOW (),
R_TGT_UPDATED_DT= @DATENOW (),
R_ACTION_CD= @GETENV ("GGHEADER","OPTYPE")
);

Saturday, February 2, 2013

Remote sqlplus Over SSH

Links to this post
This example shows how to run sqlplus over SSH.

/usr/bin/ssh -qn catdummyxyz 'ORACLE_HOME=/u00/app/oracle/product/10.2.0/DB;export ORACLE_HOME;$ORACLE_HOME/bin/sqlplus -s sys/dummy@ASM as sysdba << EndOfFile
select dummy from dual;
select instance_number ,INSTANCE_NAME from v\$instance;
exit;
EndOfFile'

You can also export ORACLE_SID if you want.
/usr/bin/ssh -qn catdummxyz 'ORACLE_SID=+ASM2;export ORACLE_SID;ORACLE_HOME=/u00/app/oracle/product/10.2.0/DB;export ORACLE_HOME;$ORACLE_HOME/bin/sqlplus -s sys/dummy@ASM as sysdba << EndOfFile
select dummy from dual;
select instance_number ,INSTANCE_NAME from v\$instance;
exit;
EndOfFile'

Thursday, January 3, 2013

Restore Database RAC from multiple Instances

Links to this post
This is a restore and recovery of a RAC database running on 3 nodes.

While restoring and recovering the database, we allocate channels on all instances to balance the load.

The Oracle Database version is 11.2.0.3.

Steps to perform restore & recovery.

Restore spfile
Restore controlfile
Bring all instances to mount state (do not set cluster_database=false)
Allocate channel using multiple instances
Restore database
Recover database
Open the database in resetlogs in one node
Open the database on other nodes

---------RESTORE SPFILE 

TEST271 | DUMMYGG1 | /export/home/oracle/scripts/rman
> rman target / catalog rman/RMAN@catalogdb

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Jan 3 12:06:05 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)
connected to recovery catalog database

RMAN> set dbid=90275615

executing command: SET DBID
database name is "DUMMYGG" and DBID is 90275615

RMAN> list backup of spfile;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
7639172 Incr 0  80.00K     DISK        00:00:01     02-JAN-13
        BP Key: 7639190   Status: AVAILABLE  Compressed: NO  Tag: TAG20130102T121926
        Piece Name: /ORAGG/backup/DUMMYGG15_12nudf6q_1_1
  SPFILE Included: Modification time: 02-JAN-13
  SPFILE db_unique_name: DUMMYGG

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
7640203 Incr 0  80.00K     DISK        00:00:01     02-JAN-13
        BP Key: 7640224   Status: AVAILABLE  Compressed: NO  Tag: TAG20130102T124658
        Piece Name: /ORAGG/backup/DUMMYGG05_2gnudgqg_1_1
  SPFILE Included: Modification time: 02-JAN-13
  SPFILE db_unique_name: DUMMYGG

RMAN> STARTUP FORCE NOMOUNT;
  
RMAN> restore spfile from '/ORAGG/backup/DUMMYGG05_2gnudgqg_1_1';

Starting restore at 03-JAN-13
using channel ORA_DISK_1

channel ORA_DISK_1: restoring spfile from AUTOBACKUP /ORAGG/backup/DUMMYGG05_2gnudgqg_1_1
channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 03-JAN-13

RMAN> exit

Now copy the spfile to all nodes and rename it accordingly.


------Restore controlfile

> rman target / catalog rman/RMAN@catalogdb

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Jan 3 12:14:22 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DUMMYGG (not mounted)
connected to recovery catalog database

RMAN> set dbid=90275615;

executing command: SET DBID
database name is "DUMMYGG" and DBID is 90275615

RMAN> restore controlfile;

Starting restore at 03-JAN-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=156 instance=DUMMYGG1 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: copied control file copy
input file name=/u00/app/oracle/diag/control.ctl
output file name=+TESTGG/DUMMYGG/controlfile/current.958.803736875
Finished restore at 03-JAN-13

----------Mount all instances.

Mount the database on all nodes and make sure the cluster_database parameter is set to TRUE.

----------Restore database.

rman target / catalog rman/RMAN@catalogdb

run {
sql 'alter system set "_backup_disk_bufcnt"=64 scope=memory';
sql 'alter system set "_backup_disk_bufsz"=1048576 scope=memory';
sql 'alter system set "_backup_file_bufcnt"=64 scope=memory';
sql 'alter system set "_backup_file_bufsz"=1048576 scope=memory';
allocate channel c01 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG01_%U' CONNECT 'sys/SYSTEM@DUMMYGG1';
allocate channel c02 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG02_%U' CONNECT 'sys/SYSTEM@DUMMYGG1';
allocate channel c03 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG03_%U' CONNECT 'sys/SYSTEM@DUMMYGG1';
allocate channel c04 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG04_%U' CONNECT 'sys/SYSTEM@DUMMYGG1';
allocate channel c05 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG05_%U' CONNECT 'sys/SYSTEM@DUMMYGG1';
allocate channel c06 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG06_%U' CONNECT 'sys/SYSTEM@DUMMYGG1';
allocate channel c07 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG07_%U' CONNECT 'sys/SYSTEM@DUMMYGG2';
allocate channel c08 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG08_%U' CONNECT 'sys/SYSTEM@DUMMYGG2';
allocate channel c09 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG09_%U' CONNECT 'sys/SYSTEM@DUMMYGG2';
allocate channel c10 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG10_%U' CONNECT 'sys/SYSTEM@DUMMYGG2';
allocate channel c11 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG11_%U' CONNECT 'sys/SYSTEM@DUMMYGG2';
allocate channel c12 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG12_%U' CONNECT 'sys/SYSTEM@DUMMYGG2';
allocate channel c13 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG13_%U' CONNECT 'sys/SYSTEM@DUMMYGG3';
allocate channel c14 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG14_%U' CONNECT 'sys/SYSTEM@DUMMYGG3';
allocate channel c15 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG15_%U' CONNECT 'sys/SYSTEM@DUMMYGG3';
allocate channel c16 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG16_%U' CONNECT 'sys/SYSTEM@DUMMYGG3';
allocate channel c17 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG17_%U' CONNECT 'sys/SYSTEM@DUMMYGG3';
allocate channel c18 DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG18_%U' CONNECT 'sys/SYSTEM@DUMMYGG3';
restore database;
release channel c01;
release channel c02;
release channel c03;
release channel c04;
release channel c05;
release channel c06;
release channel c07;
release channel c08;
release channel c09;
release channel c10;
release channel c11;
release channel c12;
release channel c13;
release channel c14;
release channel c15;
release channel c16;
release channel c17;
release channel c18;
}
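The eighteen allocate-channel commands above are repetitive and easy to mistype; they can be generated with a small shell loop instead. The six-channels-per-instance assignment mirrors the run block above, and the paths and credentials are the same placeholders:

```shell
# Emit the "allocate channel" commands for an 18-channel, 3-node restore.
# Channels c01-c06 go to DUMMYGG1, c07-c12 to DUMMYGG2, c13-c18 to DUMMYGG3.
cmds=$(for i in $(seq 1 18); do
  node=$(( (i - 1) / 6 + 1 ))
  printf "allocate channel c%02d DEVICE TYPE DISK FORMAT '/ORAGG/backup/DUMMYGG%02d_%%U' CONNECT 'sys/SYSTEM@DUMMYGG%d';\n" "$i" "$i" "$node"
done)
printf '%s\n' "$cmds"
```

Paste the output into the run block (and generate the matching release commands the same way).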


---------recover database.

RMAN> run {
sql 'alter system set "_backup_disk_bufcnt"=64 scope=memory';
sql 'alter system set "_backup_disk_bufsz"=1048576 scope=memory';
sql 'alter system set "_backup_file_bufcnt"=64 scope=memory';
sql 'alter system set "_backup_file_bufsz"=1048576 scope=memory';
allocate channel c01 DEVICE TYPE DISK FORMAT '/ORAGG/backup/' CONNECT 'sys/SYSTEM@DUMMYGG1';
allocate channel c07 DEVICE TYPE DISK FORMAT '/ORAGG/backup/' CONNECT 'sys/SYSTEM@DUMMYGG2';
allocate channel c13 DEVICE TYPE DISK FORMAT '/ORAGG/backup/' CONNECT 'sys/SYSTEM@DUMMYGG3';
set until time "to_date('2013-01-02:13:00:00', 'yyyy-mm-dd:hh24:mi:ss')";
recover database;
release channel c01;
release channel c07;
release channel c13;
}


---------Open the database with RESETLOGS on the first instance, and normally on the rest.

TEST271 | DUMMYGG1 | /export/home/oracle/scripts/rman/log
sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 3 12:46:30 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

12:47:01 @DUMMYGG1 SQL>alter database open resetlogs;

Database altered.

Elapsed: 00:00:14.39

TEST272 | DUMMYGG2 | /export/home/oracle
sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 3 12:47:43 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

12:47:43 @DUMMYGG2 SQL>alter database open;

Database altered.

Elapsed: 00:00:03.92
12:47:50 @DUMMYGG2 SQL>



---------output of restore

sql statement: alter system set "_backup_disk_bufcnt"=64 scope=memory

sql statement: alter system set "_backup_disk_bufsz"=1048576 scope=memory

sql statement: alter system set "_backup_file_bufcnt"=64 scope=memory

sql statement: alter system set "_backup_file_bufsz"=1048576 scope=memory

allocated channel: c01
channel c01: SID=222 instance=DUMMYGG1 device type=DISK

allocated channel: c02
channel c02: SID=5 instance=DUMMYGG1 device type=DISK

allocated channel: c03
channel c03: SID=68 instance=DUMMYGG1 device type=DISK

allocated channel: c04
channel c04: SID=99 instance=DUMMYGG1 device type=DISK

allocated channel: c05
channel c05: SID=129 instance=DUMMYGG1 device type=DISK

allocated channel: c06
channel c06: SID=162 instance=DUMMYGG1 device type=DISK

allocated channel: c07
channel c07: SID=67 instance=DUMMYGG2 device type=DISK

allocated channel: c08
channel c08: SID=161 instance=DUMMYGG2 device type=DISK

allocated channel: c09
channel c09: SID=191 instance=DUMMYGG2 device type=DISK

allocated channel: c10
channel c10: SID=222 instance=DUMMYGG2 device type=DISK

allocated channel: c11
channel c11: SID=5 instance=DUMMYGG2 device type=DISK

allocated channel: c12
channel c12: SID=38 instance=DUMMYGG2 device type=DISK

allocated channel: c13
channel c13: SID=134 instance=DUMMYGG3 device type=DISK

allocated channel: c14
channel c14: SID=72 instance=DUMMYGG3 device type=DISK

allocated channel: c15
channel c15: SID=133 instance=DUMMYGG3 device type=DISK

allocated channel: c16
channel c16: SID=194 instance=DUMMYGG3 device type=DISK

allocated channel: c17
channel c17: SID=10 instance=DUMMYGG3 device type=DISK

allocated channel: c18
channel c18: SID=74 instance=DUMMYGG3 device type=DISK

Starting restore at 03-JAN-13

channel c01: starting datafile backup set restore
channel c01: specifying datafile(s) to restore from backup set
channel c01: restoring datafile 00003 to +TESTGG/DUMMYGG/datafile/undotbs1.258.792518685
channel c01: reading from backup piece /ORAGG/backup/DUMMYGG05_22nudgq6_1_1
channel c02: starting datafile backup set restore
channel c02: specifying datafile(s) to restore from backup set
channel c02: restoring datafile 00005 to +TESTGG/DUMMYGG/datafile/undotbs2.264.792518777
channel c02: reading from backup piece /ORAGG/backup/DUMMYGG06_23nudgq6_1_1
channel c03: starting datafile backup set restore
channel c03: specifying datafile(s) to restore from backup set
channel c03: restoring datafile 00006 to +TESTGG/DUMMYGG/datafile/undotbs3.265.792518777
channel c03: reading from backup piece /ORAGG/backup/DUMMYGG07_24nudgq6_1_1
channel c04: starting datafile backup set restore
channel c04: specifying datafile(s) to restore from backup set
channel c04: restoring datafile 00013 to +TESTGG/DUMMYGG/datafile/c2oindx_tbs.974.799865727
channel c04: reading from backup piece /ORAGG/backup/DUMMYGG08_25nudgq8_1_1
channel c05: starting datafile backup set restore
channel c05: specifying datafile(s) to restore from backup set
channel c05: restoring datafile 00014 to +TESTGG/DUMMYGG/datafile/c2odata_tbs.939.799865745
channel c05: reading from backup piece /ORAGG/backup/DUMMYGG09_26nudgqa_1_1
channel c06: starting datafile backup set restore
channel c06: specifying datafile(s) to restore from backup set
channel c06: restoring datafile 00009 to +TESTGG/DUMMYGG/datafile/idm_data.344.793627825
channel c06: reading from backup piece /ORAGG/backup/DUMMYGG10_27nudgqa_1_1
channel c07: starting datafile backup set restore
channel c07: specifying datafile(s) to restore from backup set
channel c07: restoring datafile 00015 to +TESTGG/DUMMYGG/datafile/c2odata_tbs_reorg1.948.799865763
channel c07: reading from backup piece /ORAGG/backup/DUMMYGG14_2bnudgqc_1_1
channel c08: starting datafile backup set restore
channel c08: specifying datafile(s) to restore from backup set
channel c08: restoring datafile 00016 to +TESTGG/DUMMYGG/datafile/mvlog_data.735.799865763
channel c08: reading from backup piece /ORAGG/backup/DUMMYGG15_2cnudgqc_1_1
channel c09: starting datafile backup set restore
channel c09: specifying datafile(s) to restore from backup set
channel c09: restoring datafile 00017 to +TESTGG/DUMMYGG/datafile/c2odata_tbs_reorg0.305.799865765
channel c09: reading from backup piece /ORAGG/backup/DUMMYGG16_2dnudgqe_1_1
channel c10: starting datafile backup set restore
channel c10: specifying datafile(s) to restore from backup set
channel c10: restoring datafile 00004 to +TESTGG/DUMMYGG/datafile/users.259.792518685
channel c10: reading from backup piece /ORAGG/backup/DUMMYGG18_2fnudgqg_1_1
channel c11: starting datafile backup set restore
channel c11: specifying datafile(s) to restore from backup set
channel c11: restoring datafile 00007 to +TESTGG/DUMMYGG/datafile/gg_data.312.793368947
channel c11: reading from backup piece /ORAGG/backup/DUMMYGG11_28nudgqb_1_1
channel c12: starting datafile backup set restore
channel c12: specifying datafile(s) to restore from backup set
channel c12: restoring datafile 00011 to +TESTGG/DUMMYGG/datafile/soa_data.328.795
channel c12: reading from backup piece /ORAGG/backup/DUMMYGG13_2anudgqc_1_1
channel c13: starting datafile backup set restore
channel c13: specifying datafile(s) to restore from backup set
channel c13: restoring datafile 00001 to +TESTGG/DUMMYGG/datafile/system.256.79251
channel c13: reading from backup piece /ORAGG/backup/DUMMYGG12_29nudgqb_1_1
channel c14: starting datafile backup set restore
channel c14: specifying datafile(s) to restore from backup set
channel c14: restoring datafile 00012 to +TESTGG/DUMMYGG/datafile/taps_data.617.79
channel c14: reading from backup piece /ORAGG/backup/DUMMYGG03_20nudgq5_1_1
channel c15: starting datafile backup set restore
channel c15: specifying datafile(s) to restore from backup set
channel c15: restoring datafile 00002 to +TESTGG/DUMMYGG/datafile/sysaux.257.79251
channel c15: reading from backup piece /ORAGG/backup/DUMMYGG04_21nudgq6_1_1
channel c16: starting datafile backup set restore
channel c16: specifying datafile(s) to restore from backup set
channel c16: restoring datafile 00008 to +TESTGG/DUMMYGG/datafile/identity_data.34
channel c16: reading from backup piece /ORAGG/backup/DUMMYGG01_1unudgq5_1_1
channel c17: starting datafile backup set restore
channel c17: specifying datafile(s) to restore from backup set
channel c17: restoring datafile 00010 to +TESTGG/DUMMYGG/datafile/taps_data.329.79
channel c17: reading from backup piece /ORAGG/backup/DUMMYGG02_1vnudgq5_1_1
channel c07: piece handle=/ORAGG/backup/DUMMYGG14_2bnudgqc_1_1 tag=TAG20130102T124658
channel c07: restored backup piece 1
channel c07: restore complete, elapsed time: 00:00:05
channel c08: piece handle=/ORAGG/backup/DUMMYGG15_2cnudgqc_1_1 tag=TAG20130102T124658
channel c08: restored backup piece 1
channel c08: restore complete, elapsed time: 00:00:05
channel c09: piece handle=/ORAGG/backup/DUMMYGG16_2dnudgqe_1_1 tag=TAG20130102T124658
channel c09: restored backup piece 1
channel c09: restore complete, elapsed time: 00:00:05
channel c10: piece handle=/ORAGG/backup/DUMMYGG18_2fnudgqg_1_1 tag=TAG20130102T124658
channel c10: restored backup piece 1
channel c10: restore complete, elapsed time: 00:00:17
channel c11: piece handle=/ORAGG/backup/DUMMYGG11_28nudgqb_1_1 tag=TAG20130102T124658
channel c11: restored backup piece 1
channel c11: restore complete, elapsed time: 00:00:17
channel c12: piece handle=/ORAGG/backup/DUMMYGG13_2anudgqc_1_1 tag=TAG20130102T124658
channel c12: restored backup piece 1
channel c12: restore complete, elapsed time: 00:00:17
channel c13: piece handle=/ORAGG/backup/DUMMYGG12_29nudgqb_1_1 tag=TAG20130102T124658
channel c13: restored backup piece 1
channel c13: restore complete, elapsed time: 00:00:37
channel c06: piece handle=/ORAGG/backup/DUMMYGG10_27nudgqa_1_1 tag=TAG20130102T124658
channel c06: restored backup piece 1
channel c06: restore complete, elapsed time: 00:00:57
channel c01: piece handle=/ORAGG/backup/DUMMYGG05_22nudgq6_1_1 tag=TAG20130102T124658
channel c01: restored backup piece 1
channel c01: restore complete, elapsed time: 00:01:58
channel c02: piece handle=/ORAGG/backup/DUMMYGG06_23nudgq6_1_1 tag=TAG20130102T124658
channel c02: restored backup piece 1
channel c02: restore complete, elapsed time: 00:02:48
channel c03: piece handle=/ORAGG/backup/DUMMYGG07_24nudgq6_1_1 tag=TAG20130102T124658
channel c03: restored backup piece 1
channel c03: restore complete, elapsed time: 00:02:48
channel c04: piece handle=/ORAGG/backup/DUMMYGG08_25nudgq8_1_1 tag=TAG20130102T124658
channel c04: restored backup piece 1
channel c04: restore complete, elapsed time: 00:02:48
channel c05: piece handle=/ORAGG/backup/DUMMYGG09_26nudgqa_1_1 tag=TAG20130102T124658
channel c05: restored backup piece 1
channel c05: restore complete, elapsed time: 00:02:48
channel c15: piece handle=/ORAGG/backup/DUMMYGG04_21nudgq6_1_1 tag=TAG20130102T124658
channel c15: restored backup piece 1
channel c15: restore complete, elapsed time: 00:04:37
channel c14: piece handle=/ORAGG/backup/DUMMYGG03_20nudgq5_1_1 tag=TAG20130102T124658
channel c14: restored backup piece 1
channel c14: restore complete, elapsed time: 00:05:07
channel c17: piece handle=/ORAGG/backup/DUMMYGG02_1vnudgq5_1_1 tag=TAG20130102T124658
channel c17: restored backup piece 1
channel c17: restore complete, elapsed time: 00:10:38
channel c16: piece handle=/ORAGG/backup/DUMMYGG01_1unudgq5_1_1 tag=TAG20130102T124658
channel c16: restored backup piece 1
channel c16: restore complete, elapsed time: 00:10:48
Finished restore at 03-JAN-13

released channel: c01

released channel: c02

released channel: c03

released channel: c04

released channel: c05

released channel: c06

released channel: c07

released channel: c08

released channel: c09

released channel: c10

released channel: c11

released channel: c12

released channel: c13

released channel: c14

released channel: c15

released channel: c16

released channel: c17

released channel: c18

RMAN> quit


Recovery Manager complete.




------Output of the RECOVER DATABASE step.

sql statement: alter system set "_backup_disk_bufcnt"=64 scope=memory

sql statement: alter system set "_backup_disk_bufsz"=1048576 scope=memory

sql statement: alter system set "_backup_file_bufcnt"=64 scope=memory

sql statement: alter system set "_backup_file_bufsz"=1048576 scope=memory

allocated channel: c01
channel c01: SID=5 instance=DUMMYGG1 device type=DISK

allocated channel: c07
channel c07: SID=67 instance=DUMMYGG2 device type=DISK

allocated channel: c13
channel c13: SID=133 instance=DUMMYGG3 device type=DISK

executing command: SET until clause

Starting recover at 03-JAN-13

starting media recovery

archived log for thread 1 with sequence 1697 is already on disk as file +TESTGG/DUMMYGG/archivelog/2013_01_02/thread_1_seq_1697.776.803667609
archived log for thread 2 with sequence 1140 is already on disk as file +TESTGG/DUMMYGG/archivelog/2013_01_02/thread_2_seq_1140.1090.803668029
archived log for thread 3 with sequence 1195 is already on disk as file +TESTGG/DUMMYGG/archivelog/2013_01_02/thread_3_seq_1195.485.803669415
archived log file name=+TESTGG/DUMMYGG/archivelog/2013_01_02/thread_1_seq_1697.776.803667609 thread=1 sequence=1697
archived log file name=+TESTGG/DUMMYGG/archivelog/2013_01_02/thread_2_seq_1140.1090.803668029 thread=2 sequence=1140
archived log file name=+TESTGG/DUMMYGG/archivelog/2013_01_02/thread_3_seq_1195.485.803669415 thread=3 sequence=1195
media recovery complete, elapsed time: 00:00:01
Finished recover at 03-JAN-13

released channel: c01

released channel: c07

released channel: c13

RMAN>