Friday, April 26, 2013

GoldenGate archivelog delete issue when Extract is registered.

When you register an Extract on the source database using log retention, it marks the SCN from that point onward.

2013-02-07 21:53:51 INFO OGG-01749 Oracle GoldenGate Command Interpreter for Oracle: Successfully registered EXTRACT XCOMTE to start managing log retention at SCN 10855583972257.

After the Extract is registered, when you try to delete archived logs using RMAN you get the error below.

RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process archived log file

This mostly occurs on 10.2.0.4 through 11.2.0.2; it also happens because of orphaned LogMiner sessions.

You may refer to the MOS note below:
Why is RMAN Not Purging Archive Logs After Backup Since OGG was Configured? [ID 1351352.1]
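Before cleaning anything up, it helps to see which capture process is pinning the archived logs. A quick check from SQL*Plus and GGSCI (the login shown is illustrative, the Extract name XCOMTE is the one from the registration message above; orphaned LogMiner/capture sessions are handled as described in the MOS note):

-- capture processes created for log retention and the SCN they still require
SELECT capture_name, status, required_checkpoint_scn
FROM   dba_capture;

-- once log retention is no longer needed for a valid Extract, release it from GGSCI
GGSCI> DBLOGIN USERID ggadmin, PASSWORD *****
GGSCI> UNREGISTER EXTRACT XCOMTE LOGRETENTION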

It is also recommended to apply the patch on the database.

Important: To support RMAN log retention on Oracle RAC, you must download and install the database patch that is provided in BUGFIX 11879974, before you add the Extract groups.
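If you are not sure whether that fix is already present in the database home, one quick (illustrative) check is to search the OPatch inventory for the bug number:

$ $ORACLE_HOME/OPatch/opatch lsinventory | grep -i 11879974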

Wednesday, April 24, 2013

Incorrect parallel degree calculated when Auto DOP is used on 11.2.0.3 Exadata.


This may be a bug in 11.2.0.3 on Exadata: Auto DOP calculates an incorrect parallel degree when an object-level PARALLEL hint is used together with Auto DOP.



SQL> explain plan for
SELECT /*+PARALLEL(E)*/ SITE_ID, ITEM_NUMBER, SERIAL_NUMBER, PORT_NUMBER, EQUIPMENT_ADDRESSABLE, TRIM(STATUS_DATE) 
STATUS_DATE, QUALITY_ASSURANCE_CODE, TRIM(QUALITY_ASSURANCE_DATE) QUALITY_ASSURANCE_DATE, EQUIPMENT_ADDRESS,
EQUIPMENT_OVERRIDE_ACTIVE, INITIALIZE_REQUIRED, PARENTAL_CODE, TEMP_ENABLED, TRIM(TRANSMISSION_DATE) TRANSMISSION_DATE, 
DNS_NAME, IP_ADDRESS, FQDN, LOCAL_STATUS, TRIM( LOCAL_STATUS_DATE) LOCAL_STATUS_DATE, SERVER_ID, SERVER_STATUS, 
TRIM(SERVER_STATUS_DATE) SERVER_STATUS_DATE, EQUIP_DTL_STATUS, PORT_CATEGORY_CODE, HEADEND, ACCOUNT_NUMBER, 
SUB_ACCOUNT_ID, VIDEO_RATING_CODE, PORT_TYPE, SERVICE_CATEGORY_CODE, SERVICE_OCCURRENCE, CREATED_USER_ID, TRIM(DATE_CREATED)
DATE_CREATED, LAST_CHANGE_USER_ID, TRIM(LAST_CHANGE_DATE) LAST_CHANGE_DATE, CABLE_CARD_ID, JOURNAL_DATE 
FROM CLE_DUMMY_DETAIL E 
WHERE JOURNAL_DATE >= (SELECT LAST_RUN_DATE FROM ETL_SITE_RUN_DATE@ODS.WORLD 
WHERE TABLE_NAME = 'xyz' AND SITE_NAME = 'CLE' ) 
AND JOURNAL_DATE <= (SELECT HEART_BEAT_DATE FROM ETL_SITE_RUN_DATE@ODS.WORLD 
WHERE TABLE_NAME = 'xyz' AND SITE_NAME = 'CLE' ) 
;

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 350686579

-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ/Ins |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1158 | 203K| 18 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 1158 | 203K| 16 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | PX BLOCK ITERATOR |  | 1158 | 203K| 16 (0)| 00:00:01 | Q1,00 | PCWC | |
|* 4 | TABLE ACCESS FULL| CLE_CDO1CPP | 1158 | 203K| 16 (0)| 00:00:01 | Q1,00 | PCWP | |
| 5 | REMOTE | ETL_SITE_RUN_DATE | 1 | 47 | 1 (0)| 00:00:01 | Q1,00 | PCWP | |
| 6 | REMOTE | ETL_SITE_RUN_DATE | 1 | 47 | 1 (0)| 00:00:01 | PODS_~ | R->S | |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("JOURNAL_DATE">= (SELECT "LAST_RUN_DATE" FROM "ETL_SITE_RUN_DATE" WHERE "SITE_NAME"='CLE' AND
"TABLE_NAME"='xyz') AND "JOURNAL_DATE"<= (SELECT "HEART_BEAT_DATE" FROM "ETL_SITE_RUN_DATE"
WHERE "SITE_NAME"='CLE' AND "TABLE_NAME"='xyz'))

Remote SQL Information (identified by operation id):
----------------------------------------------------

6 - SELECT "SITE_NAME","TABLE_NAME","HEART_BEAT_DATE" FROM "ETL_SITE_RUN_DATE" "ETL_SITE_RUN_DATE" WHERE
"SITE_NAME"='CLE' AND "TABLE_NAME"='xyz' (accessing 'PODS_ETL_CONTROL.WORLD' )


Note
-----
- Degree of Parallelism is 32767 because of hint

31 rows selected.

SQL> show parameter parallel

NAME                                 VALUE
------------------------------------ -----------
parallel_adaptive_multi_user    FALSE
parallel_automatic_tuning      FALSE
parallel_degree_limit       8
parallel_degree_policy      AUTO
parallel_execution_message_size  16384
parallel_force_local       FALSE
parallel_io_cap_enabled      FALSE
parallel_max_servers     196
parallel_min_percent     0
parallel_min_servers     2
parallel_min_time_threshold     60
parallel_server           TRUE
parallel_server_instances    5
parallel_servers_target    90
parallel_threads_per_cpu    2
recovery_parallelism     0



Tested again without the DB link.

-- Testing this on 11.2.0.3

SQL> select /*+ Parallel(E) */ * from foo e;

Execution Plan
----------------------------------------------------------
Plan hash value: 3135133324

--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time| TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 18M| 1708M| 9922 (1)| 00:00:01| | | |
| 1 | PX COORDINATOR | | | | || | | |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 18M| 1708M| 9922 (1)| 00:00:01| Q1,00 | P->S | QC (RAND) |
| 3 | PX BLOCK ITERATOR | | 18M| 1708M| 9922 (1)| 00:00:01| Q1,00 | PCWC | |
| 4 | TABLE ACCESS FULL| FOO | 18M| 1708M| 9922 (1)| 00:00:01| Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------

Note
-----
- Degree of Parallelism is 32767 because of hint

SQL> connect / as sysdba
Connected.
SQL> select count(*) from foo;

COUNT(*)
----------
18472960

SQL> show parameter parallel

parallel_adaptive_multi_user     TRUE
parallel_automatic_tuning      FALSE
parallel_degree_limit       2
parallel_degree_policy      AUTO
parallel_execution_message_size    16384
parallel_force_local       FALSE
parallel_instance_group 
parallel_io_cap_enabled      FALSE
parallel_max_servers       135
parallel_min_percent       0
parallel_min_servers       0
parallel_min_time_threshold     1
parallel_server        FALSE
parallel_server_instances      1
parallel_servers_target      64
parallel_threads_per_cpu      2
recovery_parallelism       0
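The plan note only shows the degree the optimizer decided on; to see how many PX slaves a statement actually gets at run time, you can watch V$PX_SESSION from a second session while the hinted query is running. A quick sketch (not from the original test):

-- session 2, while the parallel query runs in session 1
SELECT qcsid, req_degree, degree, COUNT(*) AS slave_count
FROM   v$px_session
WHERE  degree IS NOT NULL
GROUP  BY qcsid, req_degree, degree;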



Comparison of the same test case on 11.2.0.2 and 11.2.0.3

----test case 
drop table foo;
alter session set parallel_degree_policy = auto;
alter session set parallel_degree_limit = 2;
alter session set parallel_min_time_threshold = 1;
create table foo as select * from all_objects where rownum < 16;
alter table foo parallel 6;
set autotrace traceonly explain
select /*+ Parallel(e) */ * from foo e;

--------
FOO has a default DOP of 6

11.2.0.3 -- select /*+ Parallel(a) */ * from foo a;  -- Degree of Parallelism is 32767 because of hint
11.2.0.2 -- select /*+ Parallel(a) */ * from foo a;  -- automatic DOP: Computed Degree of Parallelism is 1
11.2.0.3 -- select /*+ Parallel */ * from foo;       -- automatic DOP: Computed Degree of Parallelism is 2
11.2.0.2 -- select /*+ Parallel */ * from foo;       -- automatic DOP: Computed Degree of Parallelism is 2
11.2.0.3 -- select /*+ Parallel(20) */ * from foo;   -- Degree of Parallelism is 20 because of hint
11.2.0.2 -- select /*+ Parallel(20) */ * from foo;   -- Degree of Parallelism is 20 because of hint
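Based on the results above, a practical workaround on 11.2.0.3 until a fix is in place is to avoid the bare object-alias form of the hint and either use the statement-level PARALLEL hint (which still respects parallel_degree_limit) or name an explicit degree (the degree 8 below is only an example):

-- statement-level hint: Auto DOP computes the degree, capped by parallel_degree_limit
select /*+ PARALLEL */ * from foo;

-- explicit degree: avoids the mis-computed 32767 seen with the bare object-alias hint
select /*+ PARALLEL(e, 8) */ * from foo e;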


Friday, April 5, 2013

GoldenGate Mapping and Transformation for ETL

There are situations where you want to take advantage of GoldenGate's ability to map and transform data into the shape you want on the target.
We had one such situation: the source table is 90+ columns wide, about 5 million rows are replicated per day, and the target table was not partitioned.

So we partitioned the target table, and not just partitioned it, we sub-partitioned it as well.

The table is first partitioned on a date column using the new 11g interval partitioning, and
then sub-partitioned by site ID, which is itself a subset of the account number. The account number column is a VARCHAR2 holding a roughly 20-digit number, so the site ID is parsed from the first 3 digits of the account number.
In a nutshell, we are decomposing the account number on the target.
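For illustration, the decomposition that the Replicat performs later with @STREXT corresponds to something like this in SQL (table and column names as in the DDL further down):

-- the first 3 characters of the account number become the SITE_ID
SELECT TO_NUMBER(SUBSTR(acc_num, 1, 3)) AS site_id,
       acc_num
FROM   dummyevent;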

In addition to this, we added three extra columns:

R_TGT_CREATED_DT = set to the target's current timestamp when the record is first inserted on the target.
R_TGT_UPDATED_DT = set to the target's current timestamp whenever the record is inserted or updated on the target.
R_ACTION_CD = the type of DML that produced the row (insert/update/delete).

In addition, we want to preserve rows that get deleted in a separate partitioned archive table,
so we also have an archive table named ${TABLE_NAME}_DEL (pardon the Unix-style variable).

Note: As the source and target DDL are not identical, you must use the defgen utility to generate the source definitions. The first time, you must load the data with an initial load, so rows fall into the right partitions and the extra mapping columns are populated at the same time.
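A minimal defgen run looks roughly like this (parameter file name, login, and directories are illustrative); the generated definitions file is then copied to the target and referenced with SOURCEDEFS in the Replicat parameter file:

-- dirprm/defgen_dummyevent.prm on the source
DEFSFILE ./dirdef/dummyevent.def
USERID ggadmin, PASSWORD *****
TABLE ICBS_ADMIN.DUMMYEVENT;

-- run it from the GoldenGate home
$ ./defgen paramfile dirprm/defgen_dummyevent.prm

-- the Replicat parameter file on the target then points at the copied file
SOURCEDEFS ./dirdef/dummyevent.def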

Note: When you want to use INSERTDELETES on the target side, you must use NOCOMPRESSDELETES on the source.

DDL of the source table:

CREATE TABLE DUMMYEVENT
(
  ACC_NUM                    VARCHAR2(20 BYTE) NOT NULL,
  ORIGINAL_ACC_NUM           VARCHAR2(20 BYTE),
  RULE_NUMBER                    NUMBER(1),
  -----few columns Trim. 
  PREVIOUS_EVENT_REF             VARCHAR2(16 BYTE),
  PREVIOUS_EVENT_SEQ             NUMBER(9),
  PRODUCT_SEQ                    NUMBER(9),
  TWIN_EVENT_BOO                 VARCHAR2(1 BYTE)
  );

DDL of the target table:

CREATE TABLE DUMMYEVENT
( SITE_ID                        NUMBER(3),
  ACC_NUM                    VARCHAR2(20 BYTE) NOT NULL,
  ORIGINAL_ACC_NUM           VARCHAR2(20 BYTE),
  RULE_NUMBER                    NUMBER(1),
 -----few columns Trim. 
  PREVIOUS_EVENT_REF             VARCHAR2(16 BYTE),
  PREVIOUS_EVENT_SEQ             NUMBER(9),
  PRODUCT_SEQ                    NUMBER(9),
  TWIN_EVENT_BOO                 VARCHAR2(1 BYTE),
  R_TGT_CREATED_DT  DATE,
  R_TGT_UPDATED_DT  DATE,
  R_ACTION_CD       VARCHAR2(100 BYTE)
)
TABLESPACE DUMMY_ADMIN
PARTITION BY RANGE (ACC_NUM) 
INTERVAL(NUMTODSINTERVAL(1, 'DAY')) 
subpartition by list (SITE_ID)
(PARTITION DUMMYEVENT1 VALUES LESS THAN (TO_DATE('20-02-2013', 'DD-MM-YYYY')) 
(
SUBPARTITION CO_1 VALUES (1),
SUBPARTITION CO_126 VALUES (126),
SUBPARTITION CO_131 VALUES (131),
SUBPARTITION CO_132 VALUES (132),
SUBPARTITION CO_135 VALUES (135),
SUBPARTITION CO_182 VALUES (182),
SUBPARTITION CO_186 VALUES (186),
SUBPARTITION CO_214 VALUES (214),
SUBPARTITION CO_215 VALUES (215),
SUBPARTITION CO_216 VALUES (216),
SUBPARTITION CO_238 VALUES (238),
SUBPARTITION CO_239 VALUES (239),
SUBPARTITION CO_333 VALUES (333),
SUBPARTITION CO_334 VALUES (334),
SUBPARTITION CO_342 VALUES (342),
SUBPARTITION CO_436 VALUES (436),
SUBPARTITION CO_476 VALUES (476),
SUBPARTITION CO_477 VALUES (477),
SUBPARTITION CO_541 VALUES (541),
SUBPARTITION CO_580 VALUES (580),
SUBPARTITION CO_609 VALUES (609),
SUBPARTITION CO_UK VALUES (DEFAULT)
  )
);

---ARCHIVED DELETE TABLE TO PRESERVE DELETED ROWS.

CREATE TABLE IDENTITY.DUMMYEVENT_DEL
( SITE_ID                        NUMBER(3),
  ACC_NUM                    VARCHAR2(20 BYTE) NOT NULL,
  ORIGINAL_ACC_NUM           VARCHAR2(20 BYTE),
  RULE_NUMBER                    NUMBER(1),
  -----few columns Trim. 
  PREVIOUS_EVENT_REF             VARCHAR2(16 BYTE),
  PREVIOUS_EVENT_SEQ             NUMBER(9),
  PRODUCT_SEQ                    NUMBER(9),
  TWIN_EVENT_BOO                 VARCHAR2(1 BYTE),
  R_TGT_CREATED_DT  DATE,
  R_TGT_UPDATED_DT  DATE,
  R_ACTION_CD       VARCHAR2(100 BYTE)
)
TABLESPACE IDM_DATA
PARTITION BY RANGE (R_TGT_UPDATED_DT) 
INTERVAL(NUMTOYMINTERVAL(1, 'MONTH')) 
(
PARTITION costed1 VALUES LESS THAN (TO_DATE('20-02-2013', 'DD-MM-YYYY')) TABLESPACE IDM_DATA
);
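After the initial load (and once statistics are gathered), a quick dictionary check confirms that rows are landing in the expected interval partitions and SITE_ID subpartitions; a sketch, assuming the owner is ICBS_ADMIN as in the MAP statements below:

SELECT partition_name, subpartition_name, num_rows
FROM   dba_tab_subpartitions
WHERE  table_owner = 'ICBS_ADMIN'
AND    table_name  = 'DUMMYEVENT'
ORDER  BY partition_name, subpartition_position;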

---GoldenGate mapping on the source

Please keep in mind that you need to put NOCOMPRESSDELETES in the source (Extract) parameter file.

TABLE ICBS_ADMIN.DUMMYEVENT,
KEYCOLS(ACC_NUM);
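In context, the source (Extract) parameter file ends up looking something like this (the Extract name, login, and trail path are illustrative); the point is that NOCOMPRESSDELETES sits before the TABLE statement so delete records carry all columns, which the _DEL mapping below relies on:

EXTRACT ESRC01
USERID ggadmin, PASSWORD *****
EXTTRAIL ./dirdat/ev
NOCOMPRESSDELETES
TABLE ICBS_ADMIN.DUMMYEVENT,
KEYCOLS(ACC_NUM);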

---GoldenGate mapping on the target
Please keep in mind that you need to put ALLOWDUPTARGETMAP in the target (Replicat) parameter file.

---------------------------------------------------------------------------------
---------------------------------------------------------------------------------
--THIS IS FOR INSERT AND DELETE RECORD ONLY. 
ALLOWDUPTARGETMAP
GETINSERTS
GETDELETES
IGNOREUPDATES
MAP ICBS_ADMIN.DUMMYEVENT, TARGET ICBS_ADMIN.DUMMYEVENT,
KEYCOLS(ACC_NUM),
INSERTMISSINGUPDATES,
COLMAP (USEDEFAULTS,
SITE_ID= @STREXT (ACC_NUM, 1, 3),
R_TGT_CREATED_DT= @DATENOW (),
R_TGT_UPDATED_DT= @DATENOW (),
R_ACTION_CD= @GETENV ("GGHEADER","OPTYPE")
);
--THIS IS FOR UPDATE RECORD ONLY.
GETUPDATES
IGNOREINSERTS
IGNOREDELETES
MAP ICBS_ADMIN.DUMMYEVENT, TARGET ICBS_ADMIN.DUMMYEVENT,
KEYCOLS(ACC_NUM),
INSERTMISSINGUPDATES,
COLMAP (USEDEFAULTS,
R_TGT_UPDATED_DT= @DATENOW (),
R_ACTION_CD= @GETENV ("GGHEADER","OPTYPE")
);
---------------------------------------------------------------------------------
--To preserve deletes. 
---------------------------------------------------------------------------------
GETDELETES
IGNOREINSERTS
IGNOREUPDATES
INSERTDELETES
MAP ICBS_ADMIN.DUMMYEVENT, TARGET ICBS_ADMIN.DUMMYEVENT_DEL,
KEYCOLS(ACC_NUM),
INSERTMISSINGUPDATES,
COLMAP (USEDEFAULTS,
SITE_ID= @STREXT (ACC_NUM, 1, 3),
R_TGT_CREATED_DT= @DATENOW (),
R_TGT_UPDATED_DT= @DATENOW (),
R_ACTION_CD= @GETENV ("GGHEADER","OPTYPE")
);
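For completeness, all three MAP blocks live in a single Replicat parameter file whose header looks roughly like this (names and paths are illustrative); ALLOWDUPTARGETMAP is what allows the same source table to be mapped three times, and SOURCEDEFS points at the definitions file generated by defgen:

REPLICAT REVT01
USERID ggadmin, PASSWORD *****
SOURCEDEFS ./dirdef/dummyevent.def
DISCARDFILE ./dirrpt/revt01.dsc, APPEND
ALLOWDUPTARGETMAP
-- followed by the three MAP blocks shown above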