Thursday 31 January 2013

GoldenGate Zero Downtime Database Migration

Detailed steps for a cross-platform migration using Oracle GoldenGate with zero downtime.


This is a case study of moving an Oracle Database 9i database on IBM AIX to Oracle Database 11g Release 2 on Linux, with detailed implementation steps. These steps can be simplified and adapted to accomplish a rolling upgrade.
Two scenarios are described next: a migration without the addition of a failback solution and the same migration with the addition of a failback.

1. Migration Without Failback:

Overview of a cross-platform database migration without failback




1. Address change management by restricting the creation of new packages and tablespaces during the migration process (not shown). Using DDL replication, these activities can be allowed to some extent.
2. Start the Oracle GoldenGate Capture process at the production database Dprod.
3. Do a point-in-time recovery of an existing backup of Dprod until some SCN Qscn in a separate staging area. Call this database Dpitr. The Capture process in Step 2 must capture all transactions that have a commit time stamp higher than this Qscn. It must have been positioned at or before the beginning of the longest-running transaction when the source database was at Qscn. You can query gv$transaction and gv$database at the source database to identify the longest-running transaction and the current SCN at any point in time to identify where Capture should be positioned.
4. Upgrade Dpitr to Oracle Database 11g Release 2 on IBM AIX. Advance compatibility to Oracle Database version 10.0.0.0 or later.
5. Set up a vanilla Oracle Database 11g Release 2 database on Linux. Call this database Dtarget.
6. Take a full export without rows at Dpitr. Call the generated export exp_norows.dmp. This will pick up any objects that are not picked up as part of the transportable tablespace export.
7. Unplug the user tablespaces from Dpitr with the Oracle Database cross-platform transportable tablespace feature, using source-side endian conversion. (Note that the conversion would not be required if the endian systems were the same.) This is the step that avoids any performance degradation and does not require any quiescing at Dprod. This step will create a small export dump file. Call this exp_xtts.dmp.
8. Plug the set of tablespaces from Dpitr into Dtarget using the cross-platform transportable tablespace feature. Use the exp_xtts.dmp file created from Step 7. (Note that the plugged in tablespaces are in read-only mode.)
9. Make the set of user tablespaces in Dtarget read/write (not shown).
10. Do a NOROWS import with IGNORE=Y option at Dtarget using the exp_norows.dmp dump file.
11. Start the Oracle GoldenGate Delivery process at Dtarget and synchronize up to the changes generated since Qscn. 
12. If any datatypes are not supported by transportable tablespace or Oracle GoldenGate, then do a special export/import of these objects from Dprod to Dtarget. 
13. Use Oracle GoldenGate Veridata to verify that the data at Dprod and Dtarget is synchronized. 
14. Switch the application from Dprod to Dtarget (not shown). 

The above procedure offloads the quiescing and conversion work to a clone database, and takes advantage of Oracle GoldenGate’s incremental real-time data capture and delivery to reduce the downtime to zero, excluding the application switchover time. 
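The positioning queries mentioned in Step 3 can be sketched as follows. This is a hedged illustration, assuming a 10g or later source where DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER is available (it also exists on 9i); the TO_DATE format mask for V$TRANSACTION.START_TIME is an assumption and should be verified against your release.

```sql
-- Sketch: identify where the Capture process must be positioned.

-- Current SCN of the source database (this becomes a candidate Qscn):
SELECT dbms_flashback.get_system_change_number AS current_scn FROM dual;

-- Open transactions, oldest first; Capture must be positioned at or
-- before the start of the oldest one so that every transaction
-- committing after Qscn is captured in full.
SELECT inst_id, start_time, status
  FROM gv$transaction
 ORDER BY TO_DATE(start_time, 'MM/DD/RR HH24:MI:SS');
```

If the oldest-transaction query returns no rows, there are no open transactions and Capture can simply be started at the current SCN.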

As an alternative to the cross-platform transportable tablespace feature, you can take a full export with data in Step 6 and import it into the vanilla database, in which case Steps 7, 8, and 9 do not apply. Additionally, in Step 10, you then import the data as well as all database objects. 
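The no-rows export and import from Steps 6 and 10 can be sketched with the classic exp/imp utilities. The credentials and file names below are illustrative, not from the original:

```shell
# Sketch -- Step 6: full metadata-only export at Dpitr
exp system/manager FULL=Y ROWS=N FILE=exp_norows.dmp LOG=exp_norows.log

# Sketch -- Step 10: import at Dtarget; IGNORE=Y suppresses
# "object already exists" errors for objects that already arrived
# via the transportable tablespace plug-in
imp system/manager FULL=Y IGNORE=Y FILE=exp_norows.dmp LOG=imp_norows.log
```

For the full-export-with-data alternative, the same commands apply with ROWS=Y on the export.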

For upgrading from Oracle Database 10g to Oracle Database 11g, you can use Data Pump Export in parallel to improve performance.
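With Data Pump, the PARALLEL parameter spreads the export across multiple worker processes. A sketch, assuming a directory object `dp_dir` has been created (the credentials and names are illustrative):

```shell
# Sketch: parallel Data Pump full export; the %U wildcard in DUMPFILE
# generates one dump file per worker process
expdp system/manager FULL=Y PARALLEL=4 \
      DUMPFILE=dp_dir:exp_full_%U.dmp LOGFILE=dp_dir:exp_full.log
```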

2. Migration with Failback

A cross-platform database migration with failback



1. Address change management by restricting the creation of new packages and tablespaces during the migration process.
2. Start the Oracle GoldenGate Capture process at the production database Dprod.
3. Do a point-in-time recovery of an existing backup of Dprod until some SCN Qscn in a separate staging area. Call this database Dpitr. The Capture process in Step 2 must capture all transactions that have a commit time stamp higher than this Qscn. It must have been positioned at or before the beginning of the longest-running transaction when the source database was at Qscn. You can query gv$transaction and gv$database at the source database to identify the longest-running transaction and the current SCN at any point in time to identify where the Capture process should be positioned.
4. Upgrade Dpitr to Oracle Database 11g Release 2 on IBM AIX. Advance compatibility to Oracle Database version 10.0.0.0 or later.
5. Set up a vanilla Oracle Database 11g Release 2 database on Linux. Call this database Dtarget.
6. Take a full export without rows at Dpitr. Call the generated export exp_norows.dmp. This will pick up any objects that are not picked up as part of the transportable tablespace export.
7. Unplug the user tablespaces from Dpitr with the Oracle Database cross-platform transportable tablespace feature, using source-side endian conversion. (Note that the conversion would not be required if the endian systems were the same.) This is the step that avoids any performance degradation and does not require any quiescing at Dprod. This step will create a small export dump file. Call this exp_xtts.dmp.
8. Plug the set of tablespaces from Dpitr into Dtarget using the cross-platform transportable tablespace feature. Use the exp_xtts.dmp file created from Step 7. (Note that the plugged in tablespaces are in read-only mode.)
9. Make the set of user tablespaces in Dtarget read/write.
10. Do a NOROWS import with IGNORE=Y option at Dtarget using the exp_norows.dmp dump file.
11. Start the Oracle GoldenGate Delivery process at Dtarget and synchronize up to the changes generated since Qscn.
12. If any datatypes are not supported by transportable tablespace or Oracle GoldenGate, then do a special export/import of these objects from Dprod to Dtarget.
13. After Oracle GoldenGate eliminates the lag between Dprod and Dtarget, use Oracle GoldenGate Veridata to verify that the data at Dprod and Dtarget is synchronized.
14. Start the Oracle GoldenGate Capture process on Dtarget.
15. Switch the application from Dprod to Dtarget (not shown).
16. Start the Oracle GoldenGate Delivery process on Dprod.

Failback Steps

During real-time bidirectional data movement, Oracle GoldenGate allows both databases to support transaction processing. This makes failback a trivial process if the new database turns out not to be stable:

1. Stop the application at Dtarget (the new primary) running Oracle Database 11g Release 2.
2. Once Oracle GoldenGate has applied all transactions from Dtarget to Dprod, switch the application to Dprod.
3. Declare Dprod as the new primary.
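Before switching in Step 2, you need to confirm that the Delivery (Replicat) process on Dprod has no outstanding lag. A GGSCI sketch, where the process name `rdprod` is illustrative:

```
GGSCI> INFO REPLICAT rdprod, DETAIL
GGSCI> LAG REPLICAT rdprod
```

LAG REPLICAT should report that the process is at end of file with no more records to process before the application is switched back to Dprod.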

Oracle GoldenGate Veridata

Oracle GoldenGate Veridata is a standalone, high-speed data comparison solution that identifies and reports data discrepancies between two databases, without interrupting ongoing business processes. It allows data discrepancies to be isolated for testing and troubleshooting. Oracle GoldenGate Veridata is ideal for conducting data validation after the rolling upgrade, once the source and target are fully operational and running different versions of Oracle Database. It can also help determine whether a failback is needed, in case any risky data anomalies are found.















Transportable Tablespaces (TTS) -- Common Questions and Issues

Master Note: (Doc ID 1166564.1)

Applies to:
Oracle Server - Enterprise Edition - Version 9.2.0.1 to 11.2.0.2 [Release 9.2 to 11.2]
Oracle Server - Standard Edition - Version 9.0.2.0 to 11.2.0.2 [Release 9.0.1 to 11.2]
Information in this document applies to any platform.

Why use Transportable Tablespaces (TTS)?
  • "Oracle transportable tablespaces are the fastest way for moving large volumes of data between two Oracle databases."
  • "Using transportable tablespaces, Oracle data files (containing table data, indexes, and almost every other Oracle database object) can be directly transported from one database to another. Furthermore, like import and export, transportable tablespaces provide a mechanism for transporting metadata in addition to transporting data."
  • "Transportable tablespaces have some limitations: source and target systems must be running Oracle8i (or higher), must use compatible character sets, and, prior to Oracle Database 10g, must run on the same operating system."
How to Validate the Self-Contained Property

TTS requires that all tablespaces being moved be self-contained. This means that the segments within the transported tablespace set cannot depend on a segment in a tablespace outside the set. This can be checked using the DBMS_TTS.TRANSPORT_SET_CHECK procedure.

SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs', TRUE);
SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs1, tbs2, tbs3', FALSE, TRUE);

SQL> SELECT * FROM transport_set_violations;
No rows should be returned if the set is self-contained.

If the set is not self-contained, either remove the dependencies by dropping or moving the offending segments, or add the tablespaces on which the migration set depends to the transportable set.
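Once the set is self-contained, the transport itself can be sketched as follows. The tablespace names, password, and paths are illustrative, and the platform name should be taken from V$TRANSPORTABLE_PLATFORM; the tablespaces must remain read-only for the duration of the export and copy:

```sql
-- Sketch: make the transportable set read-only on the source
ALTER TABLESPACE tbs1 READ ONLY;
ALTER TABLESPACE tbs2 READ ONLY;

-- Metadata-only export of the transportable set (run from the OS shell):
-- exp userid="'sys/password AS SYSDBA'" TRANSPORT_TABLESPACE=Y
--     TABLESPACES=tbs1,tbs2 FILE=exp_xtts.dmp

-- If the endianness differs, convert the datafiles with RMAN:
-- RMAN> CONVERT TABLESPACE tbs1, tbs2
--       TO PLATFORM 'Linux x86 64-bit' FORMAT '/stage/%U';
```

The converted datafiles and the small export dump file are then copied to the target and plugged in with `imp ... TRANSPORT_TABLESPACE=Y DATAFILES=...`.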

Commonly Asked Questions

Question: Can I move/migrate to both a different RDBMS version and OS platform at the same time?
Short Answer: Yes; you must be on 10g or higher to move across OS platforms, and the character sets must be the same regardless of version.
Complete Answer: See "Limitations on Transportable Use" in Document 371556.1 How to move tablespaces across platforms using Transportable Tablespaces with RMAN.

Question: Do I have to convert the datafiles?
Short Answer: Yes, if the endianness is different. If the endianness is the same and no undo is in any of the tablespaces being transported, then the convert step is not needed.
Complete Answer: Document 243304.1 10g: Transportable Tablespaces Across Different Platforms confirms the answer. See the question "Is there an easy-to-follow example?" below for another TTS-usage example.

Question: Can I use TTS with ASM?
Short Answer: Yes; with RMAN, ASM files can be moved.
Complete Answer: See "Transportable tablespace EXP/IMP of ASM files" in Document 371556.1 How to move tablespaces across platforms using Transportable Tablespaces with RMAN.

Question: Can I move raw files?
Short Answer: Yes, with RMAN.
Complete Answer: See "Transportable tablespace EXP/IMP of ASM files" in Document 371556.1 How to move tablespaces across platforms using Transportable Tablespaces with RMAN.

Question: Can I transport just a single partition?
Short Answer: Yes.
Complete Answer: See Document 731559.1 How to move or transport table partition using Transportable Table Space (TTS)?

Question: Is there an easy-to-follow example?
Short Answer: Yes, see note.
Complete Answer: See either "Transportable tablespace EXP/IMP of ASM files" or "Transportable tablespace EXP/IMP with OS files" in Document 371556.1 How to move tablespaces across platforms using Transportable Tablespaces with RMAN.

Question: Is there a size limitation?
Short Answer: No, except for a couple of size-related bugs, one of which is a potential-corruption bug in releases before 11g. See the note for the description and patch information.
Complete Answer: Document 566875.1 Size Limitations On Cross Platform Transportable Tablespaces.

Question: What restrictions/limitations exist for TTS?
Short Answer:
  1. Movement between different character sets
  2. Movement between different OS platforms (depending on RDBMS version)
  3. Some objects are not transferred via TTS
  4. Standard Edition vs. Enterprise Edition: Standard Edition can only import TTS (no export)
  5. Also review the size-related bugs in the question above.
Complete Answer: See "Limitations on Transportable Use" in Document 371556.1 How to move tablespaces across platforms using Transportable Tablespaces with RMAN. See "Feature Availability for Oracle Database Editions" in the Oracle licensing documentation. See Document 114915.1 Using Dbms_tts.transport_set_check Results in Entry in Transport_set_violations. See Document 883153.1 What Objects Are Exported With Transportable Tablespaces (TTS)?

Question: What are best practices for TTS, especially when migrating a database?
Short Answer:
  1. Check the restrictions/limitations in the question above.
  2. If you are migrating a database, make sure there are no invalid objects in the source database before making the export.
  3. Take a full norows export to recreate objects that won't be transported with TTS.
  4. Keep the source database viable until you have determined that all objects are in the target database and there are no issues (i.e., the target database has been thoroughly checked out and exercised).
  5. Do a dry run to work out any unexpected issues and determine timings.

Common Issues with Transportable Tablespaces

Error/Problem: ORA-19721
Suggested Solution: Document 757795.1 Import of an Exported Transportable Tablespace gets ORA-19721 error.

Error/Problem: ORA-19721, IMP-3, IMP-0 (< 11g)
Suggested Solution: Document 438683.1 Transportable Tablespace Import Fails With Errors IMP-00003, ORA-19721, IMP-00000.

Error/Problem: ORA-29341 The transportable set is not self-contained
Suggested Solution: Document 114915.1 Using Dbms_tts.transport_set_check Results in Entry in Transport_set_violations; Document 867246.1 New Partitions or Subpartitions Are Being Created in the Wrong Tablespace.

Error/Problem: TTS import completes successfully, but objects are missing
Suggested Solution: Document 883153.1 What Objects Are Exported With Transportable Tablespaces (TTS)?

Also note that objects owned by CTXSYS, ORDSYS, MDSYS, ORDPLUGINS, LBACSYS, XDB, SI_INFORMTN_SCHEMA, DIP, DMSYS, and DBSNMP are treated as objects owned by SYS and are not exported with TTS.



Wednesday 30 January 2013

Invisible Indexes in Oracle 11g



Always wanted this – the ability to create an index on production without impacting the queries fired by the application, while at the same time testing the impact the index can have. Invisible indexes are a useful alternative to making an index unusable or dropping it. 

The optimizer ignores indexes that are marked INVISIBLE unless you set the initialization parameter OPTIMIZER_USE_INVISIBLE_INDEXES to TRUE. This parameter can be set at both the session level and the system level.

Usage of Invisible Indexes

  • Test the impact of removing an index: instead of dropping the index, make it invisible and measure the effect.
  • Speed up infrequent operations: create an invisible index for the occasional scenario while ensuring the overall performance of the application is not affected.
  • Gain the flexibility to have both a b-tree index (to guarantee a unique PK) and a bitmap index (on FK columns) in a data warehouse application.

How to create?

Multiple options: either specify the INVISIBLE clause at index creation time, or use the ALTER INDEX command to make an existing index invisible.

CREATE INDEX emp_ename ON emp(ename)
      TABLESPACE users
      STORAGE (INITIAL 20K
      NEXT 20k
      PCTINCREASE 75)
      INVISIBLE;
ALTER INDEX index INVISIBLE;

To make the Index “VISIBLE”

ALTER INDEX index VISIBLE;

A new column, VISIBILITY, is available in the *_INDEXES data dictionary views to show whether an index is visible or invisible.

Example
SQL> create index indx_job on emp1(job);

Index created.


SQL> explain plan for select * from emp1 where job='CLERK';


Explained.


SQL> select * from table(dbms_xplan.display);


PLAN_TABLE_OUTPUT

------------------------------------------------------------------------------------------------------------
Plan hash value: 3449298850
------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |     4 |   348 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP1     |     4 |   348 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX_JOB |     4 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

SQL> alter index indx_job invisible;


Index altered.



SQL> explain plan for select * from emp1 where job='CLERK';


Explained.


SQL> select * from table(dbms_xplan.display);


PLAN_TABLE_OUTPUT

------------------------------------------------------------------------------------------------------------
Plan hash value: 2226897347
------------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     4 |   348 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP1 |     4 |   348 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------


SQL> ALTER SESSION SET optimizer_use_invisible_indexes=TRUE;


Session altered.


SQL> select index_name,visibility from user_indexes where table_name='EMP1';


INDEX_NAME                     VISIBILIT

------------------------------ ---------
INDX_JOB                       INVISIBLE

SQL>  explain plan for select * from emp1 where job='CLERK';


Explained.


SQL> select * from table(dbms_xplan.display);


PLAN_TABLE_OUTPUT

------------------------------------------------------------------------------------------------------------
Plan hash value: 3449298850
----------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |     4 |   348 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP1     |     4 |   348 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX_JOB |     4 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------