Oracle GoldenGate is instrumental in achieving Maximum Availability in the following scenarios:
- To migrate to an Oracle Database with minimal downtime.
- To deploy an application architecture that requires features such as an active-active database and zero or minimal downtime during planned outages for system upgrades.
- To implement a near real-time data warehouse or consolidated database on Oracle RAC, which is sourced from various, possibly heterogeneous source databases.
- To capture data from an OLTP application that is running on Oracle RAC to support further downstream consumption such as a SOA type integration.
It is a best practice to store the Oracle GoldenGate trail files, checkpoint files, bounded recovery files, and configuration files in Oracle DBFS. This arrangement provides the best performance, scalability, recoverability, and failover capabilities in the event of a system failure.
Using DBFS is fundamental to the continuing availability of the checkpoint and trail files in the event of a node failure. Making the checkpoint files available cluster-wide is essential: after a failure, the Extract process can continue mining from the last known archived redo log position, and the Replicat processes can resume applying from the trail file position they had reached before the failure.
The use of DBFS allows one of the surviving database instances to be the source of an Extract/Data Pump process or the destination for the Replicat processes.
It is recommended that you run the DBFS database in ARCHIVELOG mode, so that recoverability is not compromised in the event of media failures or corruptions.
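As an illustrative sketch, ARCHIVELOG mode can be enabled on the DBFS database with standard SQL*Plus commands (the ORACLE_SID shown is an assumption; substitute your DBFS instance name):

```shell
# Enable ARCHIVELOG mode on the DBFS database (illustrative SID).
# The database must be cleanly shut down and mounted first.
export ORACLE_SID=dbfsdb
sqlplus / as sysdba <<EOF
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST
EOF
```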
It is further recommended that you create a single file system for storing the Oracle GoldenGate trail files, checkpoint files, bounded recovery files, temp files, discard files, and parameter files.
Enough trail file disk space should be allocated to permit storage of up to 12 hours of trail files.
DBFS can be configured so that the DBFS instance and mount point resources are automatically started by Cluster Ready Services (CRS) after a node failure.
The crsctl command-line utility is used to register the DBFS resource with the Cluster Ready Services, so that Oracle Clusterware is aware of the resource, and is able to mount the DBFS file system from a surviving node.
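A sketch of such a registration is shown below, assuming a mount action script (such as the Oracle-provided mount-dbfs.sh) has already been installed; the resource name, script path, and attribute values are illustrative:

```shell
# Register a cluster resource that mounts the DBFS file system.
# Resource name, ACTION_SCRIPT path, and attribute values are illustrative.
crsctl add resource dbfs_mount \
  -type cluster_resource \
  -attr "ACTION_SCRIPT=/u01/app/grid/crs/script/mount-dbfs.sh,CHECK_INTERVAL=30,RESTART_ATTEMPTS=10"

# Start the resource and verify its state across the cluster nodes
crsctl start resource dbfs_mount
crsctl status resource dbfs_mount -t
```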
The following Oracle GoldenGate directories should be placed in the shared DBFS drive:
- dirchk (Checkpoint files)
- dirpcs (Process status files)
- dirprm (Parameter files)
- dirdat (Extract data files)
The recommended way to store Oracle GoldenGate files in the DBFS file system is to create symbolic links.
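The linking step can be sketched as follows. The demonstration below uses throwaway temporary paths so it can be run anywhere; in a real deployment GG_HOME would be the GoldenGate installation directory and DBFS_MOUNT the mounted DBFS file system, and any existing directory contents would first be copied onto the DBFS mount.

```shell
# Demonstration of the symbolic-link layout using throwaway paths.
# In a real deployment, set GG_HOME to the GoldenGate install directory
# and DBFS_MOUNT to the mounted DBFS file system.
GG_HOME=$(mktemp -d)
DBFS_MOUNT=$(mktemp -d)

# Create each directory on the shared mount and link it into GG_HOME
for dir in dirchk dirpcs dirprm dirdat; do
  mkdir -p "${DBFS_MOUNT}/${dir}"
  ln -s "${DBFS_MOUNT}/${dir}" "${GG_HOME}/${dir}"
done

ls -l "${GG_HOME}"   # each directory appears as a symlink into DBFS
```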
Oracle GoldenGate Bounded Recovery
The Bounded Recovery (BR) feature guarantees efficient recovery after Extract stops for any reason, whether planned or unplanned, no matter how many open (uncommitted) transactions there are at the time that Extract stops and no matter how old they are.
The Bounded Recovery checkpoint files should be placed on a shared file system such that in the event of a failover when there are open long-running transactions, Extract can use Bounded Recovery to reduce the time taken to perform recovery.
Bounded Recovery files should be placed on DBFS.
The Extract parameter BR must be used to specify the DBFS location of the Bounded Recovery files.
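For example, in the Extract parameter file (the Extract name, credential alias, mount point, and table specification below are illustrative):

```
EXTRACT ext1
USERIDALIAS ggadmin
-- Write Bounded Recovery checkpoint files to the shared DBFS mount
BR BRDIR /mnt/dbfs/goldengate/BR
EXTTRAIL /mnt/dbfs/goldengate/dirdat/aa
TABLE app.*;
```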
Oracle GoldenGate Target Environment
On the target environment where the Replicat processes read the trail files and apply the data to the target database, there is a requirement for two separate DBFS file systems to separate the different I/O requirements of the trail and checkpoints files.
Trail files are written by the Collector process on the target host using sequential serial I/O from the start to the end of the file, and are sized according to the Data Pump configuration.
The same trail files are read by each Replicat process, also using sequential serial I/O requests.
After a portion of the trail is read by a Replicat process, it will not normally be read a second time by the same process.
The storage option for trail files is NOCACHE LOGGING.
Oracle GoldenGate Checkpoint Files
The checkpoint files are small (approximately 4 KB) but written to frequently, overwriting previous data.
The file does not grow in size and is read only during process startup to determine the correct starting point for recovery or initiation. Because the checkpoint file is overwritten repeatedly, performance is best when it is stored in DBFS with the CACHE LOGGING storage option.
Setting the CACHE option causes the small amount of data written to the checkpoint files to be buffered in the DBFS instance's buffer cache, rather than issued as direct writes to disk, which incur higher I/O waits.
Checkpoint performance increases by a factor of 2 to 5 when the CACHE LOGGING configuration is used.
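As a sketch, assuming the checkpoint file system's DBFS store is backed by a table named T_OGGCHKPT owned by DBFS_USER (the actual owner and store table names depend on how the file system was created), the LOB storage option can be changed as follows:

```shell
# Switch the checkpoint DBFS store's LOB segment to CACHE LOGGING.
# Owner and store table names below are illustrative assumptions.
sqlplus / as sysdba <<EOF
ALTER TABLE dbfs_user.T_OGGCHKPT MODIFY LOB (FILEDATA) (CACHE LOGGING);
EOF
```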