Oracle GoldenGate Trail File Encryption

Why Do We Need to Encrypt Data?

When replicating sensitive data with Oracle GoldenGate, it is important to encrypt the trail files. At any given time, organizations are gathering, storing, and transferring millions of records about their customers across multiple enterprise environments. Various operational settings within Oracle GoldenGate (OGG) should be configured to ensure that Personally Identifiable Information (PII) or Sensitive Personal Information (SPI) is not compromised. This data covers a broad range of information that can identify individuals, such as birth dates, addresses, driver's license numbers, credit card numbers, bank account numbers, and much more.

To encrypt trail or extract files, Oracle GoldenGate by default uses 256-key byte substitution. All records written to those files are encrypted both across any data links and within the files themselves.


You can encrypt the data in any local or remote trail or file.

To encrypt trail or extract files:

1. In the Extract parameter file, list the following parameter before all trails or files that you want to be encrypted. You can list multiple trails or files after one instance of this parameter. 
     ENCRYPTTRAIL

2. To disable encryption for any files or trails listed in the Extract parameter file, precede their entries with the following parameter:
     NOENCRYPTTRAIL

3. In the Replicat parameter file, include the following parameter so that Replicat decrypts the data for processing.
     DECRYPTTRAIL

You also can use DECRYPTTRAIL for an Extract data pump to decrypt the data for column mapping, filtering, transformation, and so forth. You can then leave it decrypted for downstream trails or files, or you can use ENCRYPTTRAIL to encrypt the data again before it is written to those files.
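As an illustration, a data-pump parameter file that decrypts the incoming trail for mapping and filtering and then re-encrypts before writing downstream might look like the following sketch (the process, host, schema, and trail names here are hypothetical):

```
EXTRACT pmp1
-- decrypt incoming trail records so the pump can map and filter them
DECRYPTTRAIL
RMTHOST target1, MGRPORT 7809
-- re-encrypt before records are written to the output trail
ENCRYPTTRAIL AES192
RMTTRAIL ./dirdat/bb
TABLE hr.*;
```

Note that ENCRYPTTRAIL must appear before the RMTTRAIL entry it applies to.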

Starting with Oracle GoldenGate 12.1.2, Oracle Wallet is integrated into Oracle GoldenGate to manage encryption keys. The master key and wallet method has since become the recommended approach to encrypting trail files.

The master key and wallet encryption process includes the following steps:

1. The user creates a master-key wallet and adds a master key to the wallet.
2. Oracle GoldenGate automatically generates a new encryption key for every new trail file and uses it to encrypt that file. The encryption key is stored in the trail header, encrypted with the master key.
3. Oracle GoldenGate on the target decrypts the encryption key with the shared master key, and then uses the encryption key to decrypt the trail file.
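The key-wrap idea behind these steps can be sketched in a few lines of Python. This is only a toy illustration of wrapping a per-trail-file key under a master key, not the ANS X9.102 AES key wrap that Oracle GoldenGate actually uses; all names here are made up:

```python
import hashlib
import os

def keystream(master_key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream derived from the master key (illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(master_key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(master_key: bytes, trail_key: bytes, nonce: bytes) -> bytes:
    # "Wrapping" = encrypting the per-trail-file key under the master key.
    ks = keystream(master_key, nonce, len(trail_key))
    return bytes(a ^ b for a, b in zip(trail_key, ks))

unwrap = wrap  # XOR with the same keystream undoes itself

master_key = os.urandom(32)   # held in the wallet, shared by source and target
trail_key = os.urandom(32)    # generated fresh for each new trail file
nonce = os.urandom(16)        # stored alongside the wrapped key

wrapped = wrap(master_key, trail_key, nonce)     # what goes into the trail header
recovered = unwrap(master_key, wrapped, nonce)   # what the target recovers
```

The target never needs the trail key shipped in the clear: only the wrapped form travels with the trail, and the shared master key unwraps it.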




Encrypting Data with the Master Key and Wallet Method

1. Create the Master Key
To use this method of data encryption, you create a master-key wallet and add a master key to the wallet. This method works as follows, depending on whether the data is encrypted in the trails or across TCP/IP:

- Each time Oracle GoldenGate creates a trail file, it generates a new encryption key automatically. This encryption key encrypts the trail contents. The master key encrypts the encryption key. This process of encrypting encryption keys is known as key wrap and is described in standard ANS X9.102 from the American Standards Committee.
- To encrypt data across the network, Oracle GoldenGate generates a session key using a cryptographic function based on the master key.

The wallet is created in a platform-independent format. The wallet can be stored on a shared file system that is accessible by all systems in the Oracle GoldenGate environment. Alternatively, you can use an identical wallet on each system in the Oracle GoldenGate environment. If you use a wallet on each system, you must create the wallet on one system, typically the source system, and then copy it to all of the other systems in the Oracle GoldenGate environment. This must also be done every time you add, change, or delete a master key.
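The "copy the wallet to every system" step can be sketched as follows. The helper function and paths are hypothetical, and the demo uses throwaway directories to stand in for the other systems (in practice you would copy dirwlt/cwallet.sso with scp or place it on a shared file system):

```python
import os
import shutil
import tempfile

def distribute_wallet(wallet_file: str, remote_wallet_dirs: list) -> None:
    """Copy the wallet file into each peer system's wallet directory.

    This must be re-run every time a master key is added, changed, or deleted.
    """
    for d in remote_wallet_dirs:
        os.makedirs(d, exist_ok=True)
        shutil.copy2(wallet_file, os.path.join(d, os.path.basename(wallet_file)))

# Demo with temporary directories standing in for the other systems.
src_dir = tempfile.mkdtemp()
wallet = os.path.join(src_dir, "cwallet.sso")
with open(wallet, "wb") as f:
    f.write(b"wallet-bytes")   # placeholder contents, not a real wallet
targets = [tempfile.mkdtemp() for _ in range(2)]
distribute_wallet(wallet, targets)
```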

Let's create the wallet and the master key, and then update the GLOBALS parameter file.


GGSCI (localhost.localdomain) 3> create wallet
Created wallet at location 'dirwlt'.
Opened wallet at location 'dirwlt'.

GGSCI (localhost.localdomain) 4> open wallet
Opened wallet at location 'dirwlt'.

GGSCI (localhost.localdomain) 5>  add masterkey
Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.

GGSCI (localhost.localdomain) 6> sh ls dirwlt
cwallet.sso

GGSCI (localhost.localdomain) 7> add masterkey ggcs
Master key 'ggcs' added to wallet at location 'dirwlt'.

GGSCI (localhost.localdomain) 11> info masterkey version 1
Masterkey Name:    OGG_DEFAULT_MASTERKEY
Creation Date:     Wed Nov  8 14:44:39 2017
Version:           1
Renew Date:        Wed Nov  8 14:44:39 2017
Status:            Current
Key Hash (SHA1):   0x5F16E754E85E0551DE3E5834F4C59748497D6920

The example creates a master key named ggcs. We tell Oracle GoldenGate to use this master key by configuring the MASTERKEYNAME parameter in the GLOBALS file. By default, Oracle GoldenGate picks up the latest version of the master key.

GGSCI (localhost.localdomain) 11> view params ./GLOBALS
MASTERKEYNAME ggcs

2. Using the Encryption Key in an Extract
You can use the ENCRYPTTRAIL parameter in an Extract (including the pump) parameter file to encrypt the trail files. If you don't specify MASTERKEYNAME in GLOBALS, Oracle GoldenGate uses the default master key named OGG_DEFAULT_MASTERKEY. If you don't specify an AES cipher name, AES128 is used.

            ENCRYPTTRAIL AES192
            ENCRYPTTRAIL

When Extract writes directly to the remote host, encryption is specified with the RMTHOSTOPTIONS parameter, as follows:

          RMTHOSTOPTIONS host, MGRPORT port, ENCRYPT {AES128 | AES192 | AES256 | BLOWFISH}

Let's add a new Extract that uses encryption:
GGSCI (localhost.localdomain) 11> add trandata test.*,ALLCOLS
GGSCI (localhost.localdomain) 11> add extract test_gm, INTEGRATED tranlog, begin now
GGSCI (localhost.localdomain) 11> add exttrail /ogg_trail/gm, extract test_gm, megabytes 50
GGSCI (localhost.localdomain) 11> REGISTER EXTRACT test_gm DATABASE CONTAINER (psrs)

Viewing the parameter file configuration:
GGSCI (localhost.localdomain) 11> view params test_gm
EXTRACT TEST_GM 
SETENV (ORACLE_HOME='/u01/app/oracle/product/12.1.0/db_1')
SETENV (ORACLE_SID='CCDB') 
USERID c##ogg@psrs, PASSWORD oracle
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 512)
ENCRYPTTRAIL
EXTTRAIL /ogg_home/ggs/trail_files/GM
TABLE TEST.DEPT_INFO;

Verify whether the trail file is encrypted using logdump:

[oracle@localhost ~]$ /ogg_home/ggs/logdump

Logdump 1>open /ogg_trail/gm0000001
Logdump 2>ghdr on
Logdump 3> detail on
Logdump 4> data on
Logdump 5> n




3. Decrypting the Trail File at the Target Side

Use the DECRYPTTRAIL parameter for a data pump if you want trail data to be decrypted before it is written to the output trail. Otherwise, the data pump automatically decrypts it, if processing is required, and then reencrypts it before writing to the output trail. (Replicat decrypts the data automatically without any parameter input.)

Recover an Accidentally Deleted Datafile While the Database Is Up and Running

This post describes how to recover a datafile that was accidentally deleted at the OS level with the "rm -rf" command while the database was up and running.

1. I have a 12c (12.1.0.2 release) database. First, let's log in to the database, switch to the required PDB, and create a tablespace, user, and table to hold some data.
     1a. Log in to the PDB called PSRS.
     1b. Create a tablespace called test_tbs under /home/oracle/ with the file name test_tbs01.dbf.
     1c. Create a user called test_user.
     1d. Grant the privileges required to connect to the database.
     1e. Create a table and insert some data for testing purposes.




So now we have a schema test_user with a simple table that was created in the test_tbs tablespace (the default tablespace of test_user).

2. Now we are going to remove the test_tbs01.dbf file at the OS level from the /home/oracle directory.
     2a. Check whether the file exists at the OS level with ls -ltr test*.
     2b. Remove the file using rm -rf test_tbs01.dbf.
     2c. Check again with ls -ltr test* (you will get "no such file or directory").



Since the table is already cached in the buffer cache, you can still retrieve data from the cache, but as soon as a read from disk is required you will get an error.
             
     2d. As you can see below, we still get results after deleting the related datafile.



     2e. Flush the buffer cache: alter system flush buffer_cache;




     2f. Now connect as the user and try to retrieve the data.




     2g. Check dba_data_files as well.



3. Datafile recovery process. First, find the PID of the database writer so that we can locate the file descriptor for the file we just removed.
     3a. To find the PID, run ps -ef | grep dbw; here we got PID 8432.


     3b. Now run lsof on this PID to confirm it is the right one: /usr/sbin/lsof -p 8432. On the last line you can see /home/oracle/test_tbs01.dbf marked as deleted, and that it has file descriptor 281.

     3c. You can also see this information in the /proc/8432/fd/ directory (8432 being the PID we found); again, the file is marked as deleted with FD 281.


     3d. Go to the /proc/8432/fd/ directory and run cat on this FD number, redirecting the output to the original datafile name, in our case test_tbs01.dbf.


     3e. Now we can see that we have the original datafile back in the same location.
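The cat-from-/proc trick in steps 3a through 3e can be reproduced in a self-contained sketch (Linux only; here our own process stands in for the database writer holding the deleted file open, and the file name mirrors the one used above):

```python
import os
import tempfile

# Create a "datafile", keep it open, then delete it at the OS level.
path = os.path.join(tempfile.mkdtemp(), "test_tbs01.dbf")
holder = open(path, "wb")      # stands in for the DBWn process's open fd
holder.write(b"important rows")
holder.flush()
os.remove(path)                # rm -rf: the name is gone, the inode is not

# Equivalent of: cat /proc/<pid>/fd/<fd> > /home/oracle/test_tbs01.dbf
# Opening the /proc fd symlink re-opens the still-live inode from offset 0.
fd_link = "/proc/self/fd/%d" % holder.fileno()
with open(fd_link, "rb") as still_open, open(path, "wb") as restored:
    restored.write(still_open.read())
holder.close()
```

This works only while the process holding the file open is still alive; once it exits, the inode is released and the data is gone for good.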


4. Now verify the database and check whether we are able to retrieve the data.

                 4a. Checking dba_data_files.
             

               4b. Connecting as test_user and querying the data.
             
           

#########That’s the end###########


OGG-00664 OCI Error beginning session (status = 1034-ORA-01034: ORACLE not available

2017-06-03 10:12:31  ERROR  OGG-00664  OCI Error beginning session (status = 1034-ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3651
Additional information: -609063181).

2017-06-03 10:12:31  ERROR   OGG-01668  PROCESS ABENDING.

Step 1 : Unregister the Extract from the database.

GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 8>  UNREGISTER EXTRACT EXTPP01 DATABASE

GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 9> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     EXTPP01      00:00:00      00:12:38
REPLICAT    RUNNING     REPP001     00:00:00      00:00:05


GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 10> view report EXTPP01


***********************************************************************
                 Oracle GoldenGate Capture for Oracle
 Version 12.1.2.1.10 21604177 OGGCORE_12.1.2.1.0OGGBP_PLATFORMS_150902.1337_FBO
   Linux, x64, 64bit (optimized), Oracle 12c on Sep  2 2015 22:06:14

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2017-06-03 10:03:18
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Fri May 6 14:54:05 EDT 2016, Release 2.6.32-642.1.1.el6.x86_64
Node: localhost
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 5302

Description:

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************

2017-06-03 10:03:18  INFO    OGG-03059  Operating system character set identified as UTF-8.

2017-06-03 10:03:18  INFO    OGG-02695  ANSI SQL parameter syntax is used for parameter parsing.
EXTRACT extpp1
SETENV (ORACLE_HOME='/u01/app/oracle/product/12.1.0/db_1')

2017-06-03 10:03:18  INFO    OGG-02095  Successfully set environment variable 

ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1.

SETENV (ORACLE_SID=LOCALCDB1)

2017-06-03 10:03:18  INFO    OGG-02095  Successfully set environment variable ORACLE_SID=LOCALCDB1.
USERID C##OGG, PASSWORD ***

-------------
-------------
-------------
-------------

2017-06-03 10:03:18  ERROR   OGG-00664  OCI Error beginning session (status = 1034-ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3651
Additional information: -609063181).

2017-06-03 10:03:18  ERROR   OGG-01668  PROCESS ABENDING.



Step 2 : Restart the Manager process.

GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 10> stop mgr
Manager process is required by other GGS processes.
Are you sure you want to stop it (y/n)?y

Sending STOP request to MANAGER ...
Request processed.
Manager stopped.


GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 11> start mgr
Manager started.


GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 12> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     EXTPP01      00:00:00      00:16:57
REPLICAT    RUNNING     REPP001     00:00:00      00:00:04

Step 3 : Delete and re-add the Extract process.

GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 24> delete extract EXTPP01
Deleted EXTRACT EXTPP01.

GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 26> add extract EXTPP01, INTEGRATED tranlog, SCN 133822379
EXTRACT added.


GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 27> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     EXTPP01      00:00:00      00:00:02
REPLICAT    RUNNING     REPP001     00:00:00      00:00:03

GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 28> add exttrail /ggtrail/et, extract EXTPP01, megabytes 50
EXTTRAIL added.

Step 4 : Register the Extract in the database.

GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 29> REGISTER EXTRACT EXTPP01 DATABASE CONTAINER (pdblocal)
Extract EXTPP01 successfully registered with database at SCN 133959101.


GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 30> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     EXTPP01      00:00:00      00:01:18
REPLICAT    RUNNING     REPP001     00:00:00      00:00:00

Step 5 : Start the Extract process.

GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 31> start EXTPP01

Sending START request to MANAGER ...
EXTRACT EXTPP01 starting

GGSCI (localhost as c##ogg@LOCALCDB1/CDB$ROOT) 49> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTPP01      00:00:00      00:01:48
REPLICAT    RUNNING     REPP001     00:00:00       00:00:00


Oracle GoldenGate: Understanding OGG-01161 Bad Column Index Error

The “OGG-01161 Bad Column Index” error is possibly one of the most reviled and misunderstood Oracle GoldenGate (OGG) error messages ever created.

Possible reasons for this error:

The source and target table structures are different.


2014-05-19 13:32:37 ERROR OGG-01161 Bad column index (58) specified for table TABLE1.OGG01161,

To identify this issue, log in to both the source and target databases and compare the definitions of the mentioned table; you will find the difference between them.
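As a quick way to spot the mismatch, you can diff the two column lists. The helper below is a hypothetical sketch, and the column lists are made up for illustration:

```python
def columns_missing_on_target(source_cols, target_cols):
    """Return (name, type) pairs present on the source but absent on the target."""
    target_names = {name for name, _ in target_cols}
    return [(name, dtype) for name, dtype in source_cols if name not in target_names]

# Hypothetical DESCribe output for TABLE1 on each side.
source = [("ID", "NUMBER"), ("NAME", "VARCHAR2(30)"), ("TYPE", "CHAR(1)")]
target = [("ID", "NUMBER"), ("NAME", "VARCHAR2(30)")]

missing = columns_missing_on_target(source, target)
for name, dtype in missing:
    # Each missing column suggests an ALTER on the target.
    print("ALTER TABLE table1 ADD (%s %s);" % (name.lower(), dtype))
```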

Solution :  

1. Alter the target table so that the source and target table definitions match.

alter table table1 add (type char(1));

2. Start the Replicat again; it should now work and the trail files will start moving forward again.

GGSCI > start replicat rep01
GGSCI > info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER    RUNNING

REPLICAT    RUNNING     REP01    00:00:08      00:00:00

GGSCI >


Now you will find that the current Replicat lag starts decreasing.


How to Move a Pluggable Database from One CDB to Another

From
Database Name            :    mdbpdb3
Container Database Name  :    mdbcdb
Data files directory     :    /app/oracle/mdbcdb/mdbpdb3
To
Database Name            :    mdbpdb_new
Container Database Name  :    ORCL
Data files directory     :    /app/oracle/mdbpdb_new

Task: unplug the pluggable database mdbpdb3 from the mdbcdb container database and plug it into another container database, ORCL, with the new name mdbpdb_new.

Move the data files from /app/oracle/mdbcdb/mdbpdb3 to the /app/oracle/mdbpdb_new directory.

     Connect to mdbcdb database


$ export ORACLE_SID=mdbcdb
$ sqlplus
Enter Username: / as sysdba
SQL>

     Close the pluggable database mdbpdb3

SQL> alter pluggable database mdbpdb3 close;

Pluggable database altered.

Unplug the database, specifying an XML file. This XML file will be used to plug the database into the target CDB.

SQL> alter pluggable database mdbpdb3 unplug into '/app/oracle/mdbpdb3.xml';
Pluggable database altered. 

Create target directory

$mkdir /app/oracle/mdbpdb_new

Connect to Target CDB ‘orcl’

$ export ORACLE_SID=orcl
$ sqlplus
Enter User: / as sysdba

               Start the target CDB

SQL> startup

SQL>CREATE PLUGGABLE DATABASE mdbpdb_new USING '/app/oracle/mdbpdb3.xml' MOVE
FILE_NAME_CONVERT=('/app/oracle/mdbcdb/mdbpdb3/','/app/oracle/mdbpdb_new/') STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M);

Pluggable database created.

SQL> show pdbs
CON_ID CON_NAME             OPEN MODE RESTRICTED
------ ------------------------------ ---------- ----------
1      mdbpdb_new                 MOUNTED

         Open the pluggable database mdbpdb_new

SQL> alter pluggable database mdbpdb_new open;


Pluggable database altered.

Cloning an Existing Pluggable Database

From MDBPDB1 to MDBPDB3.

Old Pluggable database name  :    MDBPDB1
Data files directory         :    /app/oracle/mdbcdb/mdbpdb1

New Pluggable database name  :    MDBPDB3
Data files directory         :    /app/oracle/mdbcdb/mdbpdb3

Create the directory
$mkdir /app/oracle/mdbcdb/mdbpdb3

$sqlplus
Enter User: / as sysdba
SQL>

Note: Before cloning, we need to close the source database mdbpdb1 and reopen it in read-only mode.

SQL> alter pluggable database mdbpdb1 close;
Pluggable database altered.

SQL> alter pluggable database mdbpdb1 open read only;
Pluggable database altered.

Then we need to issue the following command.

SQL>CREATE PLUGGABLE DATABASE mdbpdb3 FROM mdbpdb1 FILE_NAME_CONVERT = ('/app/oracle/mdbcdb/mdbpdb1/',
'/app/oracle/mdbcdb/mdbpdb3/') STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M);


Pluggable database created.

Creating a Pluggable Database from an Existing Pluggable Database

From MDBPDB1 to MDBPDB2

Old Pluggable database name  :    MDBPDB1
Data files directory         :    /app/oracle/mdbcdb/mdbpdb1

New Pluggable database name  :    MDBPDB2
Data files directory         :    /app/oracle/mdbcdb/mdbpdb2

Create the directory
$mkdir /app/oracle/mdbcdb/mdbpdb2

$sqlplus
Enter User: / as sysdba
SQL>

Then use the following commands.

SQL>CREATE PLUGGABLE DATABASE mdbpdb2 ADMIN USER mdbpdb2admin IDENTIFIED BY Admin123 STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M)
DEFAULT TABLESPACE users DATAFILE '/app/oracle/mdbcdb/mdbpdb2/users01.dbf'
SIZE 250M AUTOEXTEND ON FILE_NAME_CONVERT=('/app/oracle/mdbcdb/mdbpdb1/','/app/oracle/mdbcdb/mdbpdb2/');