Oracle GoldenGate: Understanding OGG-01161 Bad Column Index Error

The "OGG-01161 Bad Column Index" error is one of the most common and most misunderstood Oracle GoldenGate (OGG) error messages.

Possible cause of this error:

The source and target table structures differ: the trail record references a column index that does not exist in the target table definition.


2014-05-19 13:32:37 ERROR OGG-01161 Bad column index (58) specified for table TABLE1.OGG01161,

To identify the issue, log in to both the source and target databases, pull the definition of the table named in the error, and compare the two; the mismatch will show up in the column list.
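A quick way to spot the mismatch is to compare the column lists on both sides. The owner and table name below follow the sample error message; substitute your own:

```sql
-- Run on both the source and the target database, then diff the output.
-- Owner/table name taken from the sample OGG-01161 message above.
SELECT column_id, column_name, data_type, data_length
  FROM dba_tab_columns
 WHERE owner = 'TABLE1'
   AND table_name = 'OGG01161'
 ORDER BY column_id;
```

The column whose index appears in the error (58 in the example) will typically be missing or defined differently on the target.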

Solution:

1. Alter the target table so that its definition matches the source table. In this example, the target was missing a CHAR(1) column named TYPE:

alter table table1 add (type char(1));

2. Restart the Replicat. It should resume applying data, and the trail files will start moving again.

GGSCI > start replicat rep01
GGSCI > info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

REPLICAT    RUNNING     REP01       00:00:08      00:00:00

GGSCI >


You should now see the Replicat lag start to decrease as it works through the backlog.
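To watch the backlog drain, you can also query the Replicat directly (the group name follows the example above):

```
GGSCI > lag replicat rep01
GGSCI > stats replicat rep01, latest
```

LAG reports how far the Replicat is behind the trail; STATS with LATEST shows operation counts since the Replicat last started.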


How to Move a Pluggable Database from One CDB to Another

From
Pluggable database name    :    mdbpdb3
Container database name    :    mdbcdb
Data files directory       :    /app/oracle/mdbcdb/mdbpdb3

To
Pluggable database name    :    mdbpdb_new
Container database name    :    ORCL
Data files directory       :    /app/oracle/mdbpdb_new

Task: unplug pluggable database mdbpdb3 from container database mdbcdb and plug it into another container database, ORCL, under the new name mdbpdb_new.

The data files move from /app/oracle/mdbcdb/mdbpdb3 to /app/oracle/mdbpdb_new.

     Connect to the mdbcdb database

$ export ORACLE_SID=mdbcdb
$ sqlplus / as sysdba
SQL>

     Close the pluggable database mdbpdb3

SQL> alter pluggable database mdbpdb3 close;

Pluggable database altered.

Unplug the database, specifying an XML manifest file. This file describes the PDB and will be used to plug it into the target CDB.

SQL> alter pluggable database mdbpdb3 unplug into '/app/oracle/mdbpdb3.xml';
Pluggable database altered. 

Create target directory

$mkdir /app/oracle/mdbpdb_new

Connect to the target CDB ORCL

$ export ORACLE_SID=orcl
$ sqlplus / as sysdba
SQL>

               Start the target CDB

SQL> startup

SQL> CREATE PLUGGABLE DATABASE mdbpdb_new USING '/app/oracle/mdbpdb3.xml' MOVE
     FILE_NAME_CONVERT = ('/app/oracle/mdbcdb/mdbpdb3/', '/app/oracle/mdbpdb_new/')
     STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M);

Pluggable database created.

SQL> show pdbs
CON_ID CON_NAME             OPEN MODE RESTRICTED
------ ------------------------------ ---------- ----------
1      mdbpdb_new                 MOUNTED

         Open the pluggable database mdbpdb_new

SQL> alter pluggable database mdbpdb_new open;


Pluggable database altered.

Cloning an Existing Pluggable Database

From MDBPDB1 to MDBPDB3.

Source pluggable database name   :    MDBPDB1
Data files directory             :    /app/oracle/mdbcdb/mdbpdb1

New pluggable database name      :    MDBPDB3
Data files directory             :    /app/oracle/mdbcdb/mdbpdb3

Create the directory
$mkdir /app/oracle/mdbcdb/mdbpdb3

$ sqlplus / as sysdba
SQL>

Note: Before cloning, the source pluggable database mdbpdb1 must be closed and reopened in read-only mode.

SQL> alter pluggable database mdbpdb1 close;
Pluggable database altered.

SQL> alter pluggable database mdbpdb1 open read only;
Pluggable database altered.

Then we need to issue the following command.

SQL>CREATE PLUGGABLE DATABASE mdbpdb3 FROM mdbpdb1 FILE_NAME_CONVERT = ('/app/oracle/mdbcdb/mdbpdb1/',
'/app/oracle/mdbcdb/mdbpdb3/') STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M);


Pluggable database created.
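Two follow-up steps are worth noting (not shown in the listing above): the clone is created in MOUNTED mode and must be opened, and the source PDB is still read-only and can be returned to read-write:

```sql
SQL> alter pluggable database mdbpdb3 open;
SQL> alter pluggable database mdbpdb1 close;
SQL> alter pluggable database mdbpdb1 open;
```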

Creating a Pluggable Database from an Existing Pluggable Database

From MDBPDB1 to MDBPDB2

Source pluggable database name   :    MDBPDB1
Data files directory             :    /app/oracle/mdbcdb/mdbpdb1

New pluggable database name      :    MDBPDB2
Data files directory             :    /app/oracle/mdbcdb/mdbpdb2

Create the directory
$mkdir /app/oracle/mdbcdb/mdbpdb2

$ sqlplus / as sysdba
SQL>

Then use the following commands.

SQL> CREATE PLUGGABLE DATABASE mdbpdb2 ADMIN USER mdbpdb2admin IDENTIFIED BY Admin123
     STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M)
     DEFAULT TABLESPACE users DATAFILE '/app/oracle/mdbcdb/mdbpdb2/users01.dbf'
     SIZE 250M AUTOEXTEND ON
     FILE_NAME_CONVERT = ('/app/oracle/mdbcdb/mdbpdb1/', '/app/oracle/mdbcdb/mdbpdb2/');
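As with the clone, the new PDB starts out in MOUNTED mode and must be opened before use. SAVE STATE (available from 12.1.0.2 onward) is optional but keeps the PDB open across CDB restarts:

```sql
SQL> alter pluggable database mdbpdb2 open;
SQL> alter pluggable database mdbpdb2 save state;
```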

Oracle Manual Database Creation Script in Linux
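Before running the script below, the instance must be started in NOMOUNT mode with a parameter file in place. A minimal sketch (the SID and pfile path are placeholders matching the script):

```
$ export ORACLE_SID=SAMPLEDB
$ sqlplus / as sysdba
SQL> startup nomount pfile='/your/local/path/initSAMPLEDB.ora';
```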



CREATE DATABASE SAMPLEDB
   USER SYS IDENTIFIED BY YourPassword
   USER SYSTEM IDENTIFIED BY YourPassword
   LOGFILE GROUP 1 ('/your/local/path/redo01a.log','/your/local/path/redo01b.log') SIZE 100M BLOCKSIZE 512,
           GROUP 2 ('/your/local/path/redo02a.log','/your/local/path/redo02b.log') SIZE 100M BLOCKSIZE 512,
           GROUP 3 ('/your/local/path/redo03a.log','/your/local/path/redo03b.log') SIZE 100M BLOCKSIZE 512
   ARCHIVELOG
   CHARACTER SET AL32UTF8
   NATIONAL CHARACTER SET AL16UTF16
   EXTENT MANAGEMENT LOCAL
   DATAFILE '/your/local/path/system01.dbf' SIZE 325M REUSE
   SYSAUX DATAFILE '/your/local/path/sysaux01.dbf' SIZE 325M REUSE
   DEFAULT TABLESPACE users
      DATAFILE '/your/local/path/users01.dbf'
      SIZE 500M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
   DEFAULT TEMPORARY TABLESPACE temp
      TEMPFILE '/your/local/path/temp01.dbf'
      SIZE 20M REUSE
   UNDO TABLESPACE UNDOTBS1
      DATAFILE '/your/local/path/undotbs01.dbf'
      SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
/
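After CREATE DATABASE succeeds, the data dictionary views and standard PL/SQL packages still have to be built with the standard post-creation scripts:

```sql
SQL> @?/rdbms/admin/catalog.sql   -- data dictionary views and synonyms
SQL> @?/rdbms/admin/catproc.sql   -- standard PL/SQL packages
SQL> connect system/YourPassword
SQL> @?/sqlplus/admin/pupbld.sql  -- SQL*Plus product user profile
```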



Oracle XML Database Creation

XDB Removal :
The catnoqm.sql script drops XDB.
SQL> spool xdb_removal_1.log
SQL> set echo on;
SQL> connect / as sysdba
SQL> shutdown immediate;
SQL> startup
SQL> @?/rdbms/admin/catnoqm.sql
SQL> spool off;


XDB Installation :
The catqm.sql script requires the following parameters be passed to it when run:
A. XDB user password
B. XDB user default tablespace
(Any tablespace other than SYSTEM, UNDO and TEMP can be specified.
The specified tablespace must already exist prior to running the script.)
C. XDB user temporary tablespace
D. SecureFiles = YES or NO
(If YES is specified, the XDB repository will use SecureFile storage.
 If NO is specified, LOBS will be used.
To use SecureFiles, compatibility must be set to 11.2.
The tablespace specified for the XDB repository must be using
Automatic Segment Space Management (ASSM) for SecureFiles to be used.)
Therefore the syntax to run catqm.sql is:
SQL> @?/rdbms/admin/catqm.sql A B C D

For Example:
SQL> @?/rdbms/admin/catqm.sql XDB XDB TEMP YES
## IMPORTANT: You must shutdown and restart the database between removal and reinstall ##
SQL> spool xdb_install.log
SQL> set echo on;
SQL> connect / as sysdba
SQL> shutdown immediate;
SQL> startup;
SQL> @?/rdbms/admin/catqm.sql <XDB pwd> <XDB default tbs> <XDB temporary tbs> <SecureFiles = YES/NO>
SQL> @?/rdbms/admin/utlrp.sql
SQL> spool off

Verify XDB Installation
SQL> spool xdb_status.txt
SQL> set echo on;
SQL> connect / as sysdba
SQL> set pagesize 1000
SQL> col comp_name format a36
SQL> col version format a12
SQL> col status format a8
SQL> col owner format a12
SQL> col object_name format a35
SQL> col name format a25

-- Check status of  Xml DB (XDB)
SQL> select comp_name, version, status from dba_registry
     where comp_id = 'XDB';

-- Check for invalid objects owned by Xml DB (XDB)
SQL> select owner, object_name, object_type, status from dba_objects
     where status = 'INVALID' and owner = 'XDB';
SQL> spool off;

AWR Report Explanation

Cache Sizes (end): This shows the size of each SGA region after AMM has changed them. This information can be compared to the original init.ora parameters at the end of the AWR report.

Load Profile: This important section shows key activity rates, expressed per second and per transaction.

Instance Efficiency Percentages: With a target of 100%, these are high-level ratios for activity in the SGA.

Shared Pool Statistics: This is a good summary of changes to the shared pool during the snapshot period.

Top 5 Timed Events: This is the most important section in the AWR report.  It shows the top wait events and can quickly show the overall database bottleneck.

Wait Events Statistics Section: This section shows a breakdown of the main wait events in the database including foreground and background database wait events as well as time model, operating system, service, and wait classes statistics.

Wait Events: This AWR report section provides more detailed wait event information for foreground user processes which includes Top 5 wait events and many other wait events that occurred during the snapshot interval.

Background Wait Events: This section is relevant to the background process wait events.

Time Model Statistics: Time model statistics report how database processing time is spent. This section contains detailed timing information on particular components participating in database processing.

Operating System Statistics: The stress on the Oracle server is important, and this section shows the main external resources including I/O, CPU, memory, and network usage.

Service Statistics: The service statistics section gives information about how particular services configured in the database are operating.

SQL Section: This section displays top SQL, ordered by important SQL execution metrics.

SQL Ordered by Elapsed Time: Includes SQL statements that took significant execution time during processing.

SQL Ordered by CPU Time: Includes SQL statements that consumed significant CPU time during its processing.

SQL Ordered by Gets: These SQLs performed a high number of logical reads while retrieving data.

SQL Ordered by Reads: These SQLs performed a high number of physical disk reads while retrieving data.

SQL Ordered by Parse Calls: These SQLs experienced a high number of reparsing operations.

SQL Ordered by Sharable Memory: Includes SQL statement cursors which consumed a large amount of SGA shared pool memory.

SQL Ordered by Version Count: These SQLs have a large number of versions in shared pool for some reason.

Instance Activity Stats: This section contains statistical information describing how the database operated during the snapshot period.

Instance Activity Stats (Absolute Values): This section contains statistics that have absolute values not derived from end and start snapshots.

Instance Activity Stats (Thread Activity): This report section reports a log switch activity statistic.

I/O Section: This section shows I/O activity for the instance, broken down by tablespace and data file, and includes buffer pool statistics.
Tablespace IO Stats
File IO Stats
Buffer Pool Statistics 
Advisory Section: This section shows details of the advisories for the buffer cache, shared pool, PGA, and Java pool.
Buffer Pool Advisory
PGA Aggr Summary: PGA Aggr Target Stats; PGA Aggr Target Histogram; and PGA Memory Advisory. 
Shared Pool Advisory
Java Pool Advisory 
Buffer Wait Statistics: This important section shows buffer cache waits statistics.

Enqueue Activity: This important section shows how enqueues operate in the database. Enqueues are special internal structures which provide concurrent access to various database resources.

Undo Segment Summary: This section gives a summary about how undo segments are used by the database.

Undo Segment Stats: This section shows detailed history information about undo segment activity.

Latch Activity: This section shows details about latch statistics. Latches are a lightweight serialization mechanism that is used to single-thread access to internal Oracle structures.
Latch Sleep Breakdown
Latch Miss Sources
Parent Latch Statistics
Child Latch Statistics

Segment Section: This report section provides details about hot segments using the following criteria:
Segments by Logical Reads: Includes top segments which experienced high number of logical reads.
Segments by Physical Reads: Includes top segments which experienced high number of disk physical reads.
Segments by Buffer Busy Waits: These segments have the largest number of buffer waits caused by their data blocks.
Segments by Row Lock Waits: Includes segments that had a large number of row locks on their data.
Segments by ITL Waits: Includes segments that had a large contention for Interested Transaction List (ITL). The contention for ITL can be reduced by increasing INITRANS storage parameter of the table.

Dictionary Cache Stats: This section exposes details about how the data dictionary cache is operating.

Library Cache Activity: Includes library cache statistics describing how shared library objects are managed by Oracle.

SGA Memory Summary: This section provides summary information about various SGA regions.


init.ora Parameters: This section shows the original init.ora parameters for the instance during the snapshot period.