{"id":2732,"date":"2012-08-21T12:50:46","date_gmt":"2012-08-21T10:50:46","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/how-to-add-new-tables-in-existing-oracle-goldengate-replication\/"},"modified":"2022-12-14T10:21:47","modified_gmt":"2022-12-14T09:21:47","slug":"how-to-add-new-tables-in-existing-oracle-goldengate-replication","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/how-to-add-new-tables-in-existing-oracle-goldengate-replication\/","title":{"rendered":"How to add new tables in existing Oracle GoldenGate replication"},"content":{"rendered":"<p>Introduction<\/p>\n<p>Some time after setting up a replication environment, it often becomes necessary to add further tables to the replication, where the<br \/>\nadditional tables have dependencies on the currently replicated tables. This task must be implemented without disturbing the existing replication environment.<\/p>\n<p>Explanation<\/p>\n<p>In this example, the tables below from schema G001 on database PROD1 will be added to an existing replication from PROD1 to REP1:<\/p>\n<p>CFG_ADV_COND<br \/>\nCFG_NARRATIVE_TEMPLATE<br \/>\nCFG_REG_REPORT_RULES<br \/>\nCMN_LOOKUP<br \/>\nCMN_USER_LOGIN<\/p>\n<p>Stop the replication environment<br \/>\n================================<\/p>\n<p>\u2022\u00a0\u00a0\u00a0 Stop the extract groups<\/p>\n<p>Connect to the server hosting the source database PROD1 and stop all extract groups for the replication to the database REP1:<\/p>\n<p>oracle@server1:~\/ [PROD1] PROD1<br \/>\noracle@server1:~\/ [PROD1] cdgh<br \/>\noracle@server1:\/u99\/app\/goldengate\/gss\/11.1.1.1.0\/ [PROD1] ggsci<\/p>\n<p>GGSCI (server1) 1&gt; info all<\/p>\n<p>Program\u00a0\u00a0\u00a0\u00a0 Status\u00a0\u00a0\u00a0\u00a0\u00a0 Group\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Lag\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Time Since Chkpt<br \/>\nMANAGER\u00a0\u00a0\u00a0\u00a0 RUNNING<br \/>\nEXTRACT\u00a0\u00a0\u00a0\u00a0 
RUNNING\u00a0\u00a0\u00a0\u00a0 DPG001\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:00\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:06<br \/>\nEXTRACT\u00a0\u00a0\u00a0\u00a0 RUNNING\u00a0\u00a0\u00a0\u00a0 G001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:00\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:05<\/p>\n<p>GGSCI (server1) 2&gt; stop extract *<br \/>\nProgram\u00a0\u00a0\u00a0\u00a0 Status\u00a0\u00a0\u00a0\u00a0\u00a0 Group\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Lag\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Time Since Chkpt<\/p>\n<p>MANAGER\u00a0\u00a0\u00a0\u00a0 RUNNING<br \/>\nEXTRACT\u00a0\u00a0\u00a0\u00a0 STOPPED\u00a0\u00a0\u00a0\u00a0 DPG001\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:00\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:01<br \/>\nEXTRACT\u00a0\u00a0\u00a0\u00a0 STOPPED\u00a0\u00a0\u00a0\u00a0 G001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:00\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:13<\/p>\n<p>\u2022\u00a0\u00a0\u00a0 Stop the replication groups<\/p>\n<p>Connect to the server hosting the target database REP1 and stop all replicat groups receiving data from the database PROD1:<\/p>\n<p>GGSCI (server2) 1&gt; info all<\/p>\n<p>Program\u00a0\u00a0\u00a0\u00a0 Status\u00a0\u00a0\u00a0\u00a0\u00a0 Group\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Lag\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Time Since Chkpt<br \/>\nMANAGER\u00a0\u00a0\u00a0\u00a0 RUNNING<br \/>\nREPLICAT\u00a0\u00a0\u00a0 RUNNING\u00a0\u00a0\u00a0\u00a0 PROD1\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:00\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:04<\/p>\n<p>GGSCI (server2) 2&gt; stop REPLICAT PROD1<br \/>\nGGSCI (server2) 3&gt; info all<\/p>\n<p>Program\u00a0\u00a0\u00a0\u00a0 Status\u00a0\u00a0\u00a0\u00a0\u00a0 Group\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Lag\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Time Since Chkpt<br \/>\nMANAGER\u00a0\u00a0\u00a0\u00a0 RUNNING<br \/>\nREPLICAT\u00a0\u00a0\u00a0 STOPPED\u00a0\u00a0\u00a0\u00a0 
PROD1\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:00\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:04<\/p>\n<p>Update the extract environment with the new tables<br \/>\n==================================================<\/p>\n<p>Now we can update the extract group G001 on the source database with the new tables to be replicated.\u00a0 Edit the G001 parameter file (vi editor) and add the new tables:<\/p>\n<p>GGSCI (server1) 30&gt; edit params G001<br \/>\n. . .<br \/>\ntable G001SCHEMA.CFG_ADV_COND;<br \/>\ntable G001SCHEMA.CFG_NARRATIVE_TEMPLATE;<br \/>\ntable G001SCHEMA.CFG_REG_REPORT_RULES;<br \/>\ntable G001SCHEMA.CMN_LOOKUP;<br \/>\ntable G001SCHEMA.CMN_USER_LOGIN;<\/p>\n<p>Enable supplemental logging on the source database PROD1 for the newly added tables:<\/p>\n<p>GGSCI (server1) 7&gt; DBLOGIN userid goldengate, password ******<br \/>\nSuccessfully logged into database.<\/p>\n<p>GGSCI (server1) 8&gt; add trandata G001SCHEMA.CFG_ADV_COND<br \/>\nLogging of supplemental redo data enabled for table G001SCHEMA.CFG_ADV_COND.<br \/>\nGGSCI (server1) 9&gt; add trandata G001SCHEMA.CFG_NARRATIVE_TEMPLATE<br \/>\nLogging of supplemental redo data enabled for table G001SCHEMA.CFG_NARRATIVE_TEMPLATE.<br \/>\nGGSCI (server1) 10&gt; add trandata G001SCHEMA.CFG_REG_REPORT_RULES<br \/>\nLogging of supplemental redo data enabled for table G001SCHEMA.CFG_REG_REPORT_RULES.<br \/>\nGGSCI (server1) 11&gt; add trandata G001SCHEMA.CMN_LOOKUP<br \/>\nLogging of supplemental redo data enabled for table G001SCHEMA.CMN_LOOKUP.<br \/>\nGGSCI (server1) 13&gt; add trandata G001SCHEMA.CMN_USER_LOGIN<br \/>\nLogging of supplemental redo data enabled for table G001SCHEMA.CMN_USER_LOGIN.<\/p>\n<p>The source database is now configured to extract the additional five tables, and we can restart the corresponding extract groups:<\/p>\n<p>GGSCI (server1) 30&gt; start extract *<br \/>\nGGSCI (server1) 31&gt; info all<br \/>\nProgram\u00a0\u00a0\u00a0\u00a0 Status\u00a0\u00a0\u00a0\u00a0\u00a0 
Group\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Lag\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Time Since Chkpt<br \/>\nMANAGER\u00a0\u00a0\u00a0\u00a0 RUNNING<br \/>\nEXTRACT\u00a0\u00a0\u00a0\u00a0 RUNNING\u00a0\u00a0\u00a0\u00a0 DPG001\u00a0\u00a0\u00a0\u00a0\u00a0 00:23:02\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:03<br \/>\nEXTRACT\u00a0\u00a0\u00a0\u00a0 RUNNING\u00a0\u00a0\u00a0\u00a0 G001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:00\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:06<\/p>\n<p>Initial Load from the new tables<br \/>\n================================<\/p>\n<p>Since transactions on the additional tables are now extracted on the source database PROD1, the initial load (expdp\/impdp) of the data into the target database can be started.<\/p>\n<p>But the export must be taken at a specific transaction point (SCN), in order to begin the replication on the target database from this same SCN. Thus we read the current_scn from the source database:<\/p>\n<p>SQL&gt;\u00a0 select current_scn from v$database;<\/p>\n<p>CURRENT_SCN<br \/>\n&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br \/>\n1657153626<\/p>\n<p>Create an expdp parfile with the SCN selected above and the additional tables to replicate:<\/p>\n<p>oracle@server1:~\/app\/oracle\/admin\/PROD1\/create\/goldengate\/add_tables [PROD1] cat expdp_additional_tables.par<\/p>\n<p>flashback_scn=1657153626<br \/>\nSCHEMAS=G001SCHEMA<br \/>\nDUMPFILE=export_tables_G001SCHEMA.dmp<br \/>\nLOGFILE=export_tables_G001SCHEMA.log<br \/>\nINCLUDE=TABLE:&#8221;IN(&#8216;CFG_ADV_COND&#8217;,&#8217;CFG_NARRATIVE_TEMPLATE&#8217;,&#8217;CFG_REG_REPORT_RULES&#8217;,&#8217;CMN_LOOKUP&#8217;,&#8217;CMN_USER_LOGIN&#8217;)&#8221;<br \/>\nDIRECTORY=DATAPUMPDIR<\/p>\n<p>Start the export of the above tables as user system:<\/p>\n<p>oracle@server1:~\/app\/oracle\/admin\/PROD1\/create\/goldengate\/ [PROD1] expdp parfile=expdp_additional_tables.par<\/p>\n<p>Username: system<br 
\/>\nPassword:<\/p>\n<p>Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 &#8211; 64bit Production<br \/>\nWith the Partitioning, OLAP, Data Mining and Real Application Testing options<br \/>\nFLASHBACK automatically enabled to preserve database integrity.<br \/>\nStarting &#8220;SYSTEM&#8221;.&#8221;SYS_EXPORT_SCHEMA_01&#8243;:\u00a0 system\/******** parfile=expdp_additional_tables.par<br \/>\nEstimate in progress using BLOCKS method&#8230;<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/TABLE_DATA<br \/>\nTotal estimation using BLOCKS method: 21.25 MB<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/TABLE<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/GRANT\/OWNER_GRANT\/OBJECT_GRANT<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/INDEX\/INDEX<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/CONSTRAINT\/CONSTRAINT<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/INDEX\/STATISTICS\/INDEX_STATISTICS<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/CONSTRAINT\/REF_CONSTRAINT<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/STATISTICS\/TABLE_STATISTICS<br \/>\n. . exported &#8220;G001SCHEMA&#8221;.&#8221;CMN_USER_LOGIN&#8221;\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 8.735 MB\u00a0 286163 rows<br \/>\n. . exported &#8220;G001SCHEMA&#8221;.&#8221;CFG_ADV_COND&#8221;\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 1.523 MB\u00a0\u00a0\u00a0 1617 rows<br \/>\n. . exported &#8220;G001SCHEMA&#8221;.&#8221;CFG_REG_REPORT_RULES&#8221;\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 100.5 KB\u00a0\u00a0\u00a0\u00a0 714 rows<br \/>\n. . exported &#8220;G001SCHEMA&#8221;.&#8221;CFG_NARRATIVE_TEMPLATE&#8221;\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 9.789 KB\u00a0\u00a0\u00a0\u00a0\u00a0 85 rows<br \/>\n. . 
exported &#8220;G001SCHEMA&#8221;.&#8221;CMN_LOOKUP&#8221;\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 6.585 KB\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 4 rows<br \/>\nMaster table &#8220;SYSTEM&#8221;.&#8221;SYS_EXPORT_SCHEMA_01&#8243; successfully loaded\/unloaded<br \/>\n******************************************************************************<br \/>\nDump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:<br \/>\n\/u00\/app\/oracle\/admin\/PROD1\/dmp\/export_ARGUS_tables_G001SCHEMA.dmp<br \/>\nJob &#8220;SYSTEM&#8221;.&#8221;SYS_EXPORT_SCHEMA_01&#8243; successfully completed at 11:01:22<\/p>\n<p>Copy the dump file from server1 to server2:<\/p>\n<p>oracle@server1:~\/app\/oracle\/admin\/PROD1\/create\/goldengate\/ [PROD1]<br \/>\nscp \/u00\/app\/oracle\/admin\/PROD1\/dmp\/export_tables_G001SCHEMA.dmp oracle@server2:~\/app\/oracle\/admin\/REP1\/dmp\/<br \/>\nexport_tables_G001SCHEMA.dmp\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 100%\u00a0\u00a0 11MB\u00a0 10.7MB\/s\u00a0\u00a0 00:00<\/p>\n<p>Create an impdp parfile to load the tables from the above dump.<\/p>\n<p>Take care: the correct REMAP and EXCLUDE parameters must be configured based on your environment requirements.<\/p>\n<p>oracle@server2:~\/app\/oracle\/admin\/REP1\/create\/goldengate\/add_tables\/ [REP1] cat impdp_PROD1_tables.par<\/p>\n<p>DUMPFILE=export_tables_G001SCHEMA.dmp<br \/>\nLOGFILE=import_tables_G001SCHEMA.log<br \/>\nREMAP_SCHEMA=G001SCHEMA:G001_PROD1<br \/>\nREMAP_TABLESPACE=DATA_01:PROD1_DATA<br \/>\nREMAP_TABLESPACE=DATA_02:PROD1_DATA<br \/>\nREMAP_TABLESPACE=INDEX_01:PROD1_DATA<br \/>\nREMAP_TABLESPACE=INDEX_02:PROD1_DATA<br \/>\nDIRECTORY=DATAPUMPDIR<br \/>\nEXCLUDE=GRANT<br \/>\nEXCLUDE=CONSTRAINT<br \/>\nEXCLUDE=REF_CONSTRAINT<\/p>\n<p>Start the import with impdp as user system, using the above parameter 
file<\/p>\n<p>oracle@server2:~\/app\/oracle\/admin\/REP1\/create\/goldengate\/add_tables\/ [REP1] impdp parfile=impdp_PROD1_tables.par<\/p>\n<p>Username: system<br \/>\nPassword:<\/p>\n<p>Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 &#8211; 64bit Production<br \/>\nWith the Partitioning, Oracle Label Security, OLAP, Data Mining,<br \/>\nOracle Database Vault and Real Application Testing options<br \/>\nMaster table &#8220;SYSTEM&#8221;.&#8221;SYS_IMPORT_FULL_01&#8243; successfully loaded\/unloaded<br \/>\nStarting &#8220;SYSTEM&#8221;.&#8221;SYS_IMPORT_FULL_01&#8243;:\u00a0 system\/******** parfile=impdp_PROD1_tables.par<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/TABLE<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/TABLE_DATA<br \/>\n. . imported &#8220;G001_PROD1&#8243;.&#8221;CMN_USER_LOGIN&#8221;\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 8.735 MB\u00a0 286163 rows<br \/>\n. . imported &#8220;G001_PROD1&#8243;.&#8221;CFG_ADV_COND&#8221;\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 1.523 MB\u00a0\u00a0\u00a0 1617 rows<br \/>\n. . imported &#8220;G001_PROD1&#8243;.&#8221;CFG_REG_REPORT_RULES&#8221;\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 100.5 KB\u00a0\u00a0\u00a0\u00a0 714 rows<br \/>\n. . imported &#8220;G001_PROD1&#8243;.&#8221;CFG_NARRATIVE_TEMPLATE&#8221;\u00a0\u00a0\u00a0\u00a0 9.789 KB\u00a0\u00a0\u00a0\u00a0\u00a0 85 rows<br \/>\n. . 
imported &#8220;G001_PROD1&#8243;.&#8221;CMN_LOOKUP&#8221;\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 6.585 KB\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 4 rows<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/INDEX\/INDEX<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/INDEX\/STATISTICS\/INDEX_STATISTICS<br \/>\nProcessing object type SCHEMA_EXPORT\/TABLE\/STATISTICS\/TABLE_STATISTICS<br \/>\nJob &#8220;SYSTEM&#8221;.&#8221;SYS_IMPORT_FULL_01&#8243; successfully completed at 11:49:06<\/p>\n<p>Add the table list with filters to the replication group and start the replication<br \/>\n=========================================================================<\/p>\n<p>Add the new tables to replicate to the replication group PROD1, using the SCN from the expdp.<\/p>\n<p>Take care: in the GoldenGate configuration file we speak of CSN rather than SCN, but both hold the same information (the System Change Number).<\/p>\n<p>oracle@server2:~\/app\/goldengate\/ggs\/11.1.1.1.0\/[REP1] ggsci<\/p>\n<p>GGSCI (server2)&gt; edit params PROD1<\/p>\n<p>REPLICAT PROD1<br \/>\nASSUMETARGETDEFS<br \/>\nUSERID goldengate, PASSWORD *****<br \/>\nDISCARDFILE \/u00\/app\/goldengate\/ggs\/11.1.1.1.0\/discard\/PROD1_discard.txt, append, megabytes 10<br \/>\nMAP G001schema.CFG_ADV_COND\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 , TARGET G001_PROD1.CFG_ADV_COND\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ,FILTER ( @GETENV (&#8220;TRANSACTION&#8221;, &#8220;CSN&#8221;) &gt; 1657153626);<br \/>\nMAP G001schema.CFG_NARRATIVE_TEMPLATE , TARGET G001_PROD1.CFG_NARRATIVE_TEMPLATE\u00a0 ,FILTER ( @GETENV (&#8220;TRANSACTION&#8221;, &#8220;CSN&#8221;) &gt; 1657153626);<br \/>\nMAP G001schema.CFG_REG_REPORT_RULES\u00a0\u00a0 , TARGET G001_PROD1.CFG_REG_REPORT_RULES\u00a0\u00a0\u00a0 ,FILTER ( @GETENV (&#8220;TRANSACTION&#8221;, &#8220;CSN&#8221;) &gt; 1657153626);<br \/>\nMAP 
G001schema.CMN_LOOKUP\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 , TARGET G001_PROD1.CMN_LOOKUP\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ,FILTER ( @GETENV (&#8220;TRANSACTION&#8221;, &#8220;CSN&#8221;) &gt; 1657153626);<br \/>\nMAP G001schema.CMN_USER_LOGIN\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 , TARGET G001_PROD1.CMN_USER_LOGIN\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ,FILTER ( @GETENV (&#8220;TRANSACTION&#8221;, &#8220;CSN&#8221;) &gt; 1657153626);<br \/>\nMAP G001schema.*, TARGET G001_PROD1.*<\/p>\n<p>Start the replication group with the new tables:<\/p>\n<p>GGSCI (server2) 2&gt; start replicat PROD1<\/p>\n<p>Sending\u00a0 START request to MANAGER &#8230;<br \/>\nREPLICAT PROD1 starting<\/p>\n<p>Check the configuration until the synchronization is finished:<\/p>\n<p>GGSCI (server2) 1&gt; info all<br \/>\nProgram\u00a0\u00a0\u00a0\u00a0 Status\u00a0\u00a0\u00a0\u00a0\u00a0 Group\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Lag\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Time Since Chkpt<br \/>\nMANAGER\u00a0\u00a0\u00a0\u00a0 RUNNING<br \/>\nREPLICAT\u00a0\u00a0\u00a0 RUNNING\u00a0\u00a0\u00a0\u00a0 PROD1\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:00\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:00<\/p>\n<p>Stop the replication group PROD1 and restart it after removing the additional temporary replication filters<br \/>\n==============================================================================================================<\/p>\n<p>GGSCI (server2) 4&gt; stop replicat PROD1<br \/>\nRestore the original replication group PROD1 parameter file, without the filters for the new tables:<\/p>\n<p>oracle@server2:~\/app\/goldengate\/ggs\/11.1.1.1.0\/[REP1] ggsci<\/p>\n<p>GGSCI (server2) 2&gt; edit params PROD1<\/p>\n<p>REPLICAT PROD1<br \/>\nASSUMETARGETDEFS<br \/>\nUSERID goldengate, PASSWORD *****<br \/>\nDISCARDFILE \/u00\/app\/goldengate\/ggs\/11.1.1.1.0\/discard\/PROD1_discard.txt, append, 
megabytes 10<br \/>\nMAP G001schema.*, TARGET G001_PROD1.*<\/p>\n<p>GGSCI (server2) 3&gt; start REPLICAT PROD1<br \/>\nGGSCI (server2) 4&gt; info all<\/p>\n<p>Program\u00a0\u00a0\u00a0\u00a0 Status\u00a0\u00a0\u00a0\u00a0\u00a0 Group\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Lag\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Time Since Chkpt<br \/>\nMANAGER\u00a0\u00a0\u00a0\u00a0 RUNNING<br \/>\nREPLICAT\u00a0\u00a0\u00a0 RUNNING\u00a0\u00a0\u00a0\u00a0 PROD1\u00a0\u00a0\u00a0\u00a0 00:00:00\u00a0\u00a0\u00a0\u00a0\u00a0 00:00:00<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Often some times after having setting up a replication environment, it is needed to add additional tables to the replication, where the additional tables have dependency with the current replicated tables. 
Written by: Oracle Team