Thursday, March 19, 2026

GoldenGate Extract Patterns: Online, Downstream & Cascaded

Online, Registered, Downstream, and Cascaded Downstream Extract Patterns in Oracle GoldenGate

Untangle the terms first, because Oracle mixes topology words and lifecycle words

Oracle documentation uses online Extract for continuous change synchronization, registered Extract for a group that has been through the database-side preparation step required by integrated capture (the REGISTER EXTRACT command), and downstream Extract for a topology where mining happens on another database. These are related, but they are not interchangeable.

An online Extract is the change-data process created with ADD EXTRACT when the source is a transaction log rather than source tables. Oracle GoldenGate explicitly describes this as an online group with checkpoints so processing continuity is maintained from run to run. A registered Extract, by contrast, is not a separate runtime executable. It is the database registration of a primary Extract group so integrated capture can exist and, in multitenant estates, so the intended PDB scope is known. A downstream Extract is still a primary integrated Extract, but the database logmining server runs on a mining database rather than the source. Cascaded downstream adds a physical standby hop between the source and the mining database.

Online

The Extract is reading change history continuously, not running as an initial load task. In Oracle terms, it is an online change-synchronization process that uses checkpoints.

Registered

The integrated Extract is known to the database. Registration establishes integrated capture and, when BEGIN NOW is used, effectively fixes the start boundary at registration time.

Downstream

The source still generates redo, but the mining database owns the logmining server, archive landing rules, and some of the most important failure dependencies.

Boundary Rule

Do not call a data pump a registered Extract. Registration applies to a primary Oracle Extract group that needs integrated capture, not to a trail-to-trail pump.

Start-Point Rule

For an integrated Extract, BEGIN NOW does not mean "the moment I added the group" in the same way it does for a classic online Extract. Oracle ties that begin point to registration time.

Operational Rule

If a capture design uses REGISTER EXTRACT, treat registration SCN, mining location, and fetch identity as first-class design objects. They are not implementation trivia.

How the four labels fit together

1. Primary source database: generates redo and owns supplemental logging.
2. Online upstream Extract (the direct path): registration and mining are source-side.
3. Downstream mining database (the offload alternative): registration uses both source and mining connections.
4. Physical standby / ADG: optional hop for cascaded transport and read-only fetch access.
5. Cascaded downstream Extract (fed through the standby hop): mining still happens on the mining database, not on the standby.
6. Primary trail (written in every pattern): commit-ordered output for the Distribution Service or downstream delivery.


Choose the right pattern by asking where redo is mined and where fetches happen

The best way to select among these patterns is to ignore marketing labels and ask five operator questions: where is redo mined, which database do you register against, which database does Extract fetch from, what network path carries redo, and what breaks recovery.

Online primary Extract
  Mining location: source-side logmining server.
  Registration target: source database or PDB scope.
  Runtime database identity: USERIDALIAS to source.
  Typical reason to choose it: simplest integrated design with the fewest moving parts.
  Main tradeoff: capture overhead and fetch load stay close to the source estate.

Registered Extract
  Mining location: same as the topology it belongs to.
  Registration target: source for upstream; source plus mining context for downstream.
  Runtime database identity: the same identity family that will own the Extract.
  Typical reason to choose it: mandatory control point for integrated capture, SCN start, and multitenant scope.
  Main tradeoff: wrong registration user, scope, or timing causes confusing start behavior later.

Downstream Extract
  Mining location: mining database.
  Registration target: registered with source context while also logging in to the mining database.
  Runtime database identity: USERIDALIAS to source, MININGDBLOGIN and MININGUSERALIAS to the mining database, optional source fetch identity.
  Typical reason to choose it: offload mining from the production server and isolate capture services.
  Main tradeoff: redo transport, archive landing, and mining database recovery become part of capture correctness.

Cascaded downstream Extract
  Mining location: mining database.
  Registration target: the ADG standby serves as the source-side registration and fetch endpoint.
  Runtime database identity: NOUSERID on Extract, MININGUSERALIAS to the mining database, FETCHUSERIDALIAS to ADG.
  Typical reason to choose it: reduce direct source connectivity and use a nearby standby to forward redo.
  Main tradeoff: more Data Guard dependencies, no per-PDB downstream capture with ADG, and stricter transport design.

When Online Wins

Favor the online primary pattern if simplicity is worth more than offload

Use it when source-side overhead is acceptable, the source database can be reached directly for registration and fetches, and you want the smallest blast radius for troubleshooting.

When Downstream Wins

Favor downstream if redo mining must move off the source host

Use it when the production server is sensitive, when you want mining state isolated, or when database version decoupling between source and mining database is operationally useful.

When Cascaded Wins

Use cascaded downstream only when the standby hop solves a real problem

The pattern is justified when source connectivity is constrained or when a nearby physical standby reduces network cost and source-side transport overhead.

What Registration Solves

Registration is about correctness, not performance vanity

It establishes integrated capture, container scope, and optional SCN or sharing choices. If registration is wrong, the rest of the configuration can be syntactically valid and still be operationally wrong.

Build an online primary Extract when direct source capture is the right default

This is the pattern most teams should understand first. The source database is reachable, Extract is created as a continuous online integrated Extract, and the source database itself owns the logmining server that feeds the Extract process.

Oracle GoldenGate treats this as an online change-synchronization Extract rather than a source-table initial load task. The source database must be in ARCHIVELOG mode, ENABLE_GOLDENGATE_REPLICATION must be enabled, and the database objects that matter must have supplemental logging in place. In current multitenant practice, per-PDB Extract is the cleaner default unless the design truly requires root-level scope or you are configuring downstream capture.

Step 1: Prepare the source database before thinking about Extract syntax

Enable Oracle GoldenGate replication, place the source in archive log mode, and enable the supplemental logging level that matches the objects you intend to capture. If this step is partial, later registration and trail output will still exist, but data completeness will not.
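That preparation can be sketched in SQL. This is a minimal outline, assuming SYSDBA access and database-level supplemental logging; the exact logging level should match the objects you plan to capture:

```sql
-- enable GoldenGate capture support at the instance level
ALTER SYSTEM SET enable_goldengate_replication = TRUE SCOPE=BOTH;

-- confirm ARCHIVELOG mode; enabling it requires a mount-state restart
ARCHIVE LOG LIST

-- minimum database-level supplemental logging plus forced logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE FORCE LOGGING;
```

Object-level logging is then usually handled from the GoldenGate side with ADD TRANDATA or ADD SCHEMATRANDATA against the schemas named in the TABLE clauses.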

Step 2: Log in with the same identity that will own integrated capture

For integrated Extract, the identity in the parameter file is not just a runtime convenience. Oracle expects the same privilege-bearing user family to handle DBLOGIN, registration, and the running Extract.
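The credential aliases used throughout this article (gg_ledger_pdb and the gg_root_* family) must already exist in the deployment credential store before DBLOGIN can resolve them. A minimal sketch, with an illustrative database user name:

```
ALTER CREDENTIALSTORE ADD USER ggadmin@ledger_pdb ALIAS gg_ledger_pdb
DBLOGIN USERIDALIAS gg_ledger_pdb
```

The first command prompts for the password when it is not supplied inline. If a DOMAIN is declared in the parameter file, as in the downstream examples later, the same DOMAIN must be given when the alias is created.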

Step 3: Register first, then add the Extract group

Oracle explicitly requires registration before creating an integrated Extract. If you remember only one sequencing rule from this article, remember this one.

Step 4: Add the local trail only after the primary group exists

The trail definition is bound to the named Extract that writes it. Trail configuration follows group creation, not the other way around.

Admin Client commands
DBLOGIN USERIDALIAS gg_ledger_pdb
REGISTER EXTRACT exledg DATABASE

ADD EXTRACT exledg, INTEGRATED TRANLOG, BEGIN NOW
ADD EXTTRAIL ledger/ea, EXTRACT exledg

EDIT PARAMS exledg
START EXTRACT exledg
Parameter file exledg.prm
EXTRACT exledg
USERIDALIAS gg_ledger_pdb
EXTTRAIL ledger/ea
DDL INCLUDE MAPPED
TABLE acctg.*;
Why This Order Matters

With integrated Extract, registration is what creates the database-side capture binding. ADD EXTRACT ... INTEGRATED TRANLOG assumes that binding already exists.

Multitenant Consequence

If you connect at the PDB level for a per-PDB Extract, do not add a root-style container clause out of habit. That often signals that the ownership model is already confused.

Operational Payoff

The online primary pattern gives the cleanest restart story because redo generation, registration, fetch, and mining all sit inside the same source-side trust boundary.

Registration exists
  Check: INFO EXTRACT exledg after start, plus registration review in the report.
  Healthy result: the group is not just a stopped object in the deployment; it is a real integrated primary Extract.

Container scope is correct
  Check: INFO EXTRACT exledg, CONTAINERS.
  Healthy result: the PDB list matches the intended capture scope instead of silently capturing too much or too little.

Read and write checkpoints exist
  Check: INFO EXTRACT exledg, SHOWCH.
  Healthy result: the Extract has both a source-side read position and a trail-side write position, which is what makes normal restart predictable.

Approximate lag is interpretable
  Check: INFO EXTRACT exledg.
  Healthy result: the lag shown is checkpoint-based, so it is usable only when you remember it reflects the last checkpointed record timestamp rather than end-to-end business latency.

Treat registration as a control boundary, not as a cosmetic step before startup

"Registered Extract" sounds like a flavor of process. In practice it is a lifecycle state that changes what the database believes about the Extract and where the earliest valid start point lives.

Oracle uses REGISTER EXTRACT to enable integrated capture for a primary Extract and to define multitenant options such as container scope. The command is not valid for a data pump. From Oracle GoldenGate 21.3 onward, specifying the database name is no longer mandatory in the Oracle syntax, but the command still must be issued before ADD EXTRACT. That ordering matters even more than many operators realize because BEGIN NOW for integrated Extract is interpreted from registration time rather than simply from the later moment the group is added.

Registration Does

Enable integrated capture, define PDB scope, optionally set SCN, and optionally define LogMiner dictionary sharing behavior for multiple Extracts.

Registration Does Not

Create the trail, define the running parameter file, or make a data pump meaningful. Those are separate objects and separate commands.

Registration Mistake

Registering with the wrong user, wrong container scope, or wrong timing often produces later symptoms that look like lag or missing data even though the real fault is earlier.

Registration command examples
-- non-CDB or a per-PDB Extract after logging into the PDB
REGISTER EXTRACT exacct DATABASE

-- root-level multitenant scope for selected PDBs
REGISTER EXTRACT exroot DATABASE CONTAINER (PDBACCT, PDBBILL)

-- optional explicit SCN and dictionary-share choice when you know why you need them
REGISTER EXTRACT exhist DATABASE CONTAINER (PDBACCT)
  SCN 948372615
  SHARE AUTOMATIC
Start Boundary

For integrated Extract, a later ADD EXTRACT ... BEGIN NOW still resolves "now" to the registration moment. If registration happened earlier than you intended, so did your effective earliest capture boundary.

User Boundary

The Extract identity should match the user family that issues DBLOGIN and REGISTER EXTRACT. Mixing aliases casually is an easy way to create privilege drift and unpredictable maintenance behavior.

Drop-Container Surprise

In multitenant scope changes, dropping containers from registration is not fully complete until Extract runs and passes the required checkpoint boundary. Treat scope removal as a controlled lifecycle event, not an instant metadata edit.
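Scope maintenance is itself a registration command. A hedged sketch against the root-registered example group above; the login alias and container names are illustrative:

```
DBLOGIN USERIDALIAS gg_root_src
REGISTER EXTRACT exroot DATABASE ADD CONTAINER (PDBNEW)
REGISTER EXTRACT exroot DATABASE DROP CONTAINER (PDBBILL)
```

Because of the checkpoint-boundary behavior described above, schedule the DROP CONTAINER run while Extract can actually process past the boundary, and verify afterwards with INFO EXTRACT exroot, CONTAINERS.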

Per-PDB registration
  Why it exists: narrow ownership and blast radius for multitenant capture.
  What can go wrong: teams accidentally keep using root conventions and over-capture.
  Healthy operator habit: use a PDB-local alias and omit container syntax when the Extract is truly per-PDB.

Root-level registration
  Why it exists: capture across named PDBs or support downstream capture designs.
  What can go wrong: common-user sprawl becomes the default even when it is unnecessary.
  Healthy operator habit: use root only when the topology requires it and document the exact container list.

SCN-based registration
  Why it exists: backdate integrated capture to a valid dictionary-build boundary.
  What can go wrong: an invalid or unavailable SCN causes setup delay or forces rework.
  Healthy operator habit: use SCN only when you have confirmed the dictionary-build boundary and the required archives exist.

Shared dictionary registration
  Why it exists: reduce LogMiner dictionary duplication across multiple Extracts.
  What can go wrong: blindly sharing without understanding the other Extract lifecycles.
  Healthy operator habit: keep shared Extracts documented as an intentional group, not as isolated one-off processes.

Offload mining with downstream Extract when the source should generate redo but not mine it

Downstream Extract is still integrated Extract for Oracle, but the database logmining server lives on a separate mining database. Redo transport therefore becomes part of capture design, not an unrelated storage concern.

A downstream mining database can accept archived logs and, in real-time mining mode, standby redo logs from the source database. Oracle's current guidance is explicit on several points that matter immediately in production: the mining database must be the same database version or higher than the source, the source and downstream servers must use the same endianness and processor type, the mining database and source must be in archive log mode, and the downstream database must have distinct identity settings such as DBID, DBNAME, DB_UNIQUE_NAME, and GLOBAL_NAME. For Oracle 21c and lower, STREAMS_POOL_SIZE must also be set on the mining database.

Database Prep

Prepare two databases, not one

The source still needs supplemental logging and redo generation settings. The mining database needs archive behavior, archive destinations, and logmining readiness.

Transport Prep

Design redo landing explicitly

Foreign archive logs should be kept separate from local mining database logs and must not use the mining database recovery area for staging.
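One common shape for that landing design, sketched as SQL with illustrative DB_UNIQUE_NAMEs (dbsrc, dbmine) and an illustrative path; your destination numbers and transport attributes will differ:

```sql
-- both databases: make the two names known to redo transport
ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(dbsrc,dbmine)' SCOPE=BOTH;

-- source database: ship online redo to the mining database without registering it there
ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=dbmine ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmine' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_3 = ENABLE SCOPE=BOTH;

-- mining database: land foreign logs in a dedicated location, never the recovery area
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/ogg_foreign_arch VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=BOTH;
```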

Credential Prep

Use both source and mining identities

Registration and runtime both rely on the distinction between source login, mining database login, and optional fetch login.

Mode Prep

Decide real-time or archive mining up front

If you want real-time downstream mining, standby redo logs on the mining database must be configured correctly and the Extract must set the integrated parameter that enables it.
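The standby redo log piece can be sketched as below, run on the mining database. A common rule of thumb is to size the groups like the source online logs and to configure one more group per source thread than the source has; the paths, sizes, and group count here are illustrative:

```sql
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('/u01/oradata/dbmine/srl01.log') SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('/u01/oradata/dbmine/srl02.log') SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('/u01/oradata/dbmine/srl03.log') SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 ('/u01/oradata/dbmine/srl04.log') SIZE 1G;
```

The Extract side of the switch is the TRANLOGOPTIONS INTEGRATEDPARAMS (DOWNSTREAM_REAL_TIME_MINE Y) setting that appears in the downstream parameter file.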

Readiness checks (SQL)
-- source database
SHOW PARAMETER enable_goldengate_replication
ARCHIVE LOG LIST
SELECT force_logging, open_mode, database_role
FROM   v$database;

-- mining database
SHOW PARAMETER enable_goldengate_replication
ARCHIVE LOG LIST
SHOW PARAMETER streams_pool_size
SELECT force_logging, open_mode, database_role
FROM   v$database;

-- verify that standby redo logs and archive destinations are ready for downstream mining
SELECT group#, thread#, sequence#, archived, status
FROM   v$standby_log
ORDER  BY thread#, group#;

SELECT dest_id, status, type, database_mode, recovery_mode
FROM   v$archive_dest_status
ORDER  BY dest_id;
Admin Client commands
DBLOGIN USERIDALIAS gg_root_src
MININGDBLOGIN USERIDALIAS gg_root_mine

REGISTER EXTRACT exdwn DATABASE CONTAINER (PDBLEDGER)

ADD EXTRACT exdwn, INTEGRATED TRANLOG, BEGIN NOW
ADD EXTTRAIL dstream/ea, EXTRACT exdwn

EDIT PARAMS exdwn
START EXTRACT exdwn
Parameter file exdwn.prm
EXTRACT exdwn
USERIDALIAS gg_root_src DOMAIN OGGDOM
TRANLOGOPTIONS MININGUSERALIAS gg_root_mine DOMAIN OGGDOM
TRANLOGOPTIONS INTEGRATEDPARAMS (DOWNSTREAM_REAL_TIME_MINE Y)
FETCHUSERIDALIAS gg_root_src DOMAIN OGGDOM
EXTTRAIL dstream/ea
SOURCECATALOG PDBLEDGER
DDL INCLUDE MAPPED
TABLE finops.*;
Real-Time Restriction

Oracle allows only one real-time downstream Extract on a downstream database for a given source database. Additional Extracts for that same source must mine archives rather than real-time redo.

Foreign Archive Storage

Do not stage foreign archive logs in the mining database fast recovery area. Keep them in their own filesystem location and manage their lifecycle deliberately.

Mining Login Boundary

The user used for MININGDBLOGIN must match the mining user declared for the Extract. If those identities drift apart, registration and runtime will fail in ways that are hard to diagnose from summary status alone.

Archive mining only
  Use it when: latency can tolerate archive arrival and you prefer simpler transport dependencies.
  Operational consequence: capture starts only after redo is archived, so the lag floor is usually higher, but the transport model is easier to reason about.

Real-time mining
  Use it when: lower latency matters enough to justify standby redo log design on the mining database.
  Operational consequence: standby redo logs and redo transport health become part of capture correctness, and only one real-time Extract per source is allowed on that downstream database.

Direct source fetch identity
  Use it when: some tables or operations need fetch support beyond pure redo capture.
  Operational consequence: the runtime design still depends on a live source or source-equivalent fetch endpoint even though mining is offloaded.

Use cascaded downstream Extract with Active Data Guard only when the standby hop is intentional

Cascaded downstream is not "capture from ADG." Oracle is explicit that Extract cannot capture from a Data Guard standby or Active Data Guard standby that is open read-only. The mining database still performs capture. The standby exists to forward redo and to provide read-only metadata and fetch access.

In this design, a physical standby database cascades redo to the mining database. Oracle Data Guard defines cascading as redo transport from a standby to a terminal destination and places important restrictions on it: only physical standbys can cascade redo, a cascading standby can feed up to 30 terminal destinations, and real-time cascading requires the Active Data Guard option. Oracle GoldenGate then layers an Extract design on top of that transport topology. When the source database cannot be reached directly, GoldenGate can use the ADG standby for registration redirection, metadata queries, and fetches while the mining database continues to own integrated capture.

Cascaded downstream topology

Primary database: generates redo and owns actual DML.
    |  redo ship
    v
Physical standby / ADG: forwards redo and supplies read-only metadata and fetch access.
    |  redo cascade
    v
Mining database: hosts the logmining server and the downstream Extract.
Parameter file excas.prm
EXTRACT excas
NOUSERID
TRANLOGOPTIONS MININGUSERALIAS gg_root_mine DOMAIN OGGDOM
TRANLOGOPTIONS INTEGRATEDPARAMS (DOWNSTREAM_REAL_TIME_MINE Y)
FETCHUSERIDALIAS gg_root_adg DOMAIN OGGDOM
EXTTRAIL cascade/ea
SOURCECATALOG PDBLEDGER
DDL INCLUDE MAPPED
TABLE finops.*;
Admin Client commands
DBLOGIN USERIDALIAS gg_root_adg
MININGDBLOGIN USERIDALIAS gg_root_mine

REGISTER EXTRACT excas DATABASE

ADD EXTRACT excas, INTEGRATED TRANLOG, BEGIN NOW
ADD EXTTRAIL cascade/ea, EXTRACT excas
START EXTRACT excas
NOUSERID on Extract
  What it means: runtime capture is sourceless from the primary database perspective.
  Why it matters: you are telling Extract not to log in directly to the primary source database during normal runtime.

FETCHUSERIDALIAS to ADG
  What it means: read-only fetches and metadata queries go to the standby.
  Why it matters: without this, non-native fetch requirements become a design hole rather than a runtime detail.

ADG registration redirection
  What it means: REGISTER EXTRACT and selected database commands are redirected through the standby.
  Why it matters: this is what makes the design workable when a direct source connection is unavailable.

No per-PDB support in this ADG mode
  What it means: the downstream-capture-with-ADG flow is not available for per-PDB Extract.
  Why it matters: this pushes the design toward root-level common-user handling when you choose cascaded downstream.

Redirection Scope

ADG redirection is available for commands and features such as TRANDATA, SCHEMATRANDATA, FLUSH SEQUENCE, TRACETABLE, HEARTBEATTABLE, and REGISTER EXTRACT.

Version Gate

Oracle documents ADG redirection for this downstream registration model only on Oracle Database 21c and higher. If your estate is older, do not assume the redirection path exists.

Design Misunderstanding

The standby is not the capture database. If teams treat ADG as the capture host, they eventually write wrong runbooks for restart, wrong archive retention rules, and wrong post-failover expectations.

Physical Standby Rule

Only a physical standby can cascade redo. A logical standby is not a valid substitute for this transport pattern.

License Rule

Real-time cascading is an Active Data Guard capability. If you cannot use that option, the design falls back toward non-real-time archive movement.

Transport Rule

Non-real-time cascading is limited to destinations 1 through 10, while real-time cascading is supported across all destination slots. Plan the archive destination layout accordingly.

Verify the pattern you built, not the pattern you think you built

Most GoldenGate mistakes in this area are not syntax errors. They are topology mismatches. The Extract exists, the trail exists, and yet the surrounding database state does not match the intended pattern. Verification has to cross both GoldenGate and database boundaries.

GoldenGate verification commands
INFO EXTRACT exledg
INFO EXTRACT exledg, SHOWCH
INFO EXTRACT exledg, CONTAINERS
SEND EXTRACT exledg, STATUS
VIEW REPORT exledg

INFO EXTRACT exdwn
INFO EXTRACT exdwn, SHOWCH
SEND EXTRACT exdwn, STATUS
VIEW REPORT exdwn
Database verification queries (SQL)
-- determine whether any source tables require fetch support
SELECT owner, object_name, support_mode
FROM   dba_goldengate_support_mode
WHERE  owner = 'FINOPS'
ORDER  BY object_name;

-- inspect archive transport health and destination role
SELECT dest_id, status, type, database_mode, recovery_mode
FROM   v$archive_dest_status
ORDER  BY dest_id;

-- verify standby redo logs on a mining database used for real-time downstream capture
SELECT group#, thread#, sequence#, archived, status
FROM   v$standby_log
ORDER  BY thread#, group#;
The group is truly online
  Where to look: INFO EXTRACT group.
  Healthy signal: status is no longer STARTING and the trail binding is visible.
  Interpretation: the process has moved beyond mere creation and is running as a continuous Extract.

Checkpoint continuity exists
  Where to look: INFO EXTRACT group, SHOWCH.
  Healthy signal: both read and write checkpoints are present and advancing sensibly.
  Interpretation: this confirms restart state in the source stream and in the trail, which matters more than a simple lag number.

Container registration is right
  Where to look: INFO EXTRACT group, CONTAINERS.
  Healthy signal: the listed PDBs match your intended scope.
  Interpretation: especially important after root-level registration or later add/drop container maintenance.

Lag is being read correctly
  Where to look: INFO EXTRACT group.
  Healthy signal: you treat Checkpoint Lag as approximate and checkpoint-based.
  Interpretation: low lag does not automatically prove low end-to-end replication latency; it only proves checkpoint recency.

Transport is healthy for downstream
  Where to look: V$ARCHIVE_DEST_STATUS.
  Healthy signal: destination status is valid and the role or recovery mode makes sense for the topology.
  Interpretation: downstream incidents often begin as redo transport problems long before GoldenGate reports a capture symptom.

Real-time mining prerequisites exist
  Where to look: V$STANDBY_LOG.
  Healthy signal: standby redo logs exist and show the expected lifecycle state.
  Interpretation: without this, a real-time downstream design silently degrades into a broken design rather than a slightly slower one.

NOUSERID is safe for the table set
  Where to look: DBA_GOLDENGATE_SUPPORT_MODE.
  Healthy signal: captured objects do not unexpectedly require fetch-only support, or a fetch alias exists.
  Interpretation: this is the cleanest way to catch sourceless downstream misdesign before runtime fetch failures do it for you.

Expect the failure paths to cluster around registration, fetch, and redo transport

These patterns usually fail at the boundaries between systems, not inside the easy syntax. Registration is done against the wrong database, the mining login does not match the parameter file, foreign archives land in the wrong place, or a supposedly sourceless design still needs fetches for unsupported columns.

Symptom: ADD EXTRACT ... INTEGRATED TRANLOG works poorly or start behavior is unexpected.
  Likely root cause: registration timing or scope was wrong, so the effective begin boundary is not what you intended.
  Inspect first: registration history, the report file, and whether registration happened long before group creation.
  Likely correction: rebuild with the intended registration moment and correct scope instead of chasing phantom lag.

Symptom: downstream Extract exists but never behaves like real-time mining.
  Likely root cause: standby redo logs or downstream real-time parameterization are missing.
  Inspect first: V$STANDBY_LOG, the Extract parameter file, and the mining database archive configuration.
  Likely correction: fix the platform prerequisites before tuning Extract itself.

Symptom: cascaded downstream Extract abends on fetch-related operations.
  Likely root cause: NOUSERID was used without a viable ADG fetch identity, or the table set includes objects that need fetch support.
  Inspect first: FETCHUSERIDALIAS, DBA_GOLDENGATE_SUPPORT_MODE, and report messages.
  Likely correction: add or repair the fetch identity, or narrow the table set to objects that are redo-complete.

Symptom: transport looks healthy but downstream registration fails.
  Likely root cause: the MININGDBLOGIN identity and TRANLOGOPTIONS MININGUSERALIAS are inconsistent, or the source and mining privilege models differ.
  Inspect first: credential aliases, registration commands, and the current privilege grant model for the database release.
  Likely correction: align the identity used for MININGDBLOGIN with the declared mining user and grant the correct privileges for that database version.

Symptom: post-upgrade privilege procedures stop working.
  Likely root cause: the estate moved into the newer role-based privilege model.
  Inspect first: the database release level and whether DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE is still valid for that database.
  Likely correction: grant OGG_CAPTURE for Extract on Oracle AI Database 26ai and higher instead of relying on disabled package procedures.

Symptom: operators keep trying to capture directly from ADG.
  Likely root cause: the topology was misunderstood; ADG is being confused with the mining database.
  Inspect first: the current parameter file, the runtime login path, and the redo transport map.
  Likely correction: redraw the topology and re-state that mining is on the downstream database while ADG is a metadata and fetch endpoint.

Version Rule

From Oracle GoldenGate 21.3 onward, Oracle registration syntax no longer requires the database name. That is a syntax simplification only. It does not relax the need to register before adding integrated Extract.

Multitenant Rule

Release guidance in the 23.9 and 26ai notes shifts normal practice toward per-PDB Extract and away from root-level common-user Extract, with downstream capture remaining the major exception.

Privilege Rule

Oracle AI Database 26ai and higher use role-based GoldenGate privileges such as OGG_CAPTURE. Continuing to call the older package procedures in those releases does not grant the intended privileges.
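Under the role-based model the grant is a plain GRANT rather than a package call. A sketch, assuming an Extract owner named ggadmin:

```sql
-- role-based capture privilege on releases that support it
GRANT OGG_CAPTURE TO ggadmin;
```

On older releases the equivalent step was EXECUTE DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN'), which is exactly the package procedure the rule above warns against on newer releases.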

Choose the topology first, then align registration, fetch identity, and transport with that choice

If you want the smallest and clearest design, build an online primary integrated Extract and register it directly against the source boundary that truly owns the captured objects. If you need mining offload, move to a downstream mining database and design redo transport, archive landing, and mining credentials as part of the Extract architecture rather than as separate infrastructure chores. If you need to remove direct source connectivity, only then move to a cascaded downstream pattern with Active Data Guard and accept the extra Data Guard rules that come with it.

The key distinction is that registration is not a deployment style by itself. It is the control-plane step that makes integrated capture real. Once that is clear, the four labels stop competing with each other: online describes the continuous Extract role, registered describes the integrated capture state, downstream describes where mining happens, and cascaded downstream describes how redo reaches the mining database and where fetches are served.
