Sunday, March 15, 2026

Oracle GoldenGate Microservices: Services, Deployments & Access Patterns

Define the architecture boundary before you touch a deployment wizard

The most useful modern distinction is between the control plane and the replication runtime. Microservices Architecture is a REST-oriented control and access model wrapped around the same durable GoldenGate responsibilities you already know: capture, trail persistence, transport, and apply.

That sounds obvious, but it prevents two common mistakes. First, operators often treat Service Manager as if it were the equivalent of a classic Manager process inside one replication estate. It is not. It is the host-level watchdog and entry point for one or more deployments. Second, operators often treat a deployment as merely a directory tree. It is more than that: it is the security, port, runtime-state, and service boundary that defines how users, services, and replication processes live together.

Why MA Exists

It standardizes management around browser UI, Admin Client, and REST endpoints, which means process control and topology management no longer depend on logging into the host that owns the binaries.

What Does Not Change

Extract still captures, trails still provide durable handoff, and Replicat still owns apply semantics. Microservices changes the administration surface more than the replication physics.

What Becomes Explicit

Routing becomes a named service responsibility, inbound landing becomes a named service responsibility, and deployment users become clearly scoped instead of informally shared.

Browser UI

Operator-friendly access to Service Manager and the deployment-local services.

Admin Client

Command-line entry point that issues service requests through the published REST surfaces.

REST Consumers

Automation, orchestration, or custom tooling that drives the same interfaces directly.
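Since all three clients ride the same REST surfaces, a REST consumer is just an HTTP call. A minimal sketch, composed as a dry run rather than executed: the host, port, and user below are placeholders, while `/services/v2/deployments` is the Service Manager's deployment-inventory endpoint.

```shell
#!/bin/sh
# Dry-run sketch of a REST consumer listing deployments from Service Manager.
# Host, port, and user are illustrative placeholders, not real endpoints.
SM_HOST="gg-host.example.net"
SM_PORT="9100"
SM_USER="sm_admin"

list_deployments_cmd() {
  # Compose (not execute) the inventory request against the SM REST surface.
  printf 'curl -s -u %s https://%s:%s/services/v2/deployments\n' \
    "$SM_USER" "$SM_HOST" "$SM_PORT"
}

list_deployments_cmd
```

The same request is what the browser UI and Admin Client issue under the hood; automation simply skips the intermediaries.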

One host, one Service Manager, multiple deployments is the normal baseline

That baseline keeps upgrade and maintenance simpler, while preserving separate deployment-level users, ports, state, and process inventories.

Host perspective

`core_west` deployment

Owns its service endpoints, local users, parameters, credential store, directories, and replication objects.

Administration
Distribution
Receiver
Performance Metrics

`edge_finance` deployment

Lives under the same Service Manager but remains a separate security and runtime island.

Administration
Distribution
Receiver
Performance Metrics

Capture: Extract writes local trails.
Route: Distribution owns outbound paths.
Land: Receiver owns incoming trails.
Apply: Replicat is under Administration.
Observe: Metrics centralizes telemetry.

Map host scope and deployment scope correctly or the rest of the design stays fuzzy

Microservices operations become clean when you decide whether a concern belongs to the host or to the deployment. That single distinction explains most login behavior, most directory choices, and most listener exposure decisions.

| Scope | Owned by | What lives here | Operational consequence |
| --- | --- | --- | --- |
| Host scope | Service Manager & OGG installation | Service launch policy, host inventory, Service Manager admin surface, binaries, optional daemon/XAG integration | Patch planning and host lifecycle decisions are made here, not inside an individual deployment. |
| Deployment scope | The deployment home & endpoints | Administration, Distribution, Receiver, Metrics, deployment users, local state, GGSCHEMA | Two deployments on the same host can share binaries but must not be treated as one security/runtime domain. |
| Path scope | Distribution & Receiver | Transport direction, protocol choice, authentication method, encryption selection, trail landing location | Connectivity failures are often path-design failures, not service-instability failures. |
| Process scope | Administration Service | Extract/Replicat creation, start/stop, registration, status, reports, checkpoint administration | Replication process control belongs in Administration Service even when routing belongs elsewhere. |
Host-Level Rule

Keep one Service Manager per host unless your HA design forces a different shape

The standard recommendation is a single Service Manager per host to reduce maintenance. That is the right default for standalone installations. The caveat is that some XAG-based HA patterns intentionally collapse Service Manager and deployment into a 1:1 relationship so each pair gets its own VIP.

Deployment-Level Rule

Never place deployment homes under `OGG_HOME`

The deployment home is runtime territory, not software-install territory. Keeping it outside `OGG_HOME` protects upgrade clarity, makes customized directory placement sane, and prevents operators from mixing installed binaries with deployment-owned state.
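That separation is easy to encode in provisioning scripts. A minimal sketch: `OGG_ETC_HOME`, `OGG_VAR_HOME`, and `OGG_DATA_HOME` are the real OGGCA directory variables, but every path and mount point below is an illustrative assumption.

```shell
#!/bin/sh
# Sketch: runtime state lives outside OGG_HOME. All paths are examples.
OGG_HOME="/u01/app/oracle/product/26/oggma"   # installed binaries only
DEP_BASE="/u02/ogg_deployments/core_west"     # deployment home, elsewhere

OGG_ETC_HOME="$DEP_BASE/etc"             # parameters and configuration
OGG_VAR_HOME="$DEP_BASE/var"             # logs, checkpoints, runtime state
OGG_DATA_HOME="/u03/trails/core_west"    # trails on their own storage class

# Guard worth scripting into provisioning: refuse state under OGG_HOME.
check_placement() {
  case "$1" in
    "$OGG_HOME"/*) echo "BAD: deployment state under OGG_HOME" ;;
    *)             echo "OK: deployment state outside OGG_HOME" ;;
  esac
}

check_placement "$OGG_ETC_HOME"
```

A check like this in the provisioning pipeline catches the mistake before the first deployment is created, when it is still free to fix.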

Common Misread

If someone says "the GoldenGate admin user," stop and ask whether they mean the Service Manager administrator or a deployment administrator. Those are not automatically the same identity, and the wrong assumption shows up later as an authorization problem that looks like a network issue.


Read the services by ownership, not by UI label

Current Oracle material uses both older "Server" wording and newer "Service" wording across the documentation set. The terminology drift matters less than the ownership boundary. The service responsibilities themselves are stable.

| Component | Scope | What it should own | What it should not be mistaken for |
| --- | --- | --- | --- |
| Service Manager | Host | Deployment inventory, start/stop control, SM user management, diagnostic entry point | It is not the place to manage Extract/Replicat detail. It supervises the services that do that. |
| Administration Service | Deployment | Extract/Replicat lifecycle, parameters, registration, credential stores, `MASTERKEY` | It is not a transport engine. It coordinates replication objects and operational control. |
| Distribution Service | Deployment | Outbound path creation, trail dispatch, protocol choice, remote transport | It is not a transformation tier. Treat it as a transport and routing service. |
| Receiver Service | Deployment | Inbound trail handling, incoming path stats, target-initiated path definition | It is not a Replicat replacement. It lands trail data so downstream apply can proceed. |
| Performance Metrics Service | Deployment | Central metrics collection, drill-down views, telemetry, JSON/XML metrics access | It is not the administration state store. Treat metrics separately from process state. |
| Admin Client | Client | Command-line control of the microservices via REST, wallet-backed access | It is not a separate management plane. It is a CLI on top of the same service model. |
Service Manager

The host watchdog

Its job is to know which deployments exist on the host, whether their services are up, and how to reach them.

Administration

Replication control

Where Extract and Replicat ownership lives. If checking lag, checkpoints, or credential stores, you are here.

Distribution

The routing engine

It replaces the old habit of multiplying data pumps. One service handles concurrent outbound streams.
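"One service, many paths" shows up directly in the Distribution Service REST surface. A dry-run sketch, assuming a hypothetical host, port, user, and path names; the `/services/v2/sources/<path>` endpoint shape follows the published distribution-path API, but treat the details as illustrative.

```shell
#!/bin/sh
# Dry-run sketch: several outbound paths on ONE Distribution Service.
DIST_URL="https://gg-host.example.net:9102"   # illustrative endpoint

create_path_cmd() {
  # $1 = path name; composes (does not execute) a POST against the path API
  printf 'curl -s -u dist_admin -X POST %s/services/v2/sources/%s\n' \
    "$DIST_URL" "$1"
}

create_path_cmd path_to_edge   # stream 1
create_path_cmd path_to_dr     # stream 2: same service, no extra pump
```

Each additional target is another named path on the same service, which is the whole point of retiring the pump-per-target habit.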

Receiver

The landing point

Gives the target side a named service boundary for inbound paths, and pull-style target-initiated designs.

Performance Metrics

Telemetry plane

Centralizes process metrics rather than forcing you to infer health entirely from live process pages.

Client Surfaces

Browser, CLI, REST

The browser is not the "easy path." All clients are alternate consumers of the exact same REST surfaces.

Understand what a deployment really is before you design ports, users, and state directories

A deployment is the runtime package that makes GoldenGate Microservices usable for an actual replication estate. It is not just a name in Service Manager. It is the chosen security mode, endpoint set, directory layout, user scope, and metrics-store shape for a specific administrative island.

01

Security mode is a foundational choice

The secure versus non-secure decision is not cosmetic. Once configured, you do not flip an existing deployment from one mode to the other. Certificate planning and listener exposure need to be resolved before deployment.

02

Deployment home placement is a runtime decision

The deployment home must stay outside `OGG_HOME`. Current OGGCA flows let you customize the runtime directories (`OGG_ETC_HOME`, `OGG_VAR_HOME`, `OGG_DATA_HOME`, etc.). Use that flexibility deliberately when storage classes differ.

03

Ports define the access surface of the deployment

Each deployment allocates ports for the microservices. That is how direct service access works, how reverse proxy rules are generated, and how operators discover what they accidentally exposed.

04

User scope is per deployment

OGGCA lets you reuse the Service Manager admin credentials, but that is only a convenience. Deployment users created later remain local to that deployment and do not become host-wide administrators.

05

Metrics and supporting services belong in the early design

Creation flow forces you to think about the Performance Metrics data store, optional StatsD, and plugins. Those settings shape operability and compliance posture.
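The port guidance above is easiest to enforce with a convention rather than ad-hoc picks. A sketch, with purely illustrative base values: one 10-port band per deployment keeps firewall review and proxy mapping mechanical.

```shell
#!/bin/sh
# Sketch: predictable per-deployment port bands. Base values are examples.
base_port_for() {
  # one 10-port band per deployment index: 9110, 9120, 9130, ...
  echo $((9110 + $1 * 10))
}

ports_for_deployment() {
  b=$(base_port_for "$1")
  echo "admin=$b dist=$((b + 1)) recv=$((b + 2)) metrics=$((b + 3))"
}

ports_for_deployment 0   # e.g. core_west
ports_for_deployment 1   # e.g. edge_finance
```

With a convention like this, an operator reading a firewall rule can tell which deployment and service it belongs to without opening Service Manager.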

| Deployment decision | Why it matters | What to validate before you finalize it |
| --- | --- | --- |
| Secure vs Non-Secure | Locks in certificate and transport expectations. | Whether you will use server certificates directly, front with a reverse proxy, or keep access restricted internally. |
| Deployment Home | Separates installed software from runtime state. | That the path is not under `OGG_HOME`, aligns to your storage policy, and can host customized state directories. |
| Service Ports | Defines direct reachability and firewall rules. | No collisions, predictable numbering, and alignment with reverse proxy models. |
| Deployment Admin | Controls who manages Extract, Replicat, and paths. | Whether shared credentials with Service Manager help or whether separation of duty matters more. |
Identity Scope

SM users vs Deployment users

A Service Manager administrator can log into Service Manager, but that doesn't automatically grant access to deployment microservices unless explicitly configured. Users created in Administration Service are deployment-scoped only.

Monitoring Scope

Metrics are a deployment concern

The Performance Metrics Service belongs to the deployment. Data-store location and access patterns should be agreed upon before the estate grows; retrofitting metrics placement is difficult.
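Telemetry is also reachable over REST, which is what makes it scriptable into dashboards. A dry-run sketch: hostname, port, and user are placeholders, and the monitoring-point endpoint shape follows the published Performance Metrics REST surface.

```shell
#!/bin/sh
# Dry-run sketch: reading monitoring points from the metrics service.
PMSRVR_URL="https://gg-host.example.net:9113"   # illustrative endpoint

mpoints_cmd() {
  # Composes (does not execute) the metrics query.
  printf 'curl -s -u metrics_reader %s/services/v2/mpoints\n' "$PMSRVR_URL"
}

mpoints_cmd
```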

Choose the right access pattern instead of exposing every port and hoping governance catches up

Microservices gives you several legitimate ways in: direct browser access, Admin Client, raw REST, reverse proxy, and target-initiated pathing for constrained networks. The right answer depends on who needs access, from where, and who is allowed to initiate transport sessions.

Pattern A

Direct Service Access

Best for lab work or tightly controlled internal administration. Every service keeps its own port, and Service Manager acts as the directory.

Pattern B

Reverse Proxy Front Door

Best when operators want one public listener and do not want to expose a separate port per microservice. GoldenGate ships a utility that generates the NGINX configuration for you.

Pattern C

Target-Initiated Path

Best when the target side can call back to the source Distribution Service, but the source side cannot open inbound sessions through a firewall.

| Access pattern | Good fit | Primary caution |
| --- | --- | --- |
| Browser to direct ports | Initial provisioning, internal troubleshooting | Easy to overshare on the network if every listener is reachable well beyond the intended admin segment. |
| Admin Client | Repeatable CLI administration, automation | Operators often confuse database `USERIDALIAS` storage with deployment-login credential storage. They are different. |
| REST clients | Automation and orchestration stacks | Concurrent changes are real. The browser and automation can both touch the deployment; control is required. |
| NGINX reverse proxy | Single-entry admin surface, simpler edge exposure | When proxy-terminated TLS is used, lock down origin listeners. mTLS is not supported through reverse proxy. |
| Target-initiated path | DMZ/cloud constraints where the target must pull | The source trail must already exist. It changes who opens the session, not whether outbound trail generation is needed. |
CLI Admin Client access with explicit deployment target
cd $OGG_HOME/bin
./adminclient

ADD CREDENTIALS dep_core USER ma_admin PASSWORD "<deployment-password>"
CONNECT <service-manager-endpoint> DEPLOYMENT core_west AS dep_core

That pattern matters because it keeps the login target explicit. You are not "connecting to GoldenGate" in a vague sense. You are connecting to a specific Service Manager endpoint and telling Admin Client which deployment context to enter.

CLI Reverse proxy utility entry point
cd $OGG_HOME/lib/utl/reverseproxy
./ReverseProxySettings --help
./ReverseProxySettings -u sm_admin -t nginx -o ogg.conf http://gg-edge-01.example.net:9100

Use that model when you want one external listener instead of one exposed port per service. If you front an unsecured internal deployment with proxy-terminated TLS, restrict direct origin access aggressively.
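For orientation, this is roughly the shape of a per-service routing block the generated NGINX configuration contains. The fragment is hand-written and illustrative only (deployment name, port, and path are assumptions); generate the real file with the utility rather than writing it by hand.

```shell
#!/bin/sh
# Illustrative only: the kind of location block ReverseProxySettings emits.
emit_location() {
  cat <<'EOF'
location /core_west/adminsrvr/ {
    # TLS terminates at the proxy; traffic forwards to the local listener,
    # which is why origin ports must still be locked down separately.
    proxy_pass https://127.0.0.1:9111/;
}
EOF
}

emit_location
```

Reading the generated file before deploying it is a cheap way to audit exactly which origin listeners the proxy will touch.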

Build a sane deployment flow so the architecture stays clean after day one

The wizard matters because it encodes the runtime contract. A careless first deployment becomes the template that later operators clone mentally, even when they never reuse the exact response file.

A

Launch OGGCA from the software home

Run `oggca.sh`. The first run creates the Service Manager. Later runs usually add new deployments to the existing Service Manager.

B

Choose the host identity deliberately

When defining the Service Manager hostname/IP, decide whether this environment is local-only, internal-reachable, or fully resolved.

C

Decide lifecycle management

Manual startup is for labs. Daemon mode is the normal persistent model. XAG belongs to clustered deployments.

D

Lock security mode early

Decide on direct certificate security vs. proxy-terminated TLS. You don't flip a deployment from secure to non-secure later easily.

E

Assign ports like a troubleshooter

Keep port numbering predictable. Consistent ranges make firewall review, proxy mapping, and human troubleshooting far easier.

F

Save the response file

OGGCA supports response-file reuse. That is valuable for reproducible topology shapes, but sensitive values must be handled carefully.
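A dry-run sketch of replaying a saved response file in silent mode. Paths are placeholders; `oggca.sh -silent -responseFile` is the standard silent invocation, and the file should be scrubbed of secrets before it is stored or reused.

```shell
#!/bin/sh
# Dry-run sketch: reproducible deployment creation from a response file.
OGG_HOME="/u01/app/oracle/product/26/oggma"   # illustrative install path
RSP_FILE="/u02/ogg_rsp/core_west.rsp"         # saved, secret-scrubbed copy

oggca_silent_cmd() {
  # Composes (does not execute) the silent-mode OGGCA run.
  printf '%s/bin/oggca.sh -silent -responseFile %s\n' "$OGG_HOME" "$RSP_FILE"
}

oggca_silent_cmd
```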

CLI Host-level daemon registration step
sudo $SM_HOME/bin/registerservicemanager.sh
Why The Registration Step Matters

If you choose daemon registration and never complete the generated root script step, the environment looks half-configured. It becomes visible only after a reboot or failover event, which is the worst time to discover it.

Verify the installation and access surface like an operator, not like a screenshot reader

A Microservices deployment is not "done" because OGGCA exits cleanly. It is done when the host sees the Service Manager, the Service Manager sees the deployment, the deployment services answer on the intended surfaces, and the access model behaves exactly the way you designed it to behave.

| Verification point | How to inspect it | If it fails, suspect first |
| --- | --- | --- |
| Service Manager reachability | Open the SM web UI or connect via Admin Client. | Wrong listener address, unfinished daemon script, firewall, cert trust. |
| Deployment inventory | Check that the deployment appears in SM and its links resolve. | Wrong SM choice during OGGCA, port collision. |
| Admin Service login | Log in with the deployment admin; manage objects. | User scope confusion, relying on SM-only credentials. |
| Distribution/Receiver access | Open service pages/paths; confirm the intended network path. | Reverse proxy gaps, listener exposure mistakes. |
| Metrics visibility | Confirm the metrics surface loads after process creation. | Metrics data-store placement errors during initial design. |
Browser Check

Service Manager should show the deployment and expose clean drill-down access to local services.

CLI Check

Admin Client should connect to the intended deployment explicitly. Proves endpoint & credentials.

Network Check

Listeners you intend to expose must be reachable; those you don't must be firewalled/bound safely.

CLI Simple post-build verification bundle
cd $OGG_HOME/bin
./adminclient
CONNECT <service-manager-endpoint> DEPLOYMENT core_west AS ma_admin

From there, verify the deployment context and pivot into service-specific checks from the web interface. The important outcome is that login works only where it should, using only the correct credentials.
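The network check is worth scripting. A dry-run sketch (hosts and ports are placeholders): run the composed probes once from the admin segment, where they should succeed, and once from a segment that should have no access, where they must fail.

```shell
#!/bin/sh
# Dry-run sketch: compose reachability probes for each intended listener.
probe_cmd() {
  # $1 = label, $2 = URL; prints the curl probe instead of running it
  printf '%s: curl -sk -o /dev/null -w %%{http_code} %s\n' "$1" "$2"
}

probe_cmd service-manager "https://gg-host.example.net:9100"
probe_cmd adminsrvr       "https://gg-host.example.net:9111"
```

The point of running it from both segments is that verification covers denial as well as access: a listener answering where it should not is a finding, not a convenience.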

Diagnose structural mistakes early because they look like ordinary incidents later

Microservices incidents often start as topology-design mistakes. The visible symptom might be a failed login, a dead path, or a listener timeout, but the real defect is usually in user scope, endpoint exposure, or initiation direction.

| Symptom | Likely root cause | Operational consequence if ignored |
| --- | --- | --- |
| SM login works, but Admin Service login fails | User exists only in SM scope, not in the deployment's services. | Teams think the deployment is down when the real issue is identity scope. |
| Proxy is up, but one service is unreachable | Proxy surface incomplete, or the backend map is wrong. | Operators bypass the proxy to use direct ports, defeating the access model. |
| Path cannot reach a target behind a firewall | Source-initiated push chosen when the network requires target pull. | Teams troubleshoot certs/ports when the initiation model itself is wrong. |
| Unsecured internal deployment reachable directly | Proxy added, but origin listeners were never restricted. | Proxy gives a false sense of security while direct access remains open. |
| Metrics look empty when processes are healthy | Metrics store placement was treated as secondary. | You lose deployment-wide telemetry during an incident. |
Security Consequence

Reverse proxy does not erase origin listeners

Proxy simplifies the entry model. But if the underlying Admin, Distribution, Receiver, and SM listeners are still broadly reachable on the network, the simpler front door is mostly cosmetic.

Transport Consequence

Target-initiated paths solve one network problem

They address who opens the session across a constrained firewall. They do not remove the need for a source trail, do not make Receiver optional, and do not replace path verification.

Do Not Normalize The Hybrid Edge Case

Microservices can interoperate with classic architecture, but do not let the hybrid case become the architecture you mentally optimize around. New designs should center on Microservice boundaries first. Bring classic interoperability into the picture only where a real migration requires it.

Keep version distinctions straight, but treat the architectural core as stable

Older 12.3-era training material says Administration Server, Distribution Server, Receiver Server. Current 23ai and 26-era material increasingly says Service. The naming drift should not distract you from the fact that the architectural roles are stable.

| Area | Current practical view | What to carry forward |
| --- | --- | --- |
| Terminology | "Service" wording dominates current docs and OGGCA. | Treat "Server" and "Service" as the same roles unless release notes state otherwise. |
| Deployment planning | OGGCA surfaces metrics, StatsD, configuration-service, and TLS choices explicitly. | Plan observability and security during creation, not after adoption. |
| Security posture | Docs highlight reverse proxy, TLS protocols, and external identity integrations. | The modern question is not whether to secure it, but where to terminate and authenticate. |
| Path design | Target-initiated paths are standard for cloud-to-on-prem directionality. | Design transport initiation around network policy, not habit. |
| Reverse proxy | Actively documented; NGINX config generation via `ReverseProxySettings`. | Use it when a single-entry access surface materially improves operability. |

Oracle GoldenGate Microservices Architecture becomes straightforward once you name the boundaries correctly. Service Manager is the host watchdog. A deployment is the runtime package (services, users, ports, state). Administration Service controls replication objects. Distribution Service owns outbound routing. Receiver Service owns inbound landing. Performance Metrics Service owns telemetry.

The right access pattern follows: Use direct ports for tightly controlled admin. Use Admin Client for repeatable CLI. Use REST for automation. Use reverse proxy for a clean edge. Use target-initiated paths when the network dictates initiation. If those choices are explicit up front, the Microservices estate is easier to scale, secure, and debug.

Test your understanding


Q1 — In GoldenGate Microservices, which service is the host-level gateway that manages the inventory of deployments?

Q2 — What defines a GoldenGate deployment in Microservices Architecture?

Q3 — Which deployment-local service is responsible for managing the Extract and Replicat process lifecycle?

Q4 — When the target network prevents inbound connections from the source, which routing feature solves this securely?
