Deployments define the runtime boundary. They are the unit that owns service URLs, runtime state, security posture, upgrade handling, and most of the operational blast radius on a host.
In Oracle GoldenGate Microservices Architecture, the software home is shared infrastructure. The deployment is where the actual runtime lives. That distinction sounds simple, but it drives nearly every practical question: where to split environments, how to assign users, what can be upgraded together, why ports and certificates matter, and what must be stopped before a patch or removal.
- Scope: How the described components fit into the GoldenGate operational model.
- Audience: Intermediate to advanced Oracle GoldenGate practitioners.
- Outcome: Confident operation and configuration of the described components.
A deployment is the runnable GoldenGate instance, not the software installation
GoldenGate Microservices separates binaries from runtime. Oracle installs software into OGG_HOME, but the deployment carries the service definitions, configuration directories, credentials, security material, ports, and process inventory that make the instance operational. That separation is the reason out-of-place upgrade works cleanly, and it is also the reason careless deployment design turns into chronic operational friction.
OGG_HOME is shared software
The install home is the binary location. Multiple deployments can point at it, and upgrades can move deployments to a newer home later.
Service Manager is host-local control
Service Manager maintains inventory and controls one or more local deployments on the same host. It is the watchdog, not the workload itself.
A deployment is an endpoint tier
Source and target deployments commonly map to different database roles, but the deployment boundary is about runtime ownership, not a magical one-to-one database law.
This model is self-managed MA
OCI GoldenGate uses a managed control plane. For Microservices on your own host, OGGCA and Service Manager remain the defining concepts.
Older materials often say Administration Server, Distribution Server, Receiver Server, and Performance Metrics Server. Current documentation more often uses Service terminology. The runtime meaning is the same: these are the microservices contained by and operated within a deployment.
Shared install layer
OGG_HOME, Admin Client binaries, patch inventory, and the executables used to create or manage deployments.
Host control layer
Service Manager inventory, host-local startup model, and the one-to-many control relationship to local deployments.
Deployment runtime layer
Ports, users, certificates, environment settings, parameter files, reports, local state, trails, and microservice restart behavior.
Database and network contracts
Credential store entries, TNS settings, path endpoints, TLS trust, and the upstream or downstream systems the deployment talks to.
Service Manager
Host-local watchdog, inventory, start/stop surface, deployment access point.
Deployment: harbor_src
- Admin, Distribution, Receiver, Metrics services
- Own ports, users, directories, TLS material
- Own Extracts, paths, and runtime state
Deployment: harbor_tgt
- Same host, different runtime boundary
- Separate stop/start and maintenance semantics
- Possible different patch timing, users, or DB contract
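The two-deployment layout above can be sketched as a filesystem plan. This is a minimal sketch with illustrative paths (a temp directory stands in for a real `/u02/ogg` tree); the point is that each deployment owns its own `etc`/`var` tree and neither sits under the shared software home.

```shell
# Hypothetical layout: one shared OGG_HOME, one Service Manager home,
# and one runtime home per deployment. All paths are illustrative.
BASE=$(mktemp -d)   # stand-in for /u02/ogg on a real host

OGG_HOME="$BASE/product/26ai/ogg"          # shared binaries
SM_HOME="$BASE/deployments/ServiceManager" # host-local controller
SRC_HOME="$BASE/deployments/harbor_src"    # source deployment runtime
TGT_HOME="$BASE/deployments/harbor_tgt"    # target deployment runtime

# Each deployment carries its own etc/conf/ssl/var tree.
for dep in "$SRC_HOME" "$TGT_HOME"; do
  mkdir -p "$dep/etc/conf" "$dep/etc/ssl" "$dep/var/log" "$dep/var/lib/data"
done
mkdir -p "$OGG_HOME" "$SM_HOME"

# The deployment home must not be nested under the software home.
case "$SRC_HOME" in
  "$OGG_HOME"/*) echo "BAD: runtime nested under OGG_HOME" ;;
  *)             echo "OK: runtime separated from binaries" ;;
esac
```

The same check applies to `harbor_tgt`; keeping both runtime trees as siblings outside `OGG_HOME` is what makes later patching and removal unambiguous.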
The useful boundary lines are storage, identity, ports, and restart semantics
The safest way to think about a deployment is to ask what becomes confusing or dangerous if two workloads share it. Oracle exposes that answer directly in OGGCA: separate homes, separate ports, separate users, separate certificate material, separate environment variables, and separate microservice restart configuration. Those are not decorative inputs. They are the deployment contract.
| Area | Scope | Why it matters operationally | What usually goes wrong |
|---|---|---|---|
| Software binaries | Shared per Oracle GoldenGate home | Lets you patch or install a new home separately from deployment state. | Teams mistake a new OGG_HOME for a complete redesign and forget that the deployment still carries the old runtime assumptions. |
| Service inventory | Service Manager on the local host | Provides the inventory and control plane for multiple deployments, and is expected to be unique per host in the normal design. | Multiple Service Managers create duplicate maintenance effort and make host-level operations harder to reason about. |
| Directory set | Per deployment | OGG_ETC_HOME, OGG_CONF_HOME, OGG_SSL_HOME, OGG_VAR_HOME, OGG_DATA_HOME, and archive or metrics locations are where the deployment actually lives. | Putting a deployment under the software home or reusing directories across deployments blurs ownership and complicates recovery. |
| Ports and URLs | Per deployment | Every Administration, Distribution, Receiver, and Performance Metrics endpoint needs a unique port, which becomes part of the deployment identity. | Port collisions or unclear naming create accidental cross-connection to the wrong deployment. |
| Users and roles | Primarily per deployment | Each deployment has its own set of users and roles. Service Manager access is not automatically the same as deployment access unless you choose that at creation time. | Operators assume a host-level user can administer every deployment identically and discover the mismatch during an incident. |
| TLS trust and certificates | Per secure deployment | Secure deployments carry server certificates and, for Distribution or Receiver interactions, client-side trust material that defines who can talk to whom. | Teams change certificate assumptions late and learn that moving from non-secure to secure is not a small toggle. |
| Restart behavior | Per service in the deployment | Restart options belong to the deployment's microservices, so failure handling is local to that runtime boundary. | One noisy workload degrades another because both were forced into the same deployment when they should have been split. |
| Patch and upgrade sequence | Host plus deployment | Service Manager must move first, then deployments can remain at the older level temporarily unless XAG changes the rule. | Teams stop at the binary install and never repoint the deployment to the new home. |
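One boundary from the table is easy to verify mechanically: a shell session's runtime environment variables should all resolve to the same deployment home, so a session can never mix two runtimes. A minimal sketch, with illustrative paths:

```shell
# Illustrative values; in a real session these come from the operator's
# environment setup for a specific deployment.
OGG_ETC_HOME=/u02/ogg/deployments/harbor_src/etc
OGG_VAR_HOME=/u02/ogg/deployments/harbor_src/var

# Strip the trailing component to recover the deployment home each
# variable implies, then compare.
etc_dep=${OGG_ETC_HOME%/etc}
var_dep=${OGG_VAR_HOME%/var}

if [ "$etc_dep" = "$var_dep" ]; then
  echo "env points at one deployment: $etc_dep"
else
  echo "MIXED runtimes: $etc_dep vs $var_dep"
fi
```

A check like this belongs in whatever wrapper sets the environment, because mixed `OGG_ETC_HOME`/`OGG_VAR_HOME` values are exactly the drift pattern described later in the troubleshooting table.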
A secure deployment is not just the same runtime with a padlock icon. The certificate chain, the server certificate, and the client certificate used for Distribution or Receiver trust become part of the deployment's identity. In production, treat that as an architectural choice made at creation time, not a cosmetic afterthought.
If you need to move from a non-secure deployment to a secure one, current upgrade guidance treats that as a new secure deployment plus manual movement of Extracts, Replicats, and path roles, not as a trivial in-place flip.
One deployment is usually reasonable when
The same operations team owns the services, the same security model applies, the same maintenance window is acceptable, and the source or target role is intentionally shared as one operational unit.
Use separate deployments when
Ownership, certificates, ports, patch cadence, environment variables, database connectivity assumptions, or stop windows diverge. Different blast radius means different deployment.
Creation choices persist for the whole life of the deployment
OGGCA is where most deployment debt is introduced. The wizard invites a fast setup, but the important fields are the ones that look boring: names, homes, ports, user model, certificates, and extra services such as Configuration Service and StatsD. Those choices later determine whether the deployment feels self-explanatory or permanently fragile.
Lay down software first
Install the Oracle GoldenGate home. Do not place deployment directories underneath it.
Create or reuse Service Manager
The first run creates Service Manager. Later runs typically attach new deployments to the existing host-level controller.
Name and place the deployment
Use a name that tells operators what the runtime owns, then keep the deployment home outside both the Service Manager home and the software home.
Assign ports deliberately
Each microservice port becomes part of the deployment's API surface and the routing vocabulary used later by operators.
Decide secure versus non-secure
Certificates, trust, and access handling should match the intended production posture from day one.
Prove the boundary exists
Log in through Service Manager and Admin Client, verify services, and confirm that directories and ports match your design.
The deployment name is not just a label in a console. It appears in connection commands, inventory, upgrade actions, and stop procedures. Pick names that expose role and scope, such as environment plus function, instead of project nicknames that mean nothing six months later.
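A naming rule is only useful if it can be enforced. The sketch below assumes one possible convention, environment plus role (for example `prod_src`); the pattern itself is an illustration, not an Oracle requirement.

```shell
# Hypothetical helper: accept only names of the form <env>_<role>,
# where env and role come from agreed vocabularies.
valid_name() {
  printf '%s\n' "$1" | grep -Eq '^(prod|stage|dev)_(src|tgt)$'
}

valid_name prod_src && echo "prod_src: ok"
valid_name harbor   || echo "harbor: rejected (no role or scope encoded)"
```

Gate new deployments through a check like this before OGGCA runs, because the name then flows into connection commands, inventory, and stop procedures for the life of the deployment.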
```
cd /u01/app/oracle/product/26ai/ogg/bin
./oggca.sh
```

Silent mode with a response file:

```
$OGG_HOME/bin/oggca.sh -silent -responseFile /u02/ogg/response/harbor_src.rsp
```
Keep OGG_HOME and the deployment home separate. The install path is for binaries; the deployment path is for runtime state. Combining them makes patching and cleanup harder than it needs to be.
Do not allocate ports one at a time from memory. Treat the group of service ports as part of the deployment design artifact.
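A port plan can be a literal file in the design artifact. The sketch below holds one such plan inline (illustrative deployment names and port numbers) and checks that no port is claimed twice on the host.

```shell
# Host-level port plan: deployment, service, port. Values are illustrative
# and would normally live in a version-controlled file.
plan='harbor_src adminsrvr 9101
harbor_src distsrvr  9102
harbor_src recvsrvr  9103
harbor_src pmsrvr    9104
harbor_tgt adminsrvr 9201
harbor_tgt distsrvr  9202
harbor_tgt recvsrvr  9203
harbor_tgt pmsrvr    9204'

# Any port listed more than once is a collision waiting to happen.
dupes=$(printf '%s\n' "$plan" | awk '{print $3}' | sort | uniq -d)
if [ -z "$dupes" ]; then
  echo "port plan clean"
else
  echo "duplicate ports: $dupes"
fi
```

Running the check before every OGGCA session turns "allocate ports from memory" into "allocate ports from the plan."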
Service Manager and deployment administrator credentials: OGGCA can align them, but that is an explicit choice. If you want tighter separation, keep them distinct.
Extra services such as Configuration Service and StatsD: recent 23ai and 26ai-era OGGCA flows expose them directly. They affect where configuration and metrics live, so decide them intentionally.
| OGGCA choice | What it defines | Why the choice sticks | Operational reading |
|---|---|---|---|
| Service Manager mode | Manual, system service or daemon, or XAG-integrated control model | Changes how start and stop work and who owns bootstrap after reboot | Know whether the OS, the scripts, or CRS owns restart semantics before you schedule maintenance |
| Deployment home customization | Where etc, conf, ssl, var, data, archive, and metrics state live | Those directories become the evidence trail for audits, rollback, and troubleshooting | If the paths are unclear, the deployment is unclear |
| User deployment ports | Administration, Distribution, Receiver, and Performance Metrics endpoints | Every tool, reverse proxy, and operator note will rely on them later | Ports are part of the identity of the deployment, not a temporary bootstrap detail |
| Secure deployment inputs | Server certificate, private key, CA certificate, optional client certificate material | These values define how other deployments and operators trust the runtime | Certificate management belongs in the deployment design, not only in the security team's post-build checklist |
| TNS_ADMIN and deployment environment variables | Database connectivity context and any required external library paths | Wrong values can leave the deployment healthy at the service level but useless at the data plane level | Environment drift is a deployment problem, not just a database problem |
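The TNS_ADMIN row is the easiest one to verify at build time: the directory must exist and actually contain `tnsnames.ora`. A minimal sketch, using a temporary directory as a stand-in for the real network configuration path:

```shell
# Stand-in for the deployment's intended TNS_ADMIN directory.
TNS_ADMIN=$(mktemp -d)
touch "$TNS_ADMIN/tnsnames.ora"   # simulate a populated network config

# The service layer will not catch this; check it at the shell level.
if [ -r "$TNS_ADMIN/tnsnames.ora" ]; then
  echo "TNS_ADMIN usable: $TNS_ADMIN"
else
  echo "TNS_ADMIN set but tnsnames.ora missing or unreadable"
fi
```

A deployment can report every microservice as healthy while this check fails, which is exactly the "healthy at the service level, useless at the data plane level" failure the table describes.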
First boot should prove both control access and runtime separation
The local training materials used browser access and Admin Client as the first proof that a deployment had been created correctly. That is still the right instinct. On first boot, do not stop at "the page opens." Prove that Service Manager sees the deployment, that the deployment can be selected explicitly, that the microservices are the ones you intended, and that your environment variables point to the correct runtime homes.
Service Manager overview
- Confirms inventory and service visibility on the host.
- Shows whether the deployment is present and whether service links resolve.
- Is the fastest visual check after OGGCA.
Admin Client
- Confirms login path and explicit deployment selection.
- Lets you prove service state without relying on the web UI.
- Matters because most maintenance procedures later rely on it.
Deployment home review
- Confirms `etc`, `var`, `conf`, and `ssl` land where expected.
- Prevents accidental reuse of an old runtime tree.
- Becomes essential when a host carries multiple deployments.
```
export OGG_HOME=/u01/app/oracle/product/26ai/ogg
export OGG_ETC_HOME=/u02/ogg/deployments/harbor_src/etc
export OGG_VAR_HOME=/u02/ogg/deployments/harbor_src/var
export OGG_CLIENT_TLS_CAPATH=/u02/ogg/certs/rootca.pem
$OGG_HOME/bin/adminclient
```
```
CONNECT <deployment-url> DEPLOYMENT harbor_src AS oggadmin PASSWORD "Admin#7421"
INFO ALL
```
Admin Client can connect without the deployment name when only one non-ServiceManager deployment exists. That convenience becomes a liability the moment a second deployment appears. During real operations, use the DEPLOYMENT clause when the host is multi-deployment, so the command transcript itself proves the intended target.
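One way to make the explicit DEPLOYMENT clause the default habit is to generate Admin Client command files from a helper rather than typing connections by hand. This is a sketch; `make_connect`, the URL, and the user name are illustrative placeholders.

```shell
# Hypothetical helper: emit an Admin Client command sequence that always
# names its target deployment explicitly.
make_connect() {  # usage: make_connect <url> <deployment> <user>
  printf 'CONNECT %s DEPLOYMENT %s AS %s\n' "$1" "$2" "$3"
  printf 'INFO ALL\n'
}

oby=$(mktemp)
make_connect "https://ggsrv:9100" harbor_src oggadmin > "$oby"
grep -q 'DEPLOYMENT harbor_src' "$oby" && echo "explicit target recorded"
```

The generated file doubles as the command transcript, so the evidence that the right deployment was targeted survives the session.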
| Verification point | Command or location | What to see | Interpretation |
|---|---|---|---|
| Service Manager inventory | Service Manager overview page | The deployment is listed and its services are visible. | The host-level controller knows the deployment exists. |
| CLI connection path | `CONNECT ... DEPLOYMENT harbor_src ...` | Successful login to the intended deployment without ambiguity. | User mapping and port targeting are correct enough to administer the runtime. |
| Service state | Service Manager overview page or deployment overview page | The expected microservices are present and reachable. | The deployment is alive as a service boundary, not just as a directory tree. |
| Deployment homes | Filesystem inspection | Separate runtime directories under the chosen deployment home. | The binary layer and runtime layer are not accidentally merged. |
| Security trust | Admin Client login to HTTPS deployment | Expected certificate behavior for your environment. | If secure access only works with ad hoc workarounds, fix trust now instead of teaching operators risky habits. |
Daily operations should treat the deployment as a blast-radius container
Most production mistakes happen when teams operate only at the process level. Extracts and Replicats matter, but the deployment is the boundary that groups their services, users, paths, local state, and restart rules. If that boundary is not part of the operational vocabulary, incident response becomes slower and maintenance windows become noisier than they need to be.
Users and roles are local
Each deployment has its own user set and role assignments. Additional users created later do not automatically span other deployments on the same host.
Distribution paths are endpoint contracts
Source and target deployments act as replication endpoints. Distribution and Receiver relationships inherit the deployment's TLS, user, and URL design.
Restart behavior is service-scoped
Microservice restart options live inside the deployment. That is one more reason to isolate workloads that should fail and recover independently.
If an operator cannot answer "which deployment owns this path, this certificate, this service port, and this stop window?" in one sentence, the environment is already harder to run than it should be.
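That one-sentence answer is easiest when the host inventory is a lookup table rather than tribal knowledge. A minimal sketch, with an illustrative inline inventory:

```shell
# Tiny host inventory: port, owning deployment, service. Illustrative
# contents; a real version would be generated from Service Manager.
inventory='9101 harbor_src adminsrvr
9102 harbor_src distsrvr
9201 harbor_tgt adminsrvr'

# Answer "which deployment owns this port?" in one line.
owner_of_port() {
  printf '%s\n' "$inventory" | awk -v p="$1" '$1 == p {print $2 " (" $3 ")"}'
}

owner_of_port 9201
```

The same table shape extends naturally to certificates, directories, and stop windows, one column per question an operator should be able to answer instantly.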
A single Service Manager can still be the right host-level design even when the host carries deployments for different database-specific GoldenGate builds. In that case, add the deployment by running oggca.sh from the build that matches the deployment you are creating.
| Operational question | Ask it at which level? | Why this level is correct | Typical evidence |
|---|---|---|---|
| Who is allowed to administer this runtime? | Deployment | User creation and role scope are deployment-centric after the initial Service Manager bootstrap. | User administration pages, deployment login tests, role assignments. |
| Which certificates and trust chain are in effect? | Deployment | Secure communication between services and between deployments is defined there. | Deployment certificate management and successful secure connections. |
| What must stop for patching? | Deployment first, then Service Manager | Processes, paths, and services belong to the deployment; Service Manager is the final host-level controller to stop. | Admin Client stop sequence and service-state checks. |
| Can two workloads share the same change window? | Deployment design decision | If they cannot, they should not be in one deployment. | Maintenance calendar, patch ownership, security separation requirements. |
| Where is the runtime evidence stored? | Deployment directories | Reports, configuration, TLS assets, and operational files live in the deployment homes, not the abstract concept of the host. | etc, conf, var, data, archive, and metrics homes. |
Change, patch, and upgrade all make sense only if you respect the deployment order
Microservices upgrades are clean because binaries and deployments are strongly separated, but the sequence still matters. Install the new software independently, then move control and runtime to it in the right order. For maintenance that touches the installed software, the deployment is what you drain first. For version movement, Service Manager leads and deployments follow.
For planned change, stop workload before infrastructure: Extracts and Replicats first, distribution paths next, deployment services after that, and Service Manager last. This makes the stop reason obvious and preserves clean operational evidence.
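That stop order is worth capturing as a reviewable checklist rather than relying on memory during a window. The sketch below only emits the sequence; it deliberately runs no GoldenGate commands, and the step wording is illustrative.

```shell
# Planned-change stop order, workload before infrastructure.
stop_order() {
  printf '%s\n' \
    '1 stop Extracts and Replicats' \
    '2 stop distribution paths' \
    '3 stop deployment services' \
    '4 stop Service Manager'
}

stop_order
```

Printing the order at the start of every maintenance transcript makes the stop reason obvious later, which is the "clean operational evidence" the text asks for.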
```
export OGG_HOME=/u01/app/oracle/product/26ai/ogg
export OGG_ETC_HOME=/u02/ogg/ServiceManager/etc
export OGG_VAR_HOME=/u02/ogg/ServiceManager/var
/u02/ogg/ServiceManager/bin/stopSM.sh
/u02/ogg/ServiceManager/bin/startSM.sh
```
If Service Manager was registered as a system service or daemon, the operating system owns start and stop. If it is XAG-managed, the CRS stack owns that lifecycle. Do not expect manual scripts to exist in those modes.
```
systemctl status OracleGoldenGate
systemctl stop OracleGoldenGate
systemctl start OracleGoldenGate
```
```
curl -u smadmin:smsecret -X PATCH \
  <service-manager-url>/services/v2/deployments/harbor_src \
  -H 'cache-control: no-cache' \
  -d '{"oggHome":"/u01/app/oracle/product/26ai/ogg_patch01","status":"restart"}'
```
| Lifecycle event | Required order | Why the order is correct | Version-aware note |
|---|---|---|---|
| Binary patching | Drain deployment workload, stop services, stop Service Manager, patch binaries, start back up | The binaries are shared, but the processes holding them open live inside deployments. | For Microservices, Oracle's patching flow documents the deployment stop sequence explicitly. |
| Out-of-place upgrade | Install new home, update Service Manager first, then update deployments to the new home | Service Manager must run at a version greater than or equal to the deployments it manages. | That greater-than-or-equal rule changes under XAG, where mixed releases under the upgraded Service Manager are not supported. |
| Enable secure posture later | Create a new secure deployment, then move workloads | Certificates and trust are deployment-defining choices, not a tiny patch to a running runtime. | This is one of the most important lifecycle distinctions to surface early in a design review. |
| Heartbeat table change after upgrade | Upgrade software and deployments, then upgrade heartbeat objects if they are in use | The runtime and the metadata model must stay aligned. | Modern upgrade guidance explicitly calls out UPGRADE HEARTBEATTABLE after completion. |
The clean split between binaries and deployment state is the real reason Microservices feels safer to upgrade than older layouts. The new software home can be installed without touching the deployment's runtime files. But that safety only materializes if your deployment homes were designed clearly in the first place and if the runtime is repointed through a deliberate upgrade step, not by wishful thinking.
Removal is a lifecycle event, not a garbage-collection shortcut
Oracle is explicit on two points that teams often miss. First, removing a deployment is not the same as removing Service Manager. Second, removing a deployment does not automatically stop everything for you. A clean retirement means proving the workload is drained, stopping processes and services first, then removing the deployment through OGGCA, and finally handling any leftover operating-system registration tasks when Service Manager was installed as a service.
Stop the deployment, its microservices, and the Extract or Replicat processes that depend on it. Removal is an inventory and file operation. It is not a substitute for runtime shutdown discipline.
A host can carry multiple deployments, so removing one deployment does not imply removing Service Manager. Service Manager removal becomes available only after other deployments are gone.
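A removal runbook can enforce the "drain first" rule with a guard that refuses to proceed while anything still reports RUNNING. The status text below is simulated; in a real run it would come from Admin Client `INFO ALL` or the REST API.

```shell
# Simulated process status for the deployment being retired.
status='EXTRACT  EHB01  STOPPED
REPLICAT RHB01  RUNNING'

# Refuse removal while any process is still running.
if printf '%s\n' "$status" | grep -q 'RUNNING'; then
  echo "refuse removal: workload still running"
else
  echo "safe to run OGGCA removal"
fi
```

Putting the guard ahead of the OGGCA step means "the environment looks quiet in the browser" is never the evidence that removal is safe.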
Response file excerpt:

```
CONFIGURATION_OPTION=REMOVE
CREATE_NEW_SERVICEMANAGER=false
ADMINISTRATOR_PASSWORD=********
```

Removal command:

```
$OGG_HOME/bin/oggca.sh -silent -responseFile /u02/ogg/response/harbor_src_remove.rsp
```
| Retirement step | Why it exists | What to verify | What people skip |
|---|---|---|---|
| Stop ER and path workload | A removed deployment should not still be carrying live process state. | No active workload remains tied to the deployment. | Teams jump directly into OGGCA because the environment looks quiet in the browser. |
| Stop services | Ensures the deployment is not still serving API or routing requests. | The deployment overview shows the intended stopped state. | Assuming deletion will stop services implicitly. |
| Run OGGCA removal | Removes the deployment from Service Manager inventory and optionally deletes deployment files from disk. | The deployment no longer appears in inventory. | Leaving the inventory clean but forgetting to remove runtime files when that was the intended outcome. |
| Handle service registration residue | When Service Manager was registered as a daemon or service, unregister and file cleanup can still be required. | OS service state and registration files match the new desired state. | Removing a deployment and assuming host-level service registration is automatically rewritten. |
For significant posture changes, especially security model changes, a controlled rebuild is often cleaner than repeated in-place mutation. Create the new deployment with the correct homes, ports, users, and certificates, validate it fully, migrate workload, and only then retire the old one. That pattern aligns with how Oracle separates deployments from software homes.
The common failure patterns are boundary mistakes disguised as process problems
When deployment design is weak, incidents often present as random process issues. The real causes are usually blurred boundaries: ambiguous ports, shared users that should not be shared, deployment homes under the wrong parent, or maintenance assumptions that ignored the Service Manager versus deployment order. Diagnose at the deployment level first.
Admin Client reaches the wrong runtime
Usually caused by relying on default deployment selection on a multi-deployment host, or by using memorized ports instead of documented ones.
Patching window keeps expanding
Often means deployments were grouped together even though they had different stop windows or operational owners.
Security hardening becomes a migration project
That usually indicates the original deployment was created non-secure and the environment now needs a secure replacement deployment rather than a small edit.
Upgrade to a new home seems done, but behavior is still old
The new binaries may be installed, yet Service Manager or the deployment may still point at the previous OGG_HOME.
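That symptom is checkable: compare the home the deployment record reports against the home you intended to move to. The JSON below simulates a Service Manager REST response (illustrative paths); in practice it would come from a curl GET against the deployments endpoint.

```shell
# The home the upgrade was supposed to land on.
expected=/u01/app/oracle/product/26ai/ogg_patch01

# Simulated deployment record; note it still shows the old home.
response='{"oggHome":"/u01/app/oracle/product/26ai/ogg","status":"running"}'

# Extract the oggHome value without assuming jq is installed.
actual=$(printf '%s' "$response" | sed -n 's/.*"oggHome":"\([^"]*\)".*/\1/p')

if [ "$actual" = "$expected" ]; then
  echo "deployment repointed to new home"
else
  echo "still on old home: $actual"
fi
```

Running this after every out-of-place upgrade turns "behavior is still old" from a mystery into a one-line diagnosis.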
| Symptom | Likely boundary issue | Inspect next | Next action |
|---|---|---|---|
| Operators can log into Service Manager but not the deployment | User model separation was misunderstood | Whether the deployment admin was configured as the same credentials as Service Manager or separately | Fix deployment user assignments instead of treating it as a random authentication bug |
| Secure distribution between deployments fails after an otherwise normal build | Certificate or trust boundary mismatch | Server certificate, client certificate, and trusted root chain on both deployment endpoints | Correct trust design before adding more replication paths |
| Cleanup after removal feels incomplete | Service registration was treated as if it were deployment-local | OS service registration files and any generated unregister scripts | Complete the host-level cleanup rather than rerunning deployment deletion blindly |
| Commands keep using the wrong deployment homes | `OGG_ETC_HOME` or `OGG_VAR_HOME` still points to an old runtime | Shell environment and command wrappers used by operators and automation | Repair the environment contract before continuing deeper troubleshooting |
| A host with multiple deployments is hard to explain | The design has no clean reason for the splits that exist | Deployment names, owners, ports, certificates, and patch groups | Refactor the documentation first; if the logic still fails, refactor the deployment layout next |
In GoldenGate Microservices, the deployment is the operational unit that matters most after installation. It owns the service surface, the runtime filesystem, the user boundary, the certificate posture, and the change window. Service Manager is the host-local controller that sees and manages those deployments, but it is not a substitute for thinking clearly about deployment design.
If you keep one idea from this topic, make it this: split deployments when blast radius, security posture, or maintenance semantics differ, and keep them together only when those concerns are intentionally shared. Once that rule is applied consistently, lifecycle actions such as validation, patching, upgrading, and retirement become much easier to execute without surprises.
Test your understanding
Q1 — A GoldenGate deployment shares which resource with other deployments on the same host?
Q2 — What is the first step when creating a new deployment with OGGCA?
Q3 — Which directory stores the deployment-specific runtime state, trails, and parameter files?
Q4 — During an upgrade, the GoldenGate software home is updated while deployments: