JD Edwards on AWS: How recreating an Oracle instance degraded performance (and how OP&S detected it)
- Jean-Michel Alluguette – OP&S

- Dec 17, 2025
- 5 min read
Migrating, rebuilding, or “recreating” an Oracle instance on AWS for a JD Edwards environment is a fairly common operation: a need for space, a cleanup, a change of instance type, a storage redesign, standardization…
The problem is that behind a term that sounds simple (“we recreate the instance”), there is often an underestimated risk: some Oracle parameters and architectural choices revert to their default state.
And these “small differences” can be enough to transform a stable environment into one that struggles — without anyone immediately understanding why.
In this experience report, I walk through a real case: a rebuild on AWS that led to a performance degradation on JD Edwards, and how OP&S made it possible to pinpoint the origin of the problem very quickly.
The context: an AWS rebuild to reclaim space
The client operates a JD Edwards ERP on an Oracle database hosted on AWS.
Due to a need for more space and a cleanup, the team decided to recreate the Oracle instance (new instance, data reload, technical upgrade).
On paper, the operation is under control: same Oracle version, same data, same application flows.
However, in practice, a rebuild is rarely perfectly “equivalent”:
- storage can change (EBS volume type, IOPS, throughput),
- the sizes and configuration of some objects may vary,
- Oracle settings revert to default values,
- and elements invisible at first glance (redo logs, TEMP/UNDO, statistics, etc.) can drift.
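One quick way to catch this last kind of drift by hand is to list the instance parameters that are explicitly set and diff the output between the old and new environments. A minimal sketch, assuming you have SELECT access to v$parameter (plain SQL*Plus, nothing tool-specific):

```sql
-- List parameters that are not at their Oracle default value.
-- Running this on both the old and the new instance and diffing
-- the two outputs highlights settings that silently reverted
-- during the rebuild.
SELECT name, value
FROM   v$parameter
WHERE  isdefault = 'FALSE'
ORDER  BY name;
```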
Symptoms from a business perspective: “JD Edwards is slow” (but without proof)
In the following days:
- slowdowns experienced by users,
- batch jobs taking longer,
- and an overall perception of degradation.
On the infrastructure side, nothing seems to be on fire: no major incident, no outage.
This is exactly the kind of situation where you waste a lot of time, because you keep switching between hypotheses:
- “AWS has changed something”,
- “It’s JDE”,
- “It’s the database”,
- “It’s the storage”,
- “It’s the volumetrics.”
In this case, the objective was to stop guessing and start again from a factual basis.
What OP&S shows: before/after, in black and white
Thanks to OP&S, the before/after comparison is immediate.
Volumetrics
- The DATA volume is correct,
- but the TEMP/UNDO and redo log spaces appear to be incorrectly sized.
Database load
- New Oracle wait events of the “Configuration” class are emerging,
- which points to unsuitable parameters.
Redo logs
- The indicators show an increase in “log file switch (checkpoint incomplete)” events:
- redo logs too small → up to 45,000 seconds of wait time per day.
Other signals
- “Buffer busy wait” events and other symptoms of poorly adapted storage or unoptimized settings.
Step 1 — Compare before / after (no debate)
The first thing to do in this type of incident is to compare the environment before and after.
OP&S allows you to compare:
- volumetrics (DATA / TEMP / UNDO),
- database load,
- wait events and their distribution,
- write indicators (redo logs),
- contention and I/O signals.
The crucial point: without history or a basis for comparison, you waste a lot of time.
With a “before/after” view, you can immediately see whether the environment behaves differently.
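If no monitoring tool is in place, you can improvise a rough baseline by hand. A minimal sketch, assuming SELECT access to v$system_wait_class and the right to create a small work table (the name wait_class_snap is illustrative); run the same two-snapshot procedure on the old and the new environment over a comparable window:

```sql
-- Snapshot 1: record the cumulative wait time per class
-- (time_waited is in centiseconds since instance startup).
CREATE TABLE wait_class_snap AS
SELECT SYSDATE AS snap_time, wait_class, time_waited
FROM   v$system_wait_class;

-- ... let a representative window run (e.g. one hour of batch load) ...

-- Snapshot 2:
INSERT INTO wait_class_snap
SELECT SYSDATE, wait_class, time_waited
FROM   v$system_wait_class;

-- Delta per wait class over the window, in seconds:
SELECT wait_class,
       ROUND((MAX(time_waited) - MIN(time_waited)) / 100) AS delta_sec
FROM   wait_class_snap
GROUP  BY wait_class
ORDER  BY delta_sec DESC;
```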
Step 2 — Check the volume and sizing (DATA, TEMP, UNDO)
In this specific case, the volumetric analysis already reveals a signal:
- the data volume is consistent,
- but TEMP/UNDO are sized differently from the previous environment,
- and some sizing choices no longer match the actual load profile.

On JDE, however (batches, heavy processing, large queries), improperly sized TEMP and UNDO can cause:
- additional I/O costs,
- slowdowns during peak periods,
- and saturation phenomena that are hard to explain from the ERP side.
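To check this point by hand, the sizing can be read straight from the dictionary and compared with the previous environment. A minimal sketch, assuming default-style tablespace names (adjust the UNDO pattern to your naming) and that v$undostat is available (it keeps roughly the last four days of undo activity):

```sql
-- Current TEMP sizing.
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024, 1) AS size_gb
FROM   dba_temp_files
GROUP  BY tablespace_name;

-- Current UNDO sizing (the LIKE pattern is an assumption).
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024, 1) AS size_gb
FROM   dba_data_files
WHERE  tablespace_name LIKE 'UNDO%'
GROUP  BY tablespace_name;

-- Peak undo consumption and longest query over the retained history.
SELECT MAX(undoblks)    AS max_undo_blocks_per_10min,
       MAX(maxquerylen) AS longest_query_sec
FROM   v$undostat;
```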
Step 3 — Analyze wait events: when “Configuration” appears
In parallel, the analysis of wait events highlights the appearance (or increase) of events that Oracle classifies in the “Configuration” wait class.
This is an important signal: it is not an isolated query or a “normal” application load.
It is the infrastructure/database configuration that no longer matches the actual need.
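This classification is visible even without any paid Oracle option: the free v$ views already tag each event with its wait class. A minimal sketch of the kind of raw query behind this signal:

```sql
-- Cumulative time per Configuration-class event since instance startup.
-- A sudden prominence of "log file switch" events here is the red flag.
SELECT event,
       total_waits,
       ROUND(time_waited / 100) AS seconds_waited
FROM   v$system_event
WHERE  wait_class = 'Configuration'
ORDER  BY time_waited DESC;
```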
Step 4 — The key point of this report: redo logs too small, and an explosion of wait events
In this case, the diagnosis was accelerated thanks to a very clear observation: the indicators linked to the redo logs were going in the wrong direction.
When the redo logs are too small, the database switches too often and can end up generating waits like:
- log file switch (checkpoint incomplete),
- or waits related to checkpoints and writes.
And this has a direct consequence:
- The database spends its time managing writes instead of serving queries,
- and JDE batch jobs slow down, sometimes intermittently (and are therefore hard to prove without measurements).
In our case, a very telling order of magnitude: up to ~45,000 seconds of wait time per day on these phenomena.
That's huge. And that alone is enough to explain a large part of the perceived decline.

Step 5 — Check other signals (buffer busy wait, storage, etc.)
A rebuild on AWS often changes several parameters at once.
In the same diagnostic view, OP&S also allows you to monitor:
- buffer busy waits (contention on hot blocks),
- I/O indicators (latency, saturation),
- CPU usage,
- and memory behavior.
The benefit: you avoid fixing one issue (redo logs) while overlooking a second factor (storage or contention) that would keep degrading the ERP.
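Here too, a quick manual cross-check is possible with the free views. A minimal sketch: v$waitstat breaks buffer busy waits down by block class, which helps tell data-block contention from undo or segment-header contention:

```sql
-- Buffer busy waits by block class (time is in centiseconds).
SELECT ws.class, ws.count, ws.time
FROM   v$waitstat ws
WHERE  ws.count > 0
ORDER  BY ws.time DESC;
```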
Step 6 — Correction and validation: we measure the return to normal
Once the redo logs have been resized (and the associated parameters reviewed), the validation must be factual:
- reduction in waits related to log switches / checkpoints,
- improvement of the load curves,
- reduction in total wait time,
- and a visible improvement in processing times.
That's exactly the advantage of OP&S in this type of context: you don't "think" it's better, you see it in the metrics.
Checklist: What to check after an Oracle recreation (specifically JDE on AWS)
Oracle Settings & Files
- Size and number of redo logs (and switch frequency)
- TEMP and UNDO sizing
- Core memory parameters (SGA/PGA) and behavior under peak load
- Statistics: collection method and associated jobs
- Checkpoint and write settings
AWS storage (often underestimated)
- Volume type (EBS), IOPS, and throughput consistent with the load
- Actual I/O latency during batch processing
- File distribution (DATA, TEMP, UNDO, REDO) and potential contention
Operations / Runbook
- Before/after comparison over a representative period
- Monitoring of waits by class (I/O, Concurrency, Configuration)
- Validation on critical JDE processes (nightly batches, R42800, etc.)
What this feedback shows (and what many teams underestimate)
- Recreating an Oracle instance is not neutral: even if the data and the Oracle version are identical, the “technical” environment may change.
- Redo logs are a classic trap: they look like a minor item on the checklist, but the impact on production can be huge.
- Without historical data, you waste a lot of time: a good tool should allow you to compare and isolate changes objectively.
Conclusion
This real-life case illustrates a simple reality: in an AWS project, performance is not solely determined by the power of the instance.
It plays out in the details: redo logs, TEMP/UNDO, storage, Oracle settings.
With OP&S you can:
- objectify the before/after,
- quickly detect abnormal waits,
- correct and validate the gains,
- all without relying on expensive Oracle options.
Have you migrated JDE to AWS (or are you considering it)?
A 1-month OP&S PoC allows you to:
- capture the current state of your Oracle JDE databases,
- identify configuration or performance deviations,
- and calmly prepare for your future changes.

About the author
Jean-Michel Alluguette is the founder of OP&S, a software solution dedicated to Oracle and ERP (JD Edwards / E-Business Suite) environments, covering performance, cost management (FinOps), and security/licensing.
Do you want a factual analysis of your Oracle databases? → Request a free 1-month PoC.



