GC Optimization for CoreMedia Applications
Learn how to choose and tune the right garbage collector for CoreMedia containerized applications, including Parallel GC and G1GC.
What you will learn
- Understanding GC trade-offs and how to configure them for CoreMedia delivery components
Prerequisites
- Blueprint Workspace
- CAE knowledge
Target Audience
Default: Parallel GC
CoreMedia uses Parallel GC (-XX:+UseParallelGC) as the default garbage collector for all containerized Spring Boot applications, controlled via the JAVA_PARALLEL_GC=true environment variable evaluated by the java-application-base entrypoint.
This default exists as a deliberate workaround for a JDK bug (JDK-8192647) that causes G1GC to throw OutOfMemoryError under specific memory allocation patterns. The bug affects all JDK versions prior to JDK 25 and will not be backported.
Why delivery components are most affected: The image transformation pipeline allocates large, contiguous heap regions via JNI. While a JNI call is active, G1GC cannot release that memory region — if heap pressure is high enough, the JVM raises OutOfMemoryError. Parallel GC does not have this limitation.
Parallel GC vs. G1GC
| | Parallel GC (default) | G1GC (opt-in) |
|---|---|---|
| OOM risk | Low — JDK bug does not apply | Elevated on JDK < 25 |
| GC pause duration | Long — seconds on large heaps | Short and bounded by MaxGCPauseMillis |
| Responsiveness under load | Degraded during Full GC | Consistently good |
| Raw GC throughput | Higher | Slightly lower (concurrent GC threads) |
| Large image transformation | Safe | Risk of heap fragmentation |
| CoreMedia support | Full (default) | Customer responsibility |
Switching to G1GC
G1GC is worth considering for deployments with a well-provisioned heap and active GC monitoring. Most relevant for:
- Content Application Engine (CAE) — high-traffic live instances
- Headless Server — sustained load
Set JAVA_PARALLEL_GC=false and configure G1GC via JAVA_OPTS:

<environment>
  <JAVA_HEAP>16g</JAVA_HEAP> <!-- upper end; most deployments use 4–12 GB -->
  <JAVA_PARALLEL_GC>false</JAVA_PARALLEL_GC>
  <JAVA_OPTS>
    -XX:+UseG1GC
    -XX:MaxGCPauseMillis=200
    -XX:+ParallelRefProcEnabled
    -XX:InitiatingHeapOccupancyPercent=45
    -XX:ParallelGCThreads=2
    -XX:ConcGCThreads=1
  </JAVA_OPTS>
</environment>
Adjust ParallelGCThreads and ConcGCThreads to match available CPU cores.
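As an illustrative starting point for a 4-vCPU container (the values below are examples, not CoreMedia recommendations; HotSpot's own default derives ConcGCThreads as roughly a quarter of ParallelGCThreads):

-XX:ParallelGCThreads=4
-XX:ConcGCThreads=1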
For distroless images (which bypass the java-application-base entrypoint, so JAVA_PARALLEL_GC has no effect), set the GC flag directly in jvmFlags — for example, to keep the Parallel GC default:
<jvmFlag>-XX:+UseParallelGC</jvmFlag>
Tuning G1GC for Image-Heavy Workloads
G1GC allocates objects larger than half the region size as Humongous Objects directly in the old generation, causing fragmentation and potential OOM.
Typical allocation scale:
- A 4000×2000 px PNG decompresses to ~23 MB (RGB) or ~30 MB (RGBA).
- Peak allocation during a single image operation (source + working + output buffers): 50–90 MB.
Mitigation — increase G1 region size:
-XX:G1HeapRegionSize=32m
Valid values: 1m, 2m, 4m, 8m, 16m, 32m. The 32m maximum gives a Humongous threshold of only 16 MB — still below a single decompressed 4000×2000 RGB buffer. For deployments regularly processing high-resolution images, Humongous allocations cannot be avoided; Parallel GC remains the more appropriate choice.
For moderate resolutions (≤ ~2000×1000 px), 32m can still help. Monitor GC logs for Humongous allocation events.
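The threshold arithmetic is easy to verify. The following sketch is plain math for illustration, not a CoreMedia API (class and method names are made up):

```java
// Illustrative: G1 treats an object as Humongous when it is at least
// half the configured region size.
public class HumongousCheck {
    static boolean isHumongous(long objectBytes, long regionBytes) {
        return objectBytes >= regionBytes / 2;
    }

    public static void main(String[] args) {
        long region = 32L * 1024 * 1024;    // -XX:G1HeapRegionSize=32m (the maximum)
        long rgbBuffer = 4000L * 2000 * 3;  // 4000×2000 px, 3 bytes/pixel ≈ 23 MiB
        long moderate = 2000L * 1000 * 3;   // 2000×1000 px ≈ 6 MiB
        System.out.println(isHumongous(rgbBuffer, region)); // true: still Humongous at 32m
        System.out.println(isHumongous(moderate, region));  // false: fits a regular region
    }
}
```

This is why raising the region size helps moderate resolutions but cannot rescue high-resolution workloads.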
Mitigation — throttle concurrent image transformations (works at any resolution):
com.coremedia.transform.throttle.permits=<n>
Limits concurrent image transformations, capping peak native memory demand. Default: one quarter of configured heap size (in bytes). Reducing this lowers peak memory pressure at the cost of transformation throughput.
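A back-of-the-envelope sketch of how that default bounds concurrency, assuming each operation consumes permits at the worst-case 90 MB peak from the figures above (the accounting model here is an assumption for illustration, not CoreMedia's implementation):

```java
// Rough upper bound on simultaneous image transformations under the
// documented default permit pool (one quarter of the heap, in bytes).
public class ThrottleEstimate {
    static long maxConcurrentOps(long heapBytes, long peakPerOpBytes) {
        long permitPoolBytes = heapBytes / 4;  // documented default pool size
        return permitPoolBytes / peakPerOpBytes;
    }

    public static void main(String[] args) {
        long heap = 4L * 1024 * 1024 * 1024;   // 4 GB baseline heap from the guidelines
        long peak = 90L * 1024 * 1024;         // worst case of the 50–90 MB range
        System.out.println(maxConcurrentOps(heap, peak)); // prints 11
    }
}
```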
Heap Sizing
| Mistake | Symptom |
|---|---|
| Heap too small | Parallel GC thrashes → frequent, long Full GC pauses |
| Heap too large | Heap + off-heap exceeds container limit → OOM kill by OS/Kubernetes |
Guidelines:
- Set Java heap to 70–80 % of the container memory limit via JAVA_HEAP or -XX:MaxRAMPercentage. The remaining 20–30 % covers off-heap (native buffers, Metaspace, thread stacks).
- 4 GB is a tested baseline for a delivery CAE or Headless Server without heavy customisation. Scale up for higher traffic or more complex business logic.
For Kubernetes with a distroless image, use percentage-based sizing:
<jvmFlag>-XX:InitialRAMPercentage=45.0</jvmFlag>
<jvmFlag>-XX:MaxRAMPercentage=70.0</jvmFlag>
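To see what those percentages mean in absolute terms, a small calculation assuming a 6 GiB container memory limit (the limit is an example, not a CoreMedia default):

```java
// Illustrative arithmetic only: translate RAMPercentage flags into absolute sizes.
public class HeapSizing {
    static long heapBytes(long containerLimitBytes, double ramPercentage) {
        return (long) (containerLimitBytes * (ramPercentage / 100.0));
    }

    public static void main(String[] args) {
        long limit = 6L * 1024 * 1024 * 1024;                       // 6 GiB Kubernetes memory limit
        long initial = heapBytes(limit, 45.0);                      // -XX:InitialRAMPercentage=45.0
        long max = heapBytes(limit, 70.0);                          // -XX:MaxRAMPercentage=70.0
        System.out.println(initial / (1024 * 1024) + " MiB");       // 2764 MiB initial heap
        System.out.println(max / (1024 * 1024) + " MiB");           // 4300 MiB max heap
        System.out.println((limit - max) / (1024 * 1024) + " MiB"); // 1843 MiB left for off-heap
    }
}
```

The ~30 % left over is what absorbs the native image buffers discussed above, which is why percentage-based sizing and the 70–80 % guideline agree.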