
(Possibly) Incorrect initial max memory size when building a direct memory buffer pool #2201

Open
kofemann opened this issue Dec 15, 2023 · 0 comments


kofemann commented Dec 15, 2023

When creating a buffer pool with PooledMemoryManager, DEFAULT_HEAP_USAGE_PERCENTAGE (3%) of the heap is used to calculate the maximum pool size per slice:

        final long heapSize = Runtime.getRuntime().maxMemory();
        final long memoryPerSubPool = (long) (heapSize * percentOfHeap / numberOfPools);

https://github.com/eclipse-ee4j/grizzly/blob/master/modules/grizzly/src/main/java/org/glassfish/grizzly/memory/PooledMemoryManager.java#L170C1-L171C89

However, this calculation is incorrect when direct buffers are requested, as the direct-memory limit is not reported by Runtime.getRuntime().maxMemory(). Thus, on machines with a large heap but a small direct-memory limit, initialization fails with an OutOfMemoryError. This simple code:

    // run with -XX:MaxDirectMemorySize=128m
    public static void main(String[] args) {

        MemoryManager<? extends Buffer> POOLED_BUFFER_ALLOCATOR =
                new PooledMemoryManager(
                        512 * 1024, // base chunk size
                        1,          // number of pools
                        2,          // grow factor per pool; ignored with a single pool
                        4,          // expected concurrency
                        PooledMemoryManager.DEFAULT_HEAP_USAGE_PERCENTAGE,
                        PooledMemoryManager.DEFAULT_PREALLOCATED_BUFFERS_PERCENTAGE,
                        true        // direct buffers
                );
    }

fails with:

Exception in thread "main" java.lang.OutOfMemoryError: Cannot reserve 524288 bytes of direct buffer memory (allocated: 133701632, limit: 134217728)
	at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
	at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:127)
	at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:360)
	at org.glassfish.grizzly.memory.PooledMemoryManager$PoolSlice.allocate(PooledMemoryManager.java:685)
	at org.glassfish.grizzly.memory.PooledMemoryManager$PoolSlice.<init>(PooledMemoryManager.java:569)
	at org.glassfish.grizzly.memory.PooledMemoryManager$Pool.<init>(PooledMemoryManager.java:432)
	at org.glassfish.grizzly.memory.PooledMemoryManager.<init>(PooledMemoryManager.java:175)
	at dev.kofemann.playground.GerizzlyAllocateorOOM.main(GerizzlyAllocateorOOM.java:65)

I assume that with a 128MB direct-memory limit, creating a single slice with 4x 512KB chunks should be possible.
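The numbers bear this out. A standalone sketch (not Grizzly code; the 8 GB heap is a hypothetical value) shows how a 3% heap fraction overshoots a small direct-memory limit:

```java
// Standalone illustration (not Grizzly code): the pool is sized from the
// heap, so 3% of a large heap easily exceeds a small
// -XX:MaxDirectMemorySize limit. The 8 GB heap below is hypothetical.
public class HeapFractionDemo {

    // mirrors the per-pool calculation quoted above
    static long memoryPerSubPool(long heapSize, double percentOfHeap, int numberOfPools) {
        return (long) (heapSize * percentOfHeap / numberOfPools);
    }

    public static void main(String[] args) {
        long heapSize = 8L * 1024 * 1024 * 1024;   // e.g. -Xmx8g
        long directLimit = 128L * 1024 * 1024;     // -XX:MaxDirectMemorySize=128m

        long perPool = memoryPerSubPool(heapSize, 0.03, 1);
        System.out.println("per-pool budget:      " + perPool + " bytes"); // ~245 MiB
        System.out.println("direct-memory limit:  " + directLimit + " bytes");
        System.out.println("budget exceeds limit: " + (perPool > directLimit)); // true
    }
}
```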

I think that for direct buffers the maximum direct memory size should be used instead of heapSize, something like:

        final long maxMem = isDirect ? jdk.internal.misc.VM.maxDirectMemory()
                                     : Runtime.getRuntime().maxMemory();
        final long memoryPerSubPool = (long) (maxMem * percentOfHeap / numberOfPools);
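Note that jdk.internal.misc.VM is an internal API and is not accessible to application code without `--add-exports`. As a sketch of a public-API alternative (my assumption, not part of Grizzly; HotSpot-specific), the configured limit can be read through the diagnostic MXBean, where a reported value of 0 means the flag is unset and the JVM defaults the direct limit to the max heap size:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class DirectMemoryLimit {

    // Hypothetical helper: reads -XX:MaxDirectMemorySize via the HotSpot
    // diagnostic MXBean. A reported value of 0 means the flag was not set,
    // in which case the JVM defaults the direct limit to the max heap size.
    static long maxDirectMemory() {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        long configured = Long.parseLong(
                hotspot.getVMOption("MaxDirectMemorySize").getValue());
        return configured > 0 ? configured : Runtime.getRuntime().maxMemory();
    }

    public static void main(String[] args) {
        boolean isDirect = true; // the flag PooledMemoryManager already receives
        long maxMem = isDirect ? maxDirectMemory()
                               : Runtime.getRuntime().maxMemory();
        System.out.println("sizing base: " + maxMem + " bytes");
    }
}
```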
kofemann added a commit to dCache/dcache that referenced this issue Mar 7, 2024
Motivation:
Grizzly memory management uses heap memory size fraction even if direct
memory is used.

eclipse-ee4j/grizzly#2201

Moreover, each memory slice (a memory segment per thread) allocates
at least 16 chunks, which ends up as 16MB per slice (with a 1MB chunk).

Modification:
As dCache needs `2 * #Cores * maxIObuf` memory, pre-calculate the
required amount of direct memory and the corresponding fraction
relative to the heap. Initialise the memory pool with 1/16 of the
desired slice size to compensate for the memory allocator's internal
x16 increase.

Result:
dCache starts with 256m of direct memory (we still have xroot mover)

Fixes: #7522
Acked-by: Svenja Meyer
Acked-by: Lea Morschel
Target: master, 9.2
Require-book: no
Require-notes: yes
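The sizing arithmetic from the commit message above can be sketched as follows (class and method names are illustrative, not the actual dCache code): compute the needed direct memory as 2 * cores * maxIObuf, express it as a fraction of the heap, and divide by 16 to offset the allocator's internal x16 increase.

```java
// Sketch of the workaround described in the commit message above.
// Names are illustrative, not the actual dCache code.
public class PoolSizing {

    // Fraction of the heap to pass to PooledMemoryManager so that the pool
    // ends up holding roughly 2 * cores * maxIoBuf bytes of direct memory,
    // pre-divided by 16 to offset the allocator's internal x16 increase.
    static float heapFraction(long maxIoBuf, int cores, long heapSize) {
        long needed = 2L * cores * maxIoBuf;
        return (float) needed / heapSize / 16;
    }

    public static void main(String[] args) {
        long heap = 4L * 1024 * 1024 * 1024;  // hypothetical -Xmx4g
        long maxIoBuf = 1024 * 1024;          // hypothetical 1MB I/O buffer
        System.out.println(heapFraction(maxIoBuf, 8, heap));
    }
}
```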
kofemann added a commit to kofemann/dcache that referenced this issue Mar 7, 2024
mksahakyan pushed a commit to dCache/dcache that referenced this issue Mar 11, 2024
kofemann added a commit to kofemann/dcache that referenced this issue Apr 4, 2024