Couchbase Java SDK Out of memory error on initialization

I am getting an OutOfMemoryError when initializing the Couchbase client while running test cases.

Environment:
JDK 16
OS: macOS Monterey
Gradle version: 7.2
JVM flags:
-Xmx8000m
-Xms2000m
-XX:MaxDirectMemorySize=1024m
-XX:+UseG1GC
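
For reference, this is roughly how such flags would be wired into the test JVM in build.gradle (Gradle forks a separate JVM for the test task). This is an illustrative sketch only, not the exact build script:

// build.gradle – illustrative sketch only; the real build script may differ.
// The test task forks its own JVM, so heap and direct-memory flags have to be
// declared here to reach the JVM that actually runs the test cases.
test {
    minHeapSize = '2000m'
    maxHeapSize = '8000m'
    jvmArgs '-XX:MaxDirectMemorySize=1024m', '-XX:+UseG1GC'
}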

Please find the stack trace below:

Caused by: java.lang.OutOfMemoryError: Cannot reserve 131072 bytes of direct buffer memory (allocated: 536760640, limit: 536870912)
    at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
    at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121)
    at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:330)
    at com.couchbase.client.core.deps.io.netty.channel.unix.Buffer.allocateDirectWithNativeOrder(Buffer.java:40)
    at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventArray.<init>(KQueueEventArray.java:52)
    at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoop.<init>(KQueueEventLoop.java:88)
    at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoopGroup.newChild(KQueueEventLoopGroup.java:185)
    at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoopGroup.newChild(KQueueEventLoopGroup.java:33)
    at com.couchbase.client.core.deps.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
    at com.couchbase.client.core.deps.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:60)
    at com.couchbase.client.core.deps.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:49)
    at com.couchbase.client.core.deps.io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)
    at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoopGroup.<init>(KQueueEventLoopGroup.java:111)
    at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoopGroup.<init>(KQueueEventLoopGroup.java:98)
    at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoopGroup.<init>(KQueueEventLoopGroup.java:74)
    at com.couchbase.client.core.env.IoEnvironment.createEventLoopGroup(IoEnvironment.java:510)
    at com.couchbase.client.core.env.IoEnvironment.<init>(IoEnvironment.java:303)
    at com.couchbase.client.core.env.IoEnvironment.<init>(IoEnvironment.java:66)
    at com.couchbase.client.core.env.IoEnvironment$Builder.build(IoEnvironment.java:726)
    at com.couchbase.client.core.env.CoreEnvironment.<init>(CoreEnvironment.java:153)
    at com.couchbase.client.java.env.ClusterEnvironment.<init>(ClusterEnvironment.java:53)
    at com.couchbase.client.java.env.ClusterEnvironment.<init>(ClusterEnvironment.java:46)
    at com.couchbase.client.java.env.ClusterEnvironment$Builder.build(ClusterEnvironment.java:213)

This happens during the Gradle build when all the test cases run. The Couchbase server is running inside a Testcontainers container:
https://www.testcontainers.org/modules/databases/couchbase/
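
The container is started roughly like this (a minimal sketch; the image tag, bucket name, and class names are placeholders rather than the exact test code):

import org.testcontainers.couchbase.BucketDefinition;
import org.testcontainers.couchbase.CouchbaseContainer;

import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.ClusterOptions;

public class CouchbaseTestSupport {

    // A single shared container for the whole test run; image tag and bucket name are placeholders.
    static final CouchbaseContainer COUCHBASE =
            new CouchbaseContainer("couchbase/server:7.0.3")
                    .withBucket(new BucketDefinition("test-bucket"));

    static Cluster connectToContainer() {
        COUCHBASE.start();
        // Connection details are exposed by the Testcontainers Couchbase module.
        return Cluster.connect(
                COUCHBASE.getConnectionString(),
                ClusterOptions.clusterOptions(COUCHBASE.getUsername(), COUCHBASE.getPassword()));
    }
}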

Hello @vishal, which version of the Java SDK are you using, and what test are you running? Are you doing any batch operations? If so, what batch size? Are you using the Reactive API? How many CPU cores or vCPUs are you using?

Hi Vishal. I wonder if it could be a resource leak.

Can you check the tests and see if they’re properly cleaning up all Cluster and ClusterEnvironment objects they create?

For every call to Cluster.connect() there should be a matching call to Cluster.disconnect(). If you’re customizing the cluster environment, for every call to ClusterEnvironment.Builder.build() there should be a matching call to ClusterEnvironment.shutdown().
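
For example, roughly like this per test class (a sketch assuming JUnit 5; the connection details are placeholders for whatever your tests actually use):

import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.ClusterOptions;
import com.couchbase.client.java.env.ClusterEnvironment;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;

class CouchbaseLifecycleTest {

    static ClusterEnvironment env;
    static Cluster cluster;

    @BeforeAll
    static void connect() {
        // Build the custom environment and connect once per test class.
        env = ClusterEnvironment.builder().build();
        cluster = Cluster.connect(
                "couchbase://127.0.0.1",
                ClusterOptions.clusterOptions("Administrator", "password").environment(env));
    }

    @AfterAll
    static void cleanUp() {
        // Mirror the setup: disconnect the cluster first, then shut down the environment you built.
        cluster.disconnect();
        env.shutdown();
    }
}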

Thanks,
David