Error from Java SDK

Hi

I have this error from my production server:

Caused by: java.util.NoSuchElementException: Sequence contains no elements
at rx.internal.operators.OperatorSingle$ParentSubscriber.onCompleted(OperatorSingle.java:115)
at rx.internal.operators.OperatorMerge$MergeSubscriber.emitLoop(OperatorMerge.java:656)
at rx.internal.operators.OperatorMerge$MergeSubscriber.emit(OperatorMerge.java:568)
at rx.internal.operators.OperatorMerge$InnerSubscriber.onCompleted(OperatorMerge.java:857)
at rx.internal.operators.OperatorZip$Zip.tick(OperatorZip.java:239)
at rx.internal.operators.OperatorZip$Zip$InnerSubscriber.onCompleted(OperatorZip.java:307)
at rx.internal.operators.OperatorOnBackpressureBuffer$BufferSubscriber.complete(OperatorOnBackpressureBuffer.java:163)
at rx.internal.util.BackpressureDrainManager.drain(BackpressureDrainManager.java:187)
at rx.internal.util.BackpressureDrainManager.terminateAndDrain(BackpressureDrainManager.java:115)
at rx.internal.operators.OperatorOnBackpressureBuffer$BufferSubscriber.onCompleted(OperatorOnBackpressureBuffer.java:134)
at rx.subjects.SubjectSubscriptionManager$SubjectObserver.onCompleted(SubjectSubscriptionManager.java:231)
at rx.subjects.AsyncSubject.onCompleted(AsyncSubject.java:101)
at com.couchbase.client.core.endpoint.query.parser.YasjlQueryResponseParser.finishParsingAndReset(YasjlQueryResponseParser.java:358)
at com.couchbase.client.core.endpoint.query.QueryHandlerV2.decodeResponse(QueryHandlerV2.java:176)
at com.couchbase.client.core.endpoint.query.QueryHandlerV2.decodeResponse(QueryHandlerV2.java:61)
at com.couchbase.client.core.endpoint.AbstractGenericHandler.decode(AbstractGenericHandler.java:287)
at com.couchbase.client.deps.io.netty.handler.codec.MessageToMessageCodec$2.decode(MessageToMessageCodec.java:81)
at com.couchbase.client.deps.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
at com.couchbase.client.deps.io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at com.couchbase.client.deps.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
at com.couchbase.client.deps.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
at com.couchbase.client.deps.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
at com.couchbase.client.deps.io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at com.couchbase.client.deps.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at com.couchbase.client.deps.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1302)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at com.couchbase.client.deps.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at com.couchbase.client.deps.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:135)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:646)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:581)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:460)
at com.couchbase.client.deps.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at com.couchbase.client.deps.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:811)

In my test environment, with a copy of the production data from a couple of months ago, I don't get this error.

The error only occurs when I run a function that processes many documents (65,000+). If I run the same query from the web admin interface it returns the data without any error.

I have tried to search for this error - but have not found any explanations or solutions.

This is my environment:
Couchbase Server: Community Edition 6.6.0 build 7909
Java SDK: couchbase-java-client-2.5.7.jar
Java version: 1.8

Hi @jda
Java SDK 2.5.7 is very old at this point (April 2018); can you try upgrading to 2.7.19? I don't believe this is a known issue, but it's always good to be running the latest and greatest.
And speaking of that, are you considering a move to SDK 3.x, which has been GA for over a year now? SDK 3.x uses a different parsing method than the 'Yasjl' one used in SDK 2.x, so if this is an issue in the parser, it certainly won't exist in 3.x.

It's a good point. I had to downgrade to 2.5.x at some point due to issues with the "Snappy flag" on Community Server, and have left it there since. I could try out the latest 2.7.x version to see if it behaves OK and fixes this issue.

I have not had the chance to look at the 3.x SDK and how many changes upgrading to it would require. I would normally want to run the most up-to-date version to get the latest fixes and performance improvements.

Thanks!

The change to 3.x is not too huge and is highly recommended; we have documentation with full details. (Don't be put off by the length of it :slightly_smiling_face: - it covers all services, and you likely won't need all of it.) The 3.x interface is cleaner and clearer, and there are a number of performance improvements under the hood too.

Thanks Graham

I'm installing 2.7.19 now in our dev & test environments. If that works I'll put it on the production server.

... and then it seems I'll need to have a look at the 3.x SDK again. I'm not going to use collections or scopes for now, but again, I would like to be on the latest version. There is just some work in doing this and making sure that the code also works with the new SDK...

Why are the metric-micrometer and metric-opentelemetry jars packaged with the 3.1.3 SDK? According to the documentation the only dependencies are:

  • java-client
  • core-io
  • reactor-core
  • reactive-streams

Just curious if I need to install the others too?

The code doesn't fail with the error in production any more! :+1:

However, it did time out (when generating and exporting some data)... I don't have anything fact-based, but my feeling is that it seems slower after upgrading to 2.7.19... I wonder if there are some upgrade/tuning steps that I missed by jumping from 2.5.7 directly to this version :slight_smile:

OK, I've started to try to upgrade to SDK 3.1.3...

There are many changes needed... :frowning:

Basically, all access to the JsonDocument classes etc. needs to change... so that's all of my approx. 30 DAO classes. Ouch... Not sure I can do that right now...

@jda the metrics and tracing jars are optional dependencies we include in the zip for convenience; they are not mandatory.


Can you quantify "slower"? Type of operation / latency / throughput etc. - also, can you show us your current configuration? It is hard to give generic advice :slight_smile:

Yep, I appreciate that.

		if (dbEnv == null) {
			dbEnv = DefaultCouchbaseEnvironment.builder().kvTimeout(60000) //60000ms = 60s
					.build();
		}
		Cluster cluster = CouchbaseCluster.create(dbEnv, ConfigurationBean.get().getDatabaseServerNames());
		cluster.authenticate(CB_DATA_USER, CB_DATA_PASSWORD);
		db = cluster.openBucket(CB_DATA_STORE);

This is the current setup.

Iā€™ve started to migrate the code to SDK 3.1.3 and then the same should look like this:

String timeoutOpts = "timeout.kvTimeout=60s";
Cluster cluster = Cluster.connect(ConfigurationBean.get().getDatabaseServerNames() + "?" + timeoutOpts, CB_DATA_USER ,CB_DATA_PASSWORD );
Bucket bucket = cluster.bucket(CB_DATA_STORE);
db = bucket.defaultCollection();

But there are a number of situations I find difficult to migrate:

  1. In rare situations I have to read data that I've just written (e.g. when creating a user object). In SDK 2 I used N1qlParams.build().consistency(ScanConsistency.STATEMENT_PLUS); and added it to the query (and reset it after the first call). How should I do this in SDK 3?
  2. I use JsonDocument in many places, setting data via doc.content(). Now I use a JsonObject instead - but how can I check whether a document is "new" (I used doc.cas() before)?

I know this is a little outside the original scope, but this is where I struggle at the moment trying to get to the newest version...

OK, I think I found a way to build QueryOptions with scan consistency.
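For anyone landing here later, a minimal sketch of what this looks like in SDK 3 (REQUEST_PLUS is the closest equivalent to SDK 2's STATEMENT_PLUS; the `cluster` variable and the query string are placeholders from the earlier snippets):

```java
import com.couchbase.client.java.query.QueryOptions;
import com.couchbase.client.java.query.QueryResult;
import com.couchbase.client.java.query.QueryScanConsistency;

// Wait for mutations pending at request time to be indexed before the
// query runs, so a just-written document is visible to this statement.
QueryResult result = cluster.query(
        "SELECT META().id FROM `" + CB_DATA_STORE + "` LIMIT 1",
        QueryOptions.queryOptions()
                .scanConsistency(QueryScanConsistency.REQUEST_PLUS));
```

As in SDK 2, it's worth only using this for the queries that need it, since waiting on the indexer adds latency.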

Using a JsonObject for the document, I just fell back to checking whether a specific item exists on the document. Not ideal, but it should work...

@jda the CAS is still returned from every mutation in the result - can you explain a little more how you'd check before that a doc is "new"?

Also note that you don't have to build the timeout into the connection string; you can still use the builder on the environment and set it on the TimeoutConfig.
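A rough sketch of that, assuming SDK 3.1 (note that when you create your own ClusterEnvironment you also own its lifecycle, and should shut it down after disconnecting):

```java
import java.time.Duration;
import com.couchbase.client.core.env.TimeoutConfig;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.ClusterOptions;
import com.couchbase.client.java.env.ClusterEnvironment;

// 60s KV timeout via the environment builder instead of the connection string
ClusterEnvironment env = ClusterEnvironment.builder()
        .timeoutConfig(TimeoutConfig.kvTimeout(Duration.ofSeconds(60)))
        .build();
Cluster cluster = Cluster.connect(
        ConfigurationBean.get().getDatabaseServerNames(),
        ClusterOptions.clusterOptions(CB_DATA_USER, CB_DATA_PASSWORD)
                .environment(env));

// later, on application shutdown/undeploy:
cluster.disconnect();
env.shutdown();
```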

Yes, but if I use the connection string then the client will handle closing the connection on its own - otherwise I would have to do that (which is really not possible in the application - it's a web site).

This is the way I did it before:

protected void updateRevisionInfo(JsonDocument doc, BaseData data) {
	if (doc.cas() == 0) { // New doc
		:
	}
}

But as I'm using the JsonObject now, I don't get the cas().

Not being able to shut down the client when the app stops is a general issue - why is there no way to shut down the cluster when the app stops/gets undeployed?

That is a bit ambiguous; the CAS value itself does not indicate whether something is new or not. If you perform an insert operation you'll get an exception if the document already exists - that is the "proper" way to find out ... unless I'm missing something in the overall workflow of your app.
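A sketch of that pattern (the method, `docId` and `content` are hypothetical names, not from the original code; DocumentExistsException lives in com.couchbase.client.core.error in SDK 3):

```java
import com.couchbase.client.core.error.DocumentExistsException;
import com.couchbase.client.java.Collection;
import com.couchbase.client.java.json.JsonObject;

void save(Collection collection, String docId, JsonObject content) {
    try {
        // insert() only succeeds if the document does not exist yet
        collection.insert(docId, content);
        // "new document" path: e.g. initialise revision info here
    } catch (DocumentExistsException e) {
        // "existing document" path: update it instead
        collection.replace(docId, content);
    }
}
```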

Leaving aside the doc.cas() == 0 check, which confuses me a bit also - you still get the CAS, it's just separate from the JsonDocument now. collection::get returns a GetResult that has a .cas() method, and the same goes for most read and write operations. To make it easier to reuse existing code, you could perhaps pass around the GetResult rather than the JsonDocument.
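A sketch of that, with `collection` and `docId` assumed from the earlier snippets:

```java
import com.couchbase.client.java.json.JsonObject;
import com.couchbase.client.java.kv.GetResult;

GetResult result = collection.get(docId);
long cas = result.cas();                       // CAS now lives on the result...
JsonObject content = result.contentAsObject(); // ...and content is fetched from it
// pass `result` (or cas + content together) through the DAO layer
// wherever JsonDocument used to go
```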

Well, I don't know of a way. It's an IBM XPages application, so I just thought that letting the SDK client handle it would be the safer way :slight_smile: