Java SDK Counter Bug

I am using counters as an AI id in my application.
I am using Couchbase Server 4.0 CE.

Using Java SDK versions > 2.2.0 I get DocumentDoesNotExist exceptions when creating new counters.

I am explicitly setting an initial value, which should not throw this exception according to this documentation:

See a sample code snippet:

String id = getCounterId();
JsonLongDocument value = bucket.counter(id, 1L, 1L); // delta 1, initial 1 — throws DocumentDoesNotExistException
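Until the server-side behavior is fixed, one defensive pattern is to create the counter document explicitly when the increment fails, then retry once. A minimal sketch of that retry logic, using a hypothetical in-memory stand-in rather than the real Couchbase bucket (class and method names here are illustrative, not SDK API; in the real SDK the exception would be `DocumentDoesNotExistException` and the create would be `bucket.insert(JsonLongDocument.create(id, 0L))`):

```java
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Toy stand-in for a Couchbase bucket, used only to show the fallback logic.
class SafeCounter {
    private final ConcurrentHashMap<String, AtomicLong> docs = new ConcurrentHashMap<>();

    // Simulates bucket.counter(id, delta): throws if the document is absent.
    long counter(String id, long delta) {
        AtomicLong doc = docs.get(id);
        if (doc == null) throw new NoSuchElementException(id); // stands in for DocumentDoesNotExistException
        return doc.addAndGet(delta);
    }

    // Simulates bucket.insert(JsonLongDocument.create(id, value)).
    void insert(String id, long value) {
        docs.putIfAbsent(id, new AtomicLong(value));
    }

    // The workaround: try to increment; on "document does not exist",
    // create the counter with its initial value and retry once.
    long nextId(String id) {
        try {
            return counter(id, 1L);
        } catch (NoSuchElementException missing) {
            insert(id, 0L);
            return counter(id, 1L);
        }
    }
}
```

The create-then-retry is safe to race: if two clients both see the miss, `putIfAbsent` lets only one create succeed and both retries increment the same document.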

It's a known server-side issue. See my post: Couchbase 4.0 community huge delay returning result with "full ejection" set

Public workaround: do not use "full eviction" for the bucket.
IMHO, this makes 4.0.0 CE completely unusable for configurations with 512 MB…1 GB RAM.

Ahh I see, thanks for the information.
I don't think that workaround will be possible for us, as we are storing a large history of data in our bucket; the most recent data is accessed most often, but all of it must remain accessible. Therefore the RAM required to store all keys is extremely high.

While not as performant, do you think it would be reasonable for me to implement the AI id feature using CB locks? I'd prefer not to do this, but if the 4.1 release for CE is not forthcoming soon I may need to implement some workaround.

I just don’t know what “AI id feature” is :slight_smile:

I suspect 4.1.1-EE (released recently) has a broken indexer; see my post: 4.1.0-EE vs 4.1.1-EE: indexer too slow
So 4.1.1-EE really does fix this, but (probably) introduces a broken indexer. And, of course, there is no info about a (possible?) 4.1.1-CE release.

I’m referring to the traditional SQL ‘Auto Increment Id’ feature :slight_smile:

Ah, ok :slight_smile: I thought "AI = Artificial intelligence" :slight_smile:
I don’t think you should “reinvent the wheel” by using locks.

  1. Google (or search this forum) for something like "couchbase sql auto increment implementation"; obviously, you are not the first one in the world with this problem.
  2. "Think different": does your application really need "sequential ids"? Try timestamps as IDs, data-chunk hashes as IDs, user-defined IDs, etc. And google for "different ideas" too.
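To illustrate option 2, here is a small sketch of two ID schemes that need no server round-trip at all (the class and method names are illustrative, not from any library):

```java
import java.security.SecureRandom;
import java.util.UUID;

// Two server-free ID schemes, sketched as static helpers.
class IdIdeas {
    private static final SecureRandom RANDOM = new SecureRandom();

    // A random UUID: no ordering guarantees, but no coordination needed either.
    static String uuidId() {
        return UUID.randomUUID().toString();
    }

    // A roughly time-ordered ID: millisecond timestamp in base 36, plus a
    // random suffix to avoid collisions when two documents are created in
    // the same millisecond.
    static String timeOrderedId() {
        long suffix = Math.floorMod(RANDOM.nextLong(), 46656L); // 3 base-36 digits
        return Long.toString(System.currentTimeMillis(), 36) + "-" + Long.toString(suffix, 36);
    }
}
```

Neither scheme gives gap-free sequential numbers, but both avoid the counter document entirely, which sidesteps this bug.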

Anyway, I think further discussion is out of scope for the current topic.

I got this error too.
How do I resolve it?
@grep, what does "full eviction" mean?

bucket.async()
    .counter("counter", 1)
    .flatMap(new Func1<JsonLongDocument, Observable<?>>() {
        public Observable<?> call(JsonLongDocument counterRes) {
            String id = "doc" + counterRes.content();
            System.out.println("id = " + id);
            return bucket.async().insert(RawJsonDocument.create(id, "dddd"));
        }
    })
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(
at rx.observers.Subscribers$5.onNext(
at rx.internal.operators.OnSubscribeDoOnEach$DoOnEachSubscriber.onNext(
at rx.internal.producers.SingleProducer.request(
at rx.Subscriber.setProducer(
at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(
at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(
at rx.Subscriber.setProducer(
at rx.Subscriber.setProducer(
at rx.subjects.AsyncSubject.onCompleted(
at com.couchbase.client.core.endpoint.AbstractGenericHandler.completeResponse(
at com.couchbase.client.core.endpoint.AbstractGenericHandler.access$000(
at com.couchbase.client.core.endpoint.AbstractGenericHandler$
at java.util.concurrent.Executors$
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(
at java.util.concurrent.ScheduledThreadPoolExecutor$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: rx.exceptions.OnErrorThrowable$OnNextValue: OnError while emitting onNext value: com.couchbase.client.core.message.kv.CounterResponse.class
at rx.exceptions.OnErrorThrowable.addValueAsLastCause(
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(
… 20 more
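The flatMap chain in the post above (increment a counter, then insert a document keyed by the new value) can be sketched with the JDK's own CompletableFuture, where thenCompose plays the role of RxJava's flatMap. The in-memory "bucket" here is a hypothetical stand-in, not the Couchbase API:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Toy stand-in for the async counter-then-insert chain.
class AsyncChainSketch {
    private final AtomicLong counter = new AtomicLong();                // the "counter" document
    private final Map<String, String> docs = new ConcurrentHashMap<>(); // stored documents

    // thenCompose sequences the insert after the increment, like flatMap does.
    CompletableFuture<String> insertNext(String body) {
        return CompletableFuture.supplyAsync(counter::incrementAndGet)  // ~ bucket.async().counter(...)
                .thenCompose(n -> {
                    String id = "doc" + n;
                    docs.put(id, body);                                 // ~ bucket.async().insert(...)
                    return CompletableFuture.completedFuture(id);
                });
    }

    String get(String id) {
        return docs.get(id);
    }
}
```

Note that whichever API is used, the chain still fails at the first step if the counter document does not exist; the create-the-counter-first workaround applies here too.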

Hello guys,
Does anybody have a solution for this exception?
I also get this exception when I use the Couchbase client to increment/decrement a counter.

bucket.counter("sequenceNoGenerator", 1L, 1L, 1000000) // delta, initial, expiry