Right now, what I can think of is:
1) First, I need to find all the distinct expiryDay values from my bucket.
2) Then I need to query for the keywords or documentIds whose expiryDay is, say, 90, compute the difference between the document's timestamp and the current date, and check whether that difference is greater than or equal to 90 days. If it is, return the keywords or documentIds.
So there might be two different queries needed to achieve my objective. I also don't know how to create an index for them, since I don't know what the queries will look like.
I wonder if this is possible using the couchbase-java API, something similar to: Statement expiryDays = Select.select("don't know how to select distinct expiryDay").from("My_Bucket"); Statement keywords = Select.select("keyword").from("My_Bucket").where("don't know what to supply here");
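For the date-difference part of step 2, if it ends up being done client-side after fetching the documents, the arithmetic itself is simple. Here is a minimal sketch using plain java.time (the class and method names are my own for illustration, not part of the Couchbase SDK, and it assumes the document timestamp can be parsed into a LocalDate):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class ExpiryCheck {

    // Step 2 of the plan above: a document qualifies when the number of
    // days between its timestamp and today is >= its expiryDay value.
    static boolean isExpired(LocalDate docDate, LocalDate today, long expiryDays) {
        long ageInDays = ChronoUnit.DAYS.between(docDate, today);
        return ageInDays >= expiryDays;
    }

    public static void main(String[] args) {
        LocalDate today = LocalDate.of(2017, 1, 1);
        // 92 days old -> qualifies for expiryDay = 90
        System.out.println(isExpired(LocalDate.of(2016, 10, 1), today, 90)); // true
        // 31 days old -> does not qualify
        System.out.println(isExpired(LocalDate.of(2016, 12, 1), today, 90)); // false
    }
}
```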
Okay, but I am using 4.1.1; I should have mentioned that before. Anyway, I am now facing a problem while executing the DISTINCT query. It throws java.lang.RuntimeException: java.util.concurrent.TimeoutException when executing: Statement statement = Select.selectDistinct("expiryDay").from("My_Bucket").where(Expression.x("expiryDay").isNotNull());
which is equivalent to the following N1QL: SELECT DISTINCT expiryDay FROM My_Bucket WHERE expiryDay IS NOT NULL.
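For what it's worth, if the timestamp were stored as epoch milliseconds (an assumption on my part), I imagine the two steps could even collapse into a single N1QL statement, something like the sketch below; the index is only a guess at what might cover such a query:

```sql
-- Sketch only: assumes timestamp is stored as epoch milliseconds.
-- 86400000 ms = 1 day.
SELECT keyword, META().id AS documentId
FROM My_Bucket
WHERE expiryDay IS NOT NULL
  AND (NOW_MILLIS() - timestamp) / 86400000 >= expiryDay;

-- A secondary index that a query like this might use:
CREATE INDEX idx_expiry ON My_Bucket(expiryDay, timestamp, keyword)
WHERE expiryDay IS NOT NULL;
```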
But surprisingly, if I switch to Couchbase 4.5.0 it works fine.
Okay, this definitely looks like a bug in Couchbase Server (in my case 4.1.1; I don't know about 4.5.0). Earlier I had loaded nearly 10 million documents into CB (4.1.1) for testing purposes. Just a week back I flushed the entire bucket and kept only 4 documents for the test above.
Now when I run my DISTINCT query with a timeout of 30 seconds, it throws java.util.concurrent.TimeoutException. So I changed the query to a plain SELECT instead of DISTINCT, and saw thousands/millions of expiryDay values printed in my Java console.
I suspect that the index keeps a separate copy of the data for itself, which stays intact even after the data is flushed from the bucket. I wonder whether this has been fixed in newer versions of CB. If not, it is a real threat.
1) Definitely; I'll post the EXPLAIN output as soon as my server is accessible to me.
2) Just to mention, I did not reduce the bucket. I simply flushed the bucket and then inserted 4 documents.
3) That's a good suggestion, but in production I have to delete some data or move it to a different location. In that case, I can't drop and recreate the index every time I perform this task.
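(For reference, the drop-and-recreate cycle suggested would be something like the following; idx_expiry is a hypothetical index name, not one from my actual setup.)

```sql
DROP INDEX My_Bucket.idx_expiry;
CREATE INDEX idx_expiry ON My_Bucket(expiryDay);
```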
4) I am not saying the issue will not occur in 4.5.0. I am facing this problem in 4.1.1 because I stored nearly 10 million documents about 3 to 4 months back and flushed them just a week ago. So there is a 90% chance this issue might also appear in 4.5.0 or any other higher version if we repeat the same test scenario.