[MB-12500] Indexing with more than one KV node does not work with toy-indexing Created: 29/Oct/14  Updated: 30/Oct/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Prathibha Bisarahalli Assignee: Pratap Chakravarthy
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
In reference to bug https://www.couchbase.com/issues/browse/CBIDXT-246: currently this works only for one KV node, i.e. one node running KV, the indexer, and the query engine.

If Couchbase is set up on two nodes that are joined to a cluster, secondary index creation fails with "Internal error". This is because the projector is started on each node with KV address 127.0.0.1, but when the nodes are joined to a cluster, their host names are the internal IPs of the AWS nodes. The projector is then unable to get a mutation stream from the KV nodes.

This currently blocks testing with a multi-node setup. However, a single-node setup works fine if the node name is configured as 127.0.0.1 as part of Couchbase setup.


 Comments   
Comment by Pratap Chakravarthy [ 30/Oct/14 ]
The hostname supplied to projector should match the vbmap obtained from ns_server.
Comment by Prathibha Bisarahalli [ 30/Oct/14 ]
Right. Currently the host name supplied is 127.0.0.1, but ns_server has the host names as the IP addresses of the nodes after they are joined to a cluster.
Comment by Pratap Chakravarthy [ 30/Oct/14 ]
One way is to assign the correct hostname while adding nodes to the cluster; this is allowed via the UI. But I am not sure whether we are going to advise our users (customers) to do that.
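A minimal sketch of the check being discussed, assuming go-couchbase; the function and variable names are illustrative, not the projector's actual code. It fetches the vbmap from ns_server and verifies that the projector's configured KV address appears in the server list:

```
package main

import (
	"fmt"
	"log"

	"github.com/couchbaselabs/go-couchbase"
)

// kvAddressInVBMap reports whether kvAddr appears in the bucket's vbucket
// server map as published by ns_server. If the projector was started with
// 127.0.0.1 but ns_server lists internal IPs, this returns false.
func kvAddressInVBMap(clusterURL, bucketName, kvAddr string) (bool, error) {
	client, err := couchbase.Connect(clusterURL)
	if err != nil {
		return false, err
	}
	pool, err := client.GetPool("default")
	if err != nil {
		return false, err
	}
	bucket, err := pool.GetBucket(bucketName)
	if err != nil {
		return false, err
	}
	defer bucket.Close()

	for _, server := range bucket.VBServerMap().ServerList {
		if server == kvAddr {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := kvAddressInVBMap("http://127.0.0.1:8091", "default", "127.0.0.1:11210")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("projector KV address matches vbmap:", ok)
}
```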




[MB-12545] "Wire Protocol Between Query and Index" needs to be updated in 2i documentation Created: 04/Nov/14  Updated: 12/Nov/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Prathibha Bisarahalli Assignee: Sriram Melkote
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: No

 Description   
The 2i documentation for "Wire Protocol Between Query and Index" describes an HTTP-based protocol, but that has been changed to protobuf over TCP. The documentation at https://docs.google.com/document/d/1j9D4ryOi1d5CNY5EkoRuU_fc5Q3i_QwIs3zU9uObbJY/edit?usp=sharing needs to be updated.




[MB-12422] [Windows 2012 R2] Couchbase crashes after installation Created: 23/Oct/14  Updated: 23/Oct/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Parag Agarwal Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: windows 2012 r2, 3.0.1-1440

Triage: Untriaged
Link to Log File, atop/blg, CBCollectInfo, Core dump: https://s3.amazonaws.com/bugdb/jira/MB-12422/172.23.106.34-10232014-1313-diag.zip
Is this a Regression?: Unknown

 Description   
Couchbase crashes after installation.

Cannot connect to localhost:8091; seeing Erlang crashes.

 Comments   
Comment by Aleksey Kondratenko [ 23/Oct/14 ]
It's caused by Erlang's inability to spawn its external port program that performs name resolution:

[error_logger:error,2014-10-23T13:15:49.857,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
     Supervisor: {local,kernel_safe_sup}
     Context: child_terminated
     Reason: {eacces,
                     [{erlang,open_port,
                          [{spawn,"inet_gethost 4 "},[{packet,4},eof,binary]],
                          []},
                      {inet_gethost_native,do_open_port,2,
                          [{file,"inet_gethost_native.erl"},{line,400}]},
                      {inet_gethost_native,server_init,2,
                          [{file,"inet_gethost_native.erl"},{line,183}]}]}
     Offender: [{pid,<0.214.0>},
                  {name,inet_gethost_native_sup},
                  {mfargs,{inet_gethost_native,start_link,undefined}},
                  {restart_type,temporary},
                  {shutdown,1000},
                  {child_type,worker}]
Comment by Parag Agarwal [ 23/Oct/14 ]
Alk/Dave,

172.23.106.43 has issues after uninstalling 1440, rebooting, and installing 1421. The box I initially referred to seems to have been shut down, so I tried 172.23.106.43, which had similar issues.

Build 1440 works fine, as you can observe in the sanity tests on 172.23.107.90. This implies the issue is isolated to that set of VMs.

Comment by Parag Agarwal [ 23/Oct/14 ]
fails for 1402 as well
Comment by Ray Chin [ 23/Oct/14 ]
After disabling IPv6 and rebooting the VM, it seems the process can bind to the port. The Windows 2012 template we use enables both IPv6 and IPv4 by default.




[MB-12257] n1ql + 3.0.0 primary index doesn't work after reloading a bucket or loading more items Created: 25/Sep/14  Updated: 27/Oct/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-alpha
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Iryna Mironava Assignee: Manik Taneja
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: build 3.0.0-1209
n1ql version : dp3

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
After installing Couchbase, I run CREATE PRIMARY INDEX on my_bucket and start running queries.
If I add items and run the same query, it keeps returning the old result.
If I delete the bucket and then add a bucket with the same name again, every query returns the error: {u'code': 5000, u'message': u'Bucket default not found.', u'caller': u'view_index:200', u'key': u'Internal Error'}

How can I update the index? Or how can I delete it and create it again? If the bucket is deleted, should the index also be deleted?

 Comments   
Comment by Gerald Sangudi [ 25/Sep/14 ]
Test blocker.
Comment by Manik Taneja [ 26/Sep/14 ]
In general we need a mechanism that informs the query engine whenever a bucket or an index is removed, either via the UI or through another instance of a query engine. I guess this set of issues will be addressed once we have a query metadata store in place.

For now the best we can do is have a long-poll function that checks the pools/buckets at a preconfigured frequency. Results may be inconsistent until the next pass of the pool refresh thread.
Comment by Manik Taneja [ 29/Sep/14 ]
As for the new items not showing up in the query: by default the views engine uses stale=update_after, which means the views are updated right after a query is made. So the next time you run the same query, you should see the updated results.
Comment by Gerald Sangudi [ 02/Oct/14 ]
Hi Iryna,

For now, please insert a sleep or delay in the test, between CREATE INDEX and the time you issue the query. This should unblock you.

Thanks.
Comment by Iryna Mironava [ 03/Oct/14 ]
Hi, sure, I can add a sleep, but we also recreate the bucket, and I cannot delete or update the primary index after that. Can you advise what we can do about that?
Comment by Gerald Sangudi [ 14/Oct/14 ]
Hi Iryna,

Deleting and recreating the bucket will not work, and primary indexes cannot (or should not) be deleted. Did you try adding the sleep delay, and did that help?

Thanks.
Comment by Iryna Mironava [ 15/Oct/14 ]
I did, but how much time should I wait? Is there any indicator that can show me the index was built for all my items?
Our tests recreate the bucket many times, so it is not even supposed to work in this case? Am I understanding correctly?
Comment by Gerald Sangudi [ 16/Oct/14 ]
Hi Iryna,

We can unblock your testing, and then we can fix the underlying issues. To unblock your testing:

- Primary indexes cannot be deleted, and query engine keeps state of indexes and buckets.

- So to run a test that creates the same bucket many times, you must: kill the query engine, delete the bucket (not via N1QL), create the bucket again, restart the query engine, and create the primary index.

The problem of being notified when CREATE PRIMARY INDEX completes is in a separate JIRA ticket.
Comment by Iryna Mironava [ 16/Oct/14 ]
thanks! these steps helped
Comment by Manik Taneja [ 22/Oct/14 ]
From my previous comments :

"In general we need a mechanism that informs the query engine whenever a bucket or an index is removed either via the UI or through another instance of a query engine. I guess these set of issues will be addressed once we have a query metadata store in place. "

Will address this in alpha
Comment by Gerald Sangudi [ 27/Oct/14 ]
Hi Iryna,

I'm downgrading this from "Test Blocker". If you feel that it is still a Test Blocker, please change it back.

Thank you!




[MB-12118] XDCR Replication lag over WAN, 2.5.1 vs 3.0.0-1205 147% regression Created: 03/Sep/14  Updated: 28/Oct/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Thomas Anderson Assignee: Thomas Anderson
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: atlas 5x5 two cluster configuration.

Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: http://ci/sc/couchbase.com/job/xdcr-5x5/483/artifact/172.23.100.17.zip
 http://ci/sc/couchbase.com/job/xdcr-5x5/483/artifact/172.23.100.18.zip
 http://ci/sc/couchbase.com/job/xdcr-5x5/483/artifact/172.23.100.19.zip
 http://ci/sc/couchbase.com/job/xdcr-5x5/483/artifact/172.23.100.20.zip
 http://ci/sc/couchbase.com/job/xdcr-5x5/483/artifact/172.23.100.21.zip
 http://ci/sc/couchbase.com/job/xdcr-5x5/483/artifact/172.23.100.22.zip
 http://ci/sc/couchbase.com/job/xdcr-5x5/483/artifact/172.23.100.23.zip
 http://ci/sc/couchbase.com/job/xdcr-5x5/483/artifact/172.23.100.24.zip
 http://ci/sc/couchbase.com/job/xdcr-5x5/483/artifact/172.23.100.25.zip
 http://ci/sc/couchbase.com/job/xdcr-5x5/483/artifact/172.23.100.26.zip
Is this a Regression?: Yes

 Description   
Replication lag increased over a simulated WAN to the XDCR cluster.


 Comments   
Comment by Cihan Biyikoglu [ 04/Sep/14 ]
I suggest we close this as won't-fix, if you can test and validate that changing the replicator count fixes the issue.
thanks
-cihan




[MB-12499] Projector blocks indexer when projector is started with a different KV address than KV host name Created: 29/Oct/14  Updated: 29/Oct/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Prathibha Bisarahalli Assignee: Pratap Chakravarthy
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: No

 Description   
If the projector is started with a KV address different from the one the cluster was initialized with, the indexer hangs when a create index query is issued. Ideally the projector should return a suitable error in response to the MutationStream request instead of a success response, which makes the indexer wait for mutations and hence blocks it.

We have seen this scenario occur while testing on an AWS node. The projector was started with a KV address of 127.0.0.1 (after ns_server integration), but when Couchbase was set up, the server host name was specified as the public DNS name (which is a valid host name). In this case, when the create index query was fired, the projector could not connect to the KV address due to the host name mismatch, and instead of an error being returned, the indexer hung at the create index query. The only way to recover is to restart the indexer.

Though this particular scenario may be rare (it could happen due to a bug or in some other scenario not tested yet), it should still be handled in a way that the indexer component does not get blocked, and an error should be returned.
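A minimal sketch of the non-blocking pattern being asked for, in Go with illustrative names (not the actual indexer code): the indexer bounds its wait for the first mutation, so a silently failed stream surfaces as an error instead of a hang.

```
package main

import (
	"errors"
	"fmt"
	"time"
)

// Mutation stands in for a projector mutation message (illustrative type).
type Mutation struct{ Seqno uint64 }

// waitForFirstMutation bounds the wait on a mutation stream. If the projector
// accepted the stream request but never delivers (e.g. it could not reach the
// KV node), the caller gets an error instead of blocking forever.
func waitForFirstMutation(stream <-chan Mutation, timeout time.Duration) (Mutation, error) {
	select {
	case m, ok := <-stream:
		if !ok {
			return Mutation{}, errors.New("mutation stream closed by projector")
		}
		return m, nil
	case <-time.After(timeout):
		return Mutation{}, errors.New("timed out waiting for mutations; check projector KV address")
	}
}

func main() {
	stream := make(chan Mutation) // never written to: simulates a dead stream
	if _, err := waitForFirstMutation(stream, 2*time.Second); err != nil {
		fmt.Println("create index failed:", err)
	}
}
```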




[MB-12396] Projector continues to retry UPR Connection even after deletion of a bucket with secondary index(es) Created: 22/Oct/14  Updated: 29/Oct/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Prathibha Bisarahalli Assignee: Pratap Chakravarthy
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Create a bucket and a 2i index on it.
Delete the bucket.

From the projector logs, we can see that the projector keeps retrying the connection to the deleted bucket:

2014/10/22 15:36:43 go-couchbase: Error connecting to upr feed of 127.0.0.1:12000: MCResponse status=0x20, opcode=SASL_AUTH, opaque=0, msg: Auth failure
2014/10/22 15:36:43 go-couchbase: Error connecting to upr feed of 127.0.0.1:12000: MCResponse status=0x20, opcode=SASL_AUTH, opaque=0, msg: Auth failure
2014/10/22 15:37:13 go-couchbase: UPR connection lost; reconnecting to bucket "test1" in 30s
2014/10/22 15:37:13 go-couchbase: UPR connection lost; reconnecting to bucket "test1" in 30s
2014/10/22 15:37:13 Unable to refresh bucket HTTP error 404 Object Not Found getting "http://127.0.0.1:9001/pools/default/buckets/test1?bucket_uuid=b47791fa38e35cda4e0ea4e7d4ad9ceb": Requested resource not found.
 
2014/10/22 15:37:13 Unable to refresh bucket HTTP error 404 Object Not Found getting "http://127.0.0.1:9001/pools/default/buckets/test1?bucket_uuid=b47791fa38e35cda4e0ea4e7d4ad9ceb": Requested resource not found.
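A sketch of the behavior this implies, in Go with illustrative names (not go-couchbase's actual retry loop): treat a 404 on bucket refresh as permanent, tearing the feed down instead of reconnecting forever.

```
package main

import (
	"fmt"
	"net/http"
	"time"
)

// refreshBucket re-reads the bucket definition from ns_server. A 404 here
// (or a uuid mismatch) means the bucket was deleted, not that it is
// temporarily unreachable.
func refreshBucket(url string) (deleted bool, err error) {
	resp, err := http.Get(url)
	if err != nil {
		return false, err // transient: network error, worth retrying
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusNotFound, nil
}

// retryLoop reconnects on transient errors but exits permanently once the
// bucket is gone, instead of retrying forever as in the log above.
func retryLoop(bucketURL string) {
	for {
		deleted, err := refreshBucket(bucketURL)
		if deleted {
			fmt.Println("bucket deleted; shutting down UPR feed")
			return
		}
		if err != nil {
			time.Sleep(30 * time.Second) // transient: back off and retry
			continue
		}
		// ... (re)establish the UPR feed here ...
		return
	}
}

func main() {
	retryLoop("http://127.0.0.1:9001/pools/default/buckets/test1")
}
```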





[MB-12414] Write unit tests for packages in https://github.com/Xiaomei-Zhang/couchbase_goxdcr_impl Created: 22/Oct/14  Updated: 12/Nov/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Critical
Reporter: Aruna Piravi Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release




[MB-12652] Couchbase XDCR: some keys do not replicate Created: 12/Nov/14  Updated: 13/Nov/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: vinhbt Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
I have 2 data center groups (group A and group B); each group has 2 nodes.
I set up bidirectional XDCR between them, but some documents in my vbuckets do not sync when changed.

 Comments   
Comment by Aleksey Kondratenko [ 13/Nov/14 ]
Without any diagnostics I cannot diagnose what is not working. Please collect diagnostics on all nodes (http://i.imgur.com/A167Hn7.png) and upload them somewhere (attach here or upload to a service like gdrive or dropbox).




[MB-12157] Intra-replication falls behind ops, causing a data-loss situation Created: 09/Sep/14  Updated: 19/Nov/14

Status: In Progress
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0.1, 3.0, 3.0-Beta
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Thomas Anderson Assignee: Thomas Anderson
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 4 node cluster; 4 core nodes; beer-sample application run at 60Kops (50/50 ratio), nodes provisioned on RightScale EC2 x1.large images

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Yes

 Description   
The intra-replication queue grows to unacceptable limits, exposing multiple seconds of queued replication to data loss.
The problem is more pronounced on the RightScale-provisioned cluster, but can be seen on local physical clusters with a long enough test run (>20 min); recovery requires stopping the input request queue.
Initial measurements of the Erlang process suggest that minor retries on scheduled network I/O eventually build up into a limit on the push of replication data. scheduler_wait appears to be the consuming element; the epoll_wait counter increases per measurement, as does the mean wait time, suggesting thrashing in the Erlang event scheduler. There are various papers/presentations suggesting Erlang is sensitive to the balance of tasks (a mix of long and short events can cause throughput issues).

cbcollect_info logs will be attached shortly.

 Comments   
Comment by Aleksey Kondratenko [ 09/Sep/14 ]
Still don't have any evidence. Cannot own this ticket until evidence is provided.




[MB-12639] Multiple index scans while KV Creates are in progress lead to "Error in scan response" and indexer crash Created: 12/Nov/14  Updated: 20/Nov/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Prathibha Bisarahalli Assignee: Deepkaran Salooja
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: File ErrorInScanResponse_ScanDuringCreates    
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Scenario:

With a 2i index being created and while doc keys were being created in KV (on the order of 50K–100K keys), multiple sequential scan requests led to the indexer crashing. The select query gave an "Error in scan response" error.

Additionally, after the creates were done, any further select query also resulted in the same error, "Error in scan response":

cbq> select personal_details.first_name, profile_details.loyalty.loyalty_score from default where profile_details.loyalty.loyalty_score < 5;
{
    "error":
        {
            "caller": "lsm_index:119",
            "cause": "Post http://localhost:9101/scan: EOF",
            "code": 5000,
            "key": "Internal Error",
            "message": "Error in Scan response"
        }
}


I tried the repro below twice, and in both cases it led to the same error.

Repro steps that I followed:
1. Create a bucket, populate 100 docs (I used: ./gen_user_profiles.py -d -N 100 )
2. Create a 2i on the bucket: create index in12_loyalty on default(profile_details.loyalty.loyalty_score) using forestdb;
3. Create about 100K documents (I used: ./gen_user_profiles.py -d -N 100000) .
4. While documents are being created, issue index scan requests: select personal_details.first_name, profile_details.loyalty.loyalty_score from default where profile_details.loyalty.loyalty_score < 5;

During the initial query requests, data is returned. Around the time there are ~50K docs in the bucket, the select query starts returning the error "Error in Scan response".

Error from indexer logs:

[ns_server:warn,2014-11-12T5:55:23.995,babysitter_of_n_0@127.0.0.1:<0.96.0>:ns_port_server:log:186]Dropped 800 log lines from indexer
[user:info,2014-11-12T5:55:23.996,n_0@127.0.0.1:<0.563.0>:ns_log:crash_consumption_loop:70]Port server indexer on node 'babysitter_of_n_0@127.0.0.1' exited with status 2. Restarting. Messages: Snapshot: Type: 5 Start: 70 End: 70
rflags 0x10246
cs 0xe033
fs 0x0
gs 0x0
[error_logger:error,2014-11-12T5:55:23.997,babysitter_of_n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.96.0> terminating
** Last message in was {#Port<0.5975>,{exit_status,2}}
** When Server state == {state,#Port<0.5975>,indexer,
                               {["gs 0x0","fs 0x0","cs 0xe033",
                                 "rflags 0x10246"],
                                ["\tSnapshot: Type: 5 Start: 70 End: 70 "]},
                               {ok,{1415789724013,#Ref<0.0.0.6744>}},
                               ["\tSnapshot: Type: 1 Start: 75 End: 75 ",
                                "\tMeta: Bucket: default Vbucket: 655 Vbuuid: 3454616442812 Seqno: 0 ",
                                "\tStreamId: 0","\tType: 7",
                                "\tMessage: MsgStream",
                                "05:55:23.813310 [DEBUG] Timekeeper::handleSnapshotMarker ",
                                "\tSnapshot: Type: 1 Start: 61 End: 61 ",
                                "\tMeta: Bucket: default Vbucket: 318 Vbuuid: 199361847244951 Seqno: 0 ",
                                "\tStreamId: 0","\tType: 7",
                                "\tMessage: MsgStream",
                                "05:55:23.813275 [DEBUG] Timekeeper::handleSnapshotMarker ",
                                "\tSnapshot: Type: 5 Start: 60 End: 60 ",
                                "\tMeta: Bucket: default Vbucket: 237 Vbuuid: 260384674554459 Seqno: 0 ",
                                "\tStreamId: 0","\tType: 7",
                                "\tMessage: MsgStream",
                                "05:55:23.812963 [DEBUG] Timekeeper::handleSnapshotMarker ",
                                "\tSnapshot: Type: 5 Start: 60 End: 60 ",
                                "\tMeta: Bucket: default Vbucket: 362 Vbuuid: 164872837798132 Seqno: 0 ",
                                "\tStreamId: 0","\tType: 7",
                                "\tMessage: MsgStream",
                                "05:55:23.812929 [DEBUG] Timekeeper::handleSnapshotMarker ",
                                "\tSnapshot: Type: 5 Start: 56 End: 56 ",
                                "\tMeta: Bucket: default Vbucket: 777 Vbuuid: 45756131265848 Seqno: 0 ",
                                "\tStreamId: 0","\tType: 7",
                                "\tMessage: MsgStream",
                                "05:55:23.812399 [DEBUG] Timekeeper::handleSnapshotMarker ",
                                "\tSnapshot: Type: 5 Start: 51 End: 51 ",
                                "\tMeta: Bucket: default Vbucket: 726 Vbuuid: 157497853464131 Seqno: 0 ",
                                "\tStreamId: 0","\tType: 7",
                                "\tMessage: MsgStream",
                                "05:55:23.812263 [DEBUG] Timekeeper::handleSnapshotMarker ",
                                "\tSnapshot: Type: 1 Start: 55 End: 55 ",
                                "\tMeta: Bucket: default Vbucket: 384 Vbuuid: 27138642466517 Seqno: 0 ",
                                "\tStreamId: 0","\tType: 7",
                                "\tMessage: MsgStream",
                                "05:55:23.812157 [DEBUG] Timekeeper::handleSnapshotMarker ",
                                "\tSnapshot: Type: 1 Start: 92 End: 92 ",
                                "\tMeta: Bucket: default Vbucket: 176 Vbuuid: 210344920786279 Seqno: 0 ",
                                "\tStreamId: 0","\tType: 7",
                                "\tMessage: MsgStream",
                                "05:55:23.812074 [DEBUG] Timekeeper::handleSnapshotMarker ",
                                "\tSnapshot: Type: 1 Start: 75 End: 75 ",
                                "\tMeta: Bucket: default Vbucket: 284 Vbuuid: 251422750423866 Seqno: 0 ",
                                "\tStreamId: 0","\tType: 7",
                                "\tMessage: MsgStream",
                                "05:55:23.812039 [DEBUG] Timekeeper::handleSnapshotMarker ",
                                "\tSnapshot: Type: 5 Start: 60 End: 60 ",
                                "\tMeta: Bucket: default Vbucket: 875 Vbuuid: 248521441493134 Seqno: 0 ",
                                "\tStreamId: 0","\tType: 7",
                                "\tMessage: MsgStream",
                                "05:55:23.811913 [DEBUG] Timekeeper::handleSnapshotMarker "],
                               800,undefined}
** Reason for termination ==
** {abnormal,2}

[ns_server:info,2014-11-12T5:55:23.997,babysitter_of_n_0@127.0.0.1:<0.95.0>:supervisor_cushion:handle_info:58]Cushion managed supervisor for indexer failed: {abnormal,2}
[error_logger:error,2014-11-12T5:55:24.002,babysitter_of_n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:
    initial call: ns_port_server:init/1
    pid: <0.96.0>
    registered_name: []
    exception exit: {abnormal,2}
      in function gen_server:terminate/6
    ancestors: [<0.95.0>,ns_child_ports_sup,ns_babysitter_sup,<0.56.0>]
    messages: [{'EXIT',#Port<0.5975>,normal}]
    links: [<0.95.0>]
    dictionary: []
    trap_exit: true
    status: running
    heap_size: 46368
    stack_size: 24
    reductions: 6550598
  neighbours:

[error_logger:error,2014-11-12T5:55:24.003,babysitter_of_n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.95.0> terminating
** Last message in was {die,{abnormal,2}}
** When Server state == {state,indexer,5000,
                               {1415,789390,445217},
                               undefined,infinity}
** Reason for termination ==
** {abnormal,2}

[ns_server:debug,2014-11-12T5:55:24.004,babysitter_of_n_0@127.0.0.1:<0.100.0>:supervisor_cushion:init:39]starting ns_port_server with delay of 5000
[error_logger:error,2014-11-12T5:55:24.004,babysitter_of_n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:
    initial call: supervisor_cushion:init/1
    pid: <0.95.0>
    registered_name: []
    exception exit: {abnormal,2}
      in function gen_server:terminate/6
    ancestors: [ns_child_ports_sup,ns_babysitter_sup,<0.56.0>]
    messages: []
    links: [<0.72.0>]
    dictionary: []
    trap_exit: true
    status: running
    heap_size: 1597
    stack_size: 24
    reductions: 1574
  neighbours:

[error_logger:error,2014-11-12T5:55:24.005,babysitter_of_n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
     Supervisor: {local,ns_child_ports_sup}
     Context: child_terminated
     Reason: {abnormal,2}
     Offender: [{pid,<0.95.0>},
                  {name,
                      {indexer,
                          "/root/work/source/couchbase/install/bin/indexer",
                          ["-vbuckets=1024","-projector=127.0.0.1:9999",
                           '-log=2'],
                          [use_stdio,exit_status,stderr_to_stdout,stream]}},
                  {mfargs,
                      {supervisor_cushion,start_link,
                          [indexer,5000,infinity,ns_port_server,start_link,
                           [#Fun<ns_child_ports_sup.2.120517221>]]}},
                  {restart_type,permanent},
                  {shutdown,86400000},
                  {child_type,worker}]


[error_logger:info,2014-11-12T5:55:24.010,babysitter_of_n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
          supervisor: {local,ns_child_ports_sup}
             started: [{pid,<0.100.0>},
                       {name,
                           {indexer,
                               "/root/work/source/couchbase/install/bin/indexer",
                               ["-vbuckets=1024","-projector=127.0.0.1:9999",
                                '-log=2'],
                               [use_stdio,exit_status,stderr_to_stdout,
                                stream]}},
                       {mfargs,
                           {supervisor_cushion,start_link,
                               [indexer,5000,infinity,ns_port_server,
                                start_link,
                                [#Fun<ns_child_ports_sup.2.120517221>]]}},
                       {restart_type,permanent},
                       {shutdown,86400000},
                       {child_type,worker}]

[ns_server:info,2014-11-12T5:55:24.199,babysitter_of_n_0@127.0.0.1:<0.94.0>:ns_port_server:log:181]projector<0.94.0>: 05:55:23.997198 [ERROR] dataport wrote only 0 bytes for header
projector<0.94.0>: 05:55:23.997218 [ERROR] [DataportClient:"localhost:8100"] transport "127.0.0.1:43050" `dataport.packetWrite`
projector<0.94.0>: 05:55:23.998272 [ERROR] [endpc(1415789491150104523) localhost:9999:MAINT_STREAM_TOPIC:localhost:8100] flushBuffers() secondary.closed

[ns_server:info,2014-11-12T5:55:24.260,babysitter_of_n_0@127.0.0.1:<0.101.0>:ns_port_server:log:181]indexer<0.101.0>: 05:55:24.059147 [INFO] Indexer::NewIndexer Status INIT
indexer<0.101.0>: 05:55:24.059226 [INFO] Indexer::NewIndexer Starting with Vbuckets 1024
indexer<0.101.0>: 05:55:24.059487 [INFO] Indexer::NewIndexer Status ACTIVE
indexer<0.101.0>: 05:55:24.059529 [INFO] CbqBridge::initCbqBridge Listening on :9101

Full logs (from start of cluster_run) attached.





[MB-12497] If index creation fails with "Duplicate Index Name", further index creations are blocked on same bucket Created: 29/Oct/14  Updated: 22/Nov/14

Status: In Progress
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Prathibha Bisarahalli Assignee: Deepkaran Salooja
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: File MultipleInitialBuild1    
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Summary:

When secondary index creation fails with the error "Duplicate Index Name", creation of any subsequent index fails with the error "Index Build Already In Progress. Multiple Initial Builds On A Bucket Are Not Supported By Indexer.".

Repro:
A user hits the "Duplicate Index Name" error when index creation fails and they retry creation with the same index name.

Let's say creation of an index hangs or fails with an internal error. If the user retries creating the index with the same name, they get the "Duplicate Index Name" error.

# cbq> create index id1 on default(personal_deatils.age) using forestdb;
<hung, CTRL+C>
# ./cbq
# cbq> create index id1 on default(personal_deatils.age) using forestdb;
{
    "error":
        {
            "caller": "couchbase:348",
            "cause": "Duplicate Index Name",
            "code": 5000,
            "key": "Internal Error",
            "message": "Error creating index: id1"
        }
}

After this error, creation of any secondary index (with a different name) on the same bucket fails with this error:
# cbq> create index id2 on default(personal_deatils.state) using forestdb;
{
    "error":
        {
            "caller": "couchbase:348",
            "cause": "Index Build Already In Progress. Multiple Initial Builds On A Bucket Are Not Supported By Indexer.",
            "code": 5000,
            "key": "Internal Error",
            "message": "Error creating index: id2"
        }
}



Log files of the indexer and projector components are attached.

 Comments   
Comment by Deepkaran Salooja [ 22/Nov/14 ]
The problem happens because of the first index creation failure.

Even without "Duplicate Index Name", you would have got the error:
"Index Build Already In Progress. Multiple Initial Builds On A Bucket Are Not Supported By Indexer."

If an index build fails, right now it needs to be explicitly dropped and recreated.
We'll have better handling in future.

I have improved error handling for when the initial index build request fails. The index is marked as being in the Error state, and subsequent index build requests can be processed:
http://review.couchbase.org/#/c/43526/

Will keep this ticket open for now, as this needs to be fully addressed when coordinator integration is done.
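A minimal sketch of the state handling described in the fix, with illustrative names (not the actual indexer code): a failed initial build moves to an Error state, so it no longer counts as an in-progress build blocking the bucket.

```
package main

import (
	"errors"
	"fmt"
)

type buildState int

const (
	StateInitial buildState = iota // initial build running
	StateActive                    // build completed
	StateError                     // build failed; no longer blocks the bucket
)

type indexer struct {
	builds map[string]buildState // index name -> state
}

// startBuild rejects a second initial build on the bucket, but a build left
// in StateError (instead of stuck in StateInitial) does not block new ones.
func (ix *indexer) startBuild(name string, run func() error) error {
	for _, st := range ix.builds {
		if st == StateInitial {
			return errors.New("Index Build Already In Progress")
		}
	}
	ix.builds[name] = StateInitial
	if err := run(); err != nil {
		ix.builds[name] = StateError // key change: mark failure, don't stay "in progress"
		return err
	}
	ix.builds[name] = StateActive
	return nil
}

func main() {
	ix := &indexer{builds: map[string]buildState{}}
	_ = ix.startBuild("id1", func() error { return errors.New("internal error") })
	// Because id1 ended in StateError, id2's build is no longer blocked.
	err := ix.startBuild("id2", func() error { return nil })
	fmt.Println("id2 build error:", err) // <nil>
}
```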
 




[MB-12788] JSON versions and encodings supported by Couchbase Server need to be defined Created: 16/Jun/14  Updated: 26/Nov/14

Status: Open
Project: Couchbase Server
Component/s: None
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Critical
Reporter: Matt Ingenthron Assignee: Cihan Biyikoglu
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
While JSON is a standard, there are multiple Unicode encodings, and the definition of how to handle these encodings has changed over time. Also, our dependencies (mochiweb, the view engine's JSON) may not actually conform to these standards.

Couchbase Server needs to define and document what it supports with respect to JSON.

See:
http://tools.ietf.org/html/draft-ietf-json-rfc4627bis-10 and
http://tools.ietf.org/html/rfc4627
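A minimal sketch of the kind of validation/unit test this implies, using Go's standard library as the checker (illustrative only; the components in question are mochiweb and the view engine): valid UTF-8 plus valid JSON syntax are the two properties the cited drafts pin down for interchange.

```
package main

import (
	"encoding/json"
	"fmt"
	"unicode/utf8"
)

// checkJSON reports whether input is valid UTF-8 and syntactically valid JSON.
func checkJSON(input []byte) error {
	if !utf8.Valid(input) {
		return fmt.Errorf("not valid UTF-8")
	}
	if !json.Valid(input) {
		return fmt.Errorf("not valid JSON")
	}
	return nil
}

func main() {
	cases := [][]byte{
		[]byte(`{"name": "caf\u00e9"}`), // valid
		{0xff, 0xfe, '{', '}'},          // BOM-like bytes: not UTF-8
		[]byte(`{"name": }`),            // invalid JSON syntax
	}
	for _, c := range cases {
		fmt.Printf("%q -> %v\n", c, checkJSON(c))
	}
}
```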


 Comments   
Comment by Cihan Biyikoglu [ 16/Jun/14 ]
making this a documentation item - we should make this public.
Comment by Chiyoung Seo [ 24/Jun/14 ]
Moving this to post-3.0, as datatype support is not in 3.0.
Comment by Matt Ingenthron [ 11/Sep/14 ]
This isn't really datatype-related, though it's not couchbase-bucket any more either. The view engine and other parts of the server use JSON; what do they expect as input? It's also sort of documentation, but not strictly documentation, since what we support should either be defined and validated, or determined from what our dependencies actually do and then verified. In either case, there's probably research and unit-test writing involved, I think.
Comment by Chiyoung Seo [ 12/Sep/14 ]
Assigning to the PM team to figure out the appropriate steps to be taken.
Comment by Amy Kurtzman [ 26/Nov/14 ]
This issue got moved to the DOC project during the mass migration of tickets. I'm moving it back to MB because there isn't anything to document yet. After the work is done to define what Couchbase Server supports, create a new DOC ticket that links to this one, supply us with the information, and then we can document it.




[MB-12902] Network partition leading to dataloss Created: 12/Dec/14  Updated: 12/Dec/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Abhishek Singh Assignee: Cihan Biyikoglu
Resolution: Unresolved Votes: 0
Labels: net-split, split-brain
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Relates to
Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
Production cluster scenario that led to data loss:

* 4-node cluster running CB 2.5.1.
* Nodes A, B, and C failed over node D, but the failover message didn't reach node D, so it doesn't know it was failed over.
* SDK clients were unfortunately connecting to node D for the vb-map config (which is incorrect).
* At this stage a set of vbuckets is active in two different places, and once a rebalance is initiated, mutations mapping to the active vbuckets on node D are very likely to get lost.

Sample data from logs to back it up:

World according to *10.100.0.46* (node D, which was failed over):

```
      {incoming_replications_conf_hashes,
          [{"services.z3",
            [{'ns_1@10.100.0.41',115455469},
             {'ns_1@10.100.0.42',74343578},
             {'ns_1@10.100.0.43',66117516}]},
           {"default",
            [{'ns_1@10.100.0.41',115455469},
             {'ns_1@10.100.0.42',74343578},
             {'ns_1@10.100.0.43',66117516}]},
           {"indigo-session",
            [{'ns_1@10.100.0.41',124324038},
             {'ns_1@10.100.0.42',42364148},
             {'ns_1@10.100.0.43',122578952}]},
           {"services.z2",
            [{'ns_1@10.100.0.41',124324038},
             {'ns_1@10.100.0.42',42364148},
             {'ns_1@10.100.0.43',122578952}]},
           {"indigo",
            [{'ns_1@10.100.0.41',48623555},
             {'ns_1@10.100.0.42',124811468},
             {'ns_1@10.100.0.43',104172491}]},
           {"services",
            [{'ns_1@10.100.0.41',124324038},
             {'ns_1@10.100.0.42',42364148},
             {'ns_1@10.100.0.43',122578952}]}]},
```

World according to *10.100.0.42* (similar for nodes A, B, and C):

```
      {incoming_replications_conf_hashes,
          [{"services.z3",
            [{'ns_1@10.100.0.41',63506121},{'ns_1@10.100.0.43',117606290}]},
           {"default",
            [{'ns_1@10.100.0.41',63506121},{'ns_1@10.100.0.43',117606290}]},
           {"indigo-session",
            [{'ns_1@10.100.0.41',101019889},{'ns_1@10.100.0.43',114028661}]},
           {"services.z2",
            [{'ns_1@10.100.0.41',101019889},{'ns_1@10.100.0.43',114028661}]},
           {"indigo",
            [{'ns_1@10.100.0.41',51937030},{'ns_1@10.100.0.43',131953060}]},
           {"services",
            [{'ns_1@10.100.0.41',101019889},{'ns_1@10.100.0.43',114028661}]}]},
```





[MB-12900] Integrate with cbauth Created: 11/Dec/14  Updated: 12/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Critical
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint5_xdcr
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release

 Description   
1. Call source/target bucket API (go-couchbase)
- DCP related
- topology related, i.e. ServerVBMap, etc.
It seems no real integration has been done yet. Manik added an example to show how cbauth can work with the existing go-couchbase API to get the bucket: examples/cb-auth/example.go

user, bucket_password, err := cbauth.GetHTTPServiceAuth(hostPort)
if err != nil {
    log.Printf("Failed %v", err)
    return
}

log.Printf(" HTTP Servce username %s password %s", user, bucket_password)

client, err := couchbase.ConnectWithAuthCreds(*serverURL, user, bucket_password)
if err != nil {
    log.Printf("Connect failed %v", err)
    return
}

cbpool, err := client.GetPool("default")
if err != nil {
    log.Printf("Failed to connect to default pool %v", err)
    return
}

mUser, mPassword, err := cbauth.GetMemcachedServiceAuth(hostPort)
if err != nil {
    log.Printf(" failed %v", err)
    return
}

var cbbucket *couchbase.Bucket
cbbucket, err = cbpool.GetBucketWithAuth(*bucketName, mUser, mPassword)
if err != nil {
    log.Printf("Failed to connect to bucket %v", err)
    return
}

log.Printf(" Bucket name %s Bucket %v", *bucketName, cbbucket)

For us, this means our Utils.GetBucket would change accordingly.

2. call source\target cluster rest API
A. use cbauth.GetHTTPServiceAuth to get the credential, then we can use it in rest api.
user, bucket_password, err := cbauth.GetHTTPServiceAuth(hostPort)
if err != nil {
log.Printf("Failed %v", err)
return
}
B. then use that credential to make http request. There are a few places
- Utils.QueryRestAPI
                - remote_cluster_services.go connectToRemoteClusterThroughHttp method
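A minimal sketch of step B, assuming the credentials are used as HTTP basic auth and the cbauth import path github.com/couchbase/cbauth (illustrative; the actual Utils.QueryRestAPI plumbing differs):

```
package main

import (
	"io"
	"log"
	"net/http"

	"github.com/couchbase/cbauth"
)

// getWithServiceAuth fetches a REST endpoint using credentials obtained
// from cbauth for the given host:port.
func getWithServiceAuth(hostPort, url string) ([]byte, error) {
	user, pwd, err := cbauth.GetHTTPServiceAuth(hostPort)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.SetBasicAuth(user, pwd)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	body, err := getWithServiceAuth("127.0.0.1:8091", "http://127.0.0.1:8091/pools/default")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%d bytes", len(body))
}
```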
3. Call memcached API on the source and target clusters
A. Use cbauth.GetMemcachedServiceAuth() to get the credentials:
mUser, mPassword, err := cbauth.GetMemcachedServiceAuth(hostPort)
if err != nil {
    log.Printf(" failed %v", err)
    return
}
B. Use those for the connection:
- connection_pool.go NewConn will change





[MB-12628] Investigate visualization discrepancies in perf framework as seen in MB-12375. Created: 11/Nov/14  Updated: 16/Dec/14

Status: Open
Project: Couchbase Server
Component/s: performance
Affects Version/s: 3.0.2
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Venu Uppalapati Assignee: Venu Uppalapati
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Gantt: start-finish
is triggered by MB-12375 Query latency on Windows Reopened
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Investigate visualization discrepancies in perf framework as seen in MB-12375.




[MB-12412] Information on describing indexes for n1ql Created: 22/Oct/14  Updated: 22/Oct/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Ketaki Gangal Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
If a user needs to find out which fields an index maps to, is there a way for the user to do so?

I could not find information on this in the couchbaselabs/query docs.

This is what I was doing:
select * from system:indexes;
      {
            "indexes": {
                "id": "#primary",
                "index_key": [],
                "index_type": "view",
                "keyspace_id": "beer-sample",
                "name": "#primary",
                "namespace_id": "default",
                "store_id": "http://172.23.107.102:8091"
            }
        },
      
      "indexes": {
                "id": "new_index",
                "index_key": [],
                "index_type": "view",
                "keyspace_id": "beer-sample",
                "name": "new_index",
                "namespace_id": "default",
                "store_id": "http://172.23.107.102:8091"
            }
        }
    ],
    "state": "success",
    "metrics": {
        "elapsedTime": "1.954704ms",
        "executionTime": "1.861035ms",
        "resultCount": 8
    }


The output does not contain the "index_key" information, and I am not able to use it further for queries, for example:
- select * from table USE KEYS ...

  




[MB-12384] append and prepend commands to support TTL Created: 20/Oct/14  Updated: 23/Oct/14

Status: Open
Project: Couchbase Server
Component/s: memcached
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Patrick Varley Assignee: Trond Norbye
Resolution: Unresolved Votes: 0
Labels: customer, greenstack
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency

 Description   
Currently the memcached binary protocol does not support the TTL field on append and prepend operations.

https://code.google.com/p/memcached/wiki/BinaryProtocolRevamped#Append,_Prepend

It would be useful to update the TTL at the same time as executing the command.
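A minimal sketch of the current workaround, assuming a client that exposes separate Append and Touch operations (Touch does exist in the binary protocol); the interface here is hypothetical, not a specific SDK's API:

```
package main

import "fmt"

// KVClient is a hypothetical interface for illustration only; the method
// names are assumptions, not a specific SDK's API.
type KVClient interface {
	Append(key string, data []byte) error
	Touch(key string, ttl uint32) error
}

// appendWithTTL emulates append-with-expiry by issuing a separate Touch,
// since the binary protocol's APPEND/PREPEND packets carry no extras field
// for expiration. The two operations are not atomic: a failure between them
// leaves the appended value with its old TTL.
func appendWithTTL(c KVClient, key string, data []byte, ttl uint32) error {
	if err := c.Append(key, data); err != nil {
		return err
	}
	return c.Touch(key, ttl)
}

// fakeClient is an in-memory stand-in so the sketch runs end to end.
type fakeClient struct {
	values map[string][]byte
	ttls   map[string]uint32
}

func (f *fakeClient) Append(key string, data []byte) error {
	f.values[key] = append(f.values[key], data...)
	return nil
}

func (f *fakeClient) Touch(key string, ttl uint32) error {
	f.ttls[key] = ttl
	return nil
}

func main() {
	c := &fakeClient{values: map[string][]byte{"k": []byte("a")}, ttls: map[string]uint32{}}
	if err := appendWithTTL(c, "k", []byte("b"), 300); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("value=%s ttl=%d\n", c.values["k"], c.ttls["k"])
}
```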

 Comments   
Comment by Trond Norbye [ 23/Oct/14 ]
This won't happen until project Greenstack is implemented (post sherlock).




[MB-12344] Clearly document instructions for compiling Couchbase from source code on multiple platforms. Created: 13/Oct/14  Updated: 13/Oct/14

Status: Open
Project: Couchbase Server
Component/s: None
Affects Version/s: 3.0.1
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Aruna Piravi Assignee: Chiyoung Seo
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
Fresh compilation of Couchbase is time-consuming mainly because of incomplete instructions like

- Which version of gcc/cc/clang to use?
- What packages are needed? Any specific versions?

https://github.com/couchbase/tlm#macosx is almost right, but it doesn't include gperftools or the dependency on v8 version 3.18.5. I spent an entire day trying to compile Couchbase until I stumbled upon this page.

Can we have this page updated? Also, not many people know about this page. There are still copies of stale and incorrect instructions in many of our wiki pages (e.g., the new-hire page).

Let's have one correct set of instructions everywhere. It also makes sense for Dev to own it so it can be updated when 3rd party dependencies change.

I was not sure whom to reach. Please feel free to assign to anyone. Thanks.




[MB-12331] ep_num_locks fixed at 5, doesn't expand upon resize. Created: 13/Oct/14  Updated: 13/Oct/14

Status: Open
Project: Couchbase Server
Component/s: storage-engine
Affects Version/s: 2.5.0, 2.5.1, 3.0
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Dave Rigby Assignee: Chiyoung Seo
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Debian 7


 Description   
The number of locks used for each HashTable (ep_num_locks) is set to 5 during initialisation:

    $ /opt/couchbase/bin/cbstats localhost:11210 all|grep " curr_items:\|ep_ht_locks:"
     ep_ht_locks: 5

Furthermore, it doesn't increase as part of the HashTable resize. This means that as bucket size and/or node count increases, there will be increased lock contention.

For example, a 42-node cluster with 7B items will have ~24 vBuckets per node and ~6.8M docs per vBucket. Hence each lock will own 6.8M/5 ≈ 1.36M items.

We should consider increasing the default value of ep_ht_locks and/or dynamically expanding the lock count along with the HashBucket count when a HashTable is resized.
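A minimal sketch of the lock striping involved, in Go with illustrative names (the actual ep-engine HashTable is C++): with the lock count fixed at 5, every resize grows the item count per lock linearly, and contention grows with it.

```
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// stripedTable guards its hash buckets with a small, fixed pool of locks;
// bucket i is protected by locks[i % numLocks].
type stripedTable struct {
	locks   []sync.Mutex
	buckets [][]string
}

func newStripedTable(numBuckets, numLocks int) *stripedTable {
	return &stripedTable{
		locks:   make([]sync.Mutex, numLocks),
		buckets: make([][]string, numBuckets),
	}
}

func (t *stripedTable) insert(key string) {
	h := fnv.New32a()
	h.Write([]byte(key))
	b := int(h.Sum32() % uint32(len(t.buckets)))
	l := &t.locks[b%len(t.locks)] // a handful of locks shared by all buckets
	l.Lock()
	t.buckets[b] = append(t.buckets[b], key)
	l.Unlock()
}

func main() {
	// With ep_num_locks fixed at 5, 6.8M items per vBucket means
	// ~1.36M items behind each lock, regardless of table size.
	t := newStripedTable(1024, 5)
	t.insert("doc-1")
	fmt.Println("items per lock at 6.8M docs:", 6800000/len(t.locks))
}
```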




[MB-12362] Pillowfight performance negatively affected by specifying larger item amounts Created: 10/Oct/14  Updated: 15/Oct/14

Status: Open
Project: Couchbase Server
Component/s: performance
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Perry Krug Assignee: Thomas Anderson
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
I notice a major difference in pillowfight performance when running with '-I' set to a larger value than the default.

At the default of 1K items, I can reach over 2.5M ops/sec; running the same command with 1M items, it tops out around 2M.

I'm not sure how much the difference would be noticed at lower thread counts and slower networking, but I think it points to some bottleneck or inefficiency that, once fixed, could help across the board.

Mark, I think you mentioned something about improvements coming to reduce shared-object contention and other things? Is this related to that?



 Comments   
Comment by Mark Nunberg [ 10/Oct/14 ]
First of all, you'd need to make sure that the performance comparisons come about once the populate phase for the workload has been completed with the larger item amounts. Naturally creating more items will require a longer populate phase (the speedup of the populate phase is going to come in the next version).

Additionally you have the added load on the server side of potentially fetching items from disk or slower main memory, rather than the CPU's cache.
Comment by Mark Nunberg [ 13/Oct/14 ]
I don't think this is related to anything in the pillowfight code itself. There might be something I can do in the way of documentation though, if you can help me confirm the exact characteristics
Comment by Perry Krug [ 13/Oct/14 ]
Thanks Mark, I appreciate your help.

I ran these two commands one after another and noticed the performance difference:
cbc-pillowfight -U couchbase://<host>/<bucket> -I 1000 -t 128 -B 10000 -c -1 -r 5 -n
cbc-pillowfight -U couchbase://<host>/<bucket> -I 100000 -t 128 -B 10000 -c -1 -r 5 -n

(Notice the high thread count and high batch size; this was necessary to push enough load, around 1M ops/sec. Also notice the "-n" to avoid any populate-phase slowness.)

If you can help me understand the difference as being expected, that's wonderful. If there's some work you can do to make them more consistent, even better ;-)
Comment by Mark Nunberg [ 15/Oct/14 ]
Perry: I've moved this to MB under the performance component, since I've seen the same results you have and have no reason to suspect anything in the client is causing this.
Thomas: I made the assumption that you're the current assignee for "Performance" (auto-assign gave me Pavel). Please reassign if this isn't correct.




[MB-12320] [OS X] Check for updates incorrectly reports "A new version of CB is available" Created: 08/Oct/14  Updated: 08/Oct/14

Status: Open
Project: Couchbase Server
Component/s: installer
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Dave Rigby Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File Screen Shot 2014-10-08 at 18.13.01.png    
Triage: Untriaged
Operating System: MacOSX 64-bit
Is this a Regression?: Unknown

 Description   
Installed 3.0.0 GA (build 1209), yet "check for updates" claims "A new version of Couchbase is available".

See attached screenshot.





[MB-12316] Unable to set a directory for temporary files of cbcollect_info Created: 08/Oct/14  Updated: 08/Oct/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Cédric Delgehier Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: RHEL 6.5


 Description   
Hello,


The "r" option cbcollect_info is used to specify the destination path for the dump. However, the temporary files are placed in the /tmp directory.

I find it unfortunate to be able to specify a destination path (probably a problem of space) but to continue working in the /tmp

Would it be possible to add another option to specify a temporary working directory ?
Or use the target path as the working directory


[root@rh6 DUMP]# /opt/couchbase/bin/cbcollect_info -r /DUMP rh6.zip
uname (uname -a) - OK
Directory structure membase - previous versions (ls -lR /opt/membase /var/membase /var/opt/membase /etc/opt/membase) - Exit code 2
Process list snapshot (export TERM=''; top -Hb -n1 || top -H n1) - OK
Process list (ps -AwwL -o user,pid,lwp,ppid,nlwp,pcpu,maj_flt,min_flt,pri,nice,vsize,rss,tty,stat,wchan:12,start,bsdtime,command) - OK
Raw /proc/vmstat (cat /proc/vmstat) - OK
Raw /proc/mounts (cat /proc/mounts) - OK
Raw /proc/partitions (cat /proc/partitions) - OK
Raw /proc/diskstats (cat /proc/diskstats) - OK
Raw /proc/interrupts (cat /proc/interrupts) - OK
Swap configuration (free -t) - OK
Swap configuration (swapon -s) - OK
Kernel modules (lsmod) - OK
Distro version (cat /etc/redhat-release) - OK
Distro version (lsb_release -a) - Exit code 127
Installed software (rpm -qa) - OK
Installed software (COLUMNS=300 dpkg -l) - Exit code 127
Extended iostat (iostat -x -p ALL 1 10 || iostat -x 1 10) - OK
Core dump settings (find /proc/sys/kernel -type f -name '*core*' -print -exec cat '{}' ';') - OK
sysctl settings (sysctl -a) - OK
relevant lsof output (echo moxi memcached vbucketmigrator beam couch_compact godu sigar_port | xargs -n1 pgrep | xargs -n1 -r -- lsof -n -p) - OK
LVM info (lvdisplay) - OK
LVM info (vgdisplay) - OK
LVM info (pvdisplay) - OK
Network configuration (ifconfig -a) - OK
Taking sample 2 after 10.000000 seconds -
OK
Network configuration (echo link addr neigh rule route netns | xargs -n1 -- sh -x -c 'ip $1 list' --) - Exit code 124
Network status (netstat -anp || netstat -an) - OK
Network routing table (netstat -rn) - OK
Arp cache (arp -na) - OK
Filesystem (df -ha) - OK
System activity reporter (sar 1 10) - OK
System paging activity (vmstat 1 10) - OK
System uptime (uptime) - OK
couchbase user definition (getent passwd couchbase) - OK
couchbase user limits (su couchbase -c "ulimit -a") - OK
membase user definition (getent passwd membase) - Exit code 2
couchbase user limits (su couchbase -c "ulimit -a") - OK
membase user limits (su membase -c "ulimit -a") - Exit code 125
Interrupt status (intrstat 1 10) - Exit code 127
Processor status (mpstat 1 10) - OK
System log (cat /var/adm/messages) - Exit code 1
All logs (tar cz /var/log/syslog* /var/log/dmesg /var/log/messages* /var/log/daemon* /var/log/debug* /var/log/kern.log* 2>/dev/null) - Exit code 2
Relevant proc data ((pgrep moxi; pgrep beam.smp; pgrep memcached; pgrep couch_compact; pgrep sigar_port ; pgrep godu) | xargs -n1 -- sh -c 'echo $1; cat /proc/$1/status; cat /proc/$1/limits; cat /proc/$1/smaps; cat /proc/$1/numa_maps; cat /proc/$1/sched; echo' --) - OK
NUMA data (numactl --hardware) - OK
NUMA data (numactl --show) - OK
NUMA data (cat /sys/devices/system/node/node*/numastat) - OK
Kernel log buffer (dmesg) - OK
Transparent Huge Pages data (cat /sys/kernel/mm/transparent_hugepage/enabled) - OK
Transparent Huge Pages data (cat /sys/kernel/mm/transparent_hugepage/defrag) - OK
Transparent Huge Pages data (cat /sys/kernel/mm/redhat_transparent_hugepage/enabled) - OK
Transparent Huge Pages data (cat /sys/kernel/mm/redhat_transparent_hugepage/defrag) - OK
Network statistics (netstat -s) - OK
Full raw netstat (cat /proc/net/netstat) - OK
CPU throttling info (echo /sys/devices/system/cpu/cpu*/thermal_throttle/* | xargs -n1 -- sh -c 'echo $1; cat $1' --) - OK
Checking for server guts in /DUMP/var/lib/couchbase/initargs...
Failed to read initargs:
{error,enoent}
Checking for server guts in /opt/couchbase/var/lib/couchbase/initargs...
product diag header () - OK
Directory structure (['ls', '-lR', '/opt/couchbase']) - OK
Database directory structure (['ls', '-lR', '/DATA']) - OK
Index directory structure (['ls', '-lR', '/DATA']) - OK
Database directory filefrag info (['find', '/DATA', '-type', 'f', '-exec', 'filefrag', '-v', '{}', '+']) - OK
Index directory filefrag info (['find', '/DATA', '-type', 'f', '-exec', 'filefrag', '-v', '{}', '+']) - OK
Directory structure membase - previous versions (ls -lR /opt/membase /var/membase /var/opt/membase /etc/opt/membase) - Exit code 2
Version file (cat '/opt/couchbase/VERSION.txt') - OK
Manifest file (cat '/opt/couchbase/manifest.txt') - OK
Manifest file (cat '/opt/couchbase/manifest.xml') - OK
Couchbase config () - OK
Memcached logs (['sh', '-c', 'cd "$1"; for file in $(ls -tr memcached.log.*); do cat "$file"; done', '--', '/opt/couchbase/var/lib/couchbase/logs']) - OK
Ini files (['sh', '-c', 'for i in "$@"; do echo "file: $i"; cat "$i"; done', '--', '/opt/couchbase/etc/couchdb/default.ini', '/opt/couchbase/etc/couchdb/default.d/capi.ini', '/opt/couchbase/etc/couchdb/default.d/geocouch.ini', '/opt/couchbase/etc/couchdb/local.ini']) - OK
couchbase diags (curl -sS -u Administrator:***** http://127.0.0.1:8091/diag?noLogs=1) - OK
couchbase logs (debug) (cbbrowse_logs debug) - OK
couchbase logs (info) (cbbrowse_logs info) - OK
couchbase logs (error) (cbbrowse_logs error) - OK
couchbase logs (couchdb) (cbbrowse_logs couchdb) - OK
couchbase logs (xdcr) (cbbrowse_logs xdcr) - Traceback (most recent call last):
  File "/opt/couchbase/bin/cbcollect_info", line 557, in <module>
    main()
  File "/opt/couchbase/bin/cbcollect_info", line 533, in main
    runner.run(task)
  File "/opt/couchbase/bin/cbcollect_info", line 126, in run
    self.header(fp, task.description, command_to_print)
  File "/opt/couchbase/bin/cbcollect_info", line 98, in header
    fp.flush()
IOError: [Errno 28] No space left on device


 Comments   
Comment by Patrick Varley [ 08/Oct/14 ]
You can use the TMPDIR environment variable to change the location of the temporary files.

TMPDIR=/alternate/tempdir /opt/couchbase/bin/cbcollect_info -r /DUMP rh6.zip
Comment by Cédric Delgehier [ 08/Oct/14 ]
mkdir /DUMP/tmp_cbcollect_info
export TMPDIR=/DUMP/tmp_cbcollect_info
/opt/couchbase/bin/cbcollect_info -r /DUMP rh6.zip

Thanks !





[MB-12315] Bucket deletion and recreation with same name causes projector to go down Created: 08/Oct/14  Updated: 22/Oct/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Prathibha Bisarahalli Assignee: Pratap Chakravarthy
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: File ProjectorLogs    
Triage: Untriaged
Operating System: MacOSX 64-bit
Is this a Regression?: No

 Description   
The test code does this:
• Setup: Delete “default” bucket and recreate it. Populate it with ~ 98k documents
• Execution: Create a secondary index on a field, run a query on it using cbq engine, validate the results
• Cleanup: Delete “default” bucket and drop the secondary index.

In the first run of the test, it succeeds. Running the test again leads to the projector process going down.

A simpler repro is to:
1. Create a bucket. Create secondary index on it.
2. Delete the bucket.
3. Recreate the bucket with same name.

Projector logs: (full log file attached)

2014/10/07 20:20:46 go-couchbase: Error connecting to upr feed of 127.0.0.1:12000: EOF
2014/10/07 20:21:16 go-couchbase: UPR connection lost; reconnecting to bucket "default" in 30s
2014/10/07 20:21:16 Unable to refresh bucket HTTP error 404 Object Not Found getting "http://localhost:9000/pools/default/buckets/default?bucket_uuid=98a17c4554b38c477efa9de47357a2d2": Bucket uuid does not match the requested.
 
panic: runtime error: assignment to entry in nil map

goroutine 35 [running]:
runtime.panic(0x439ae0, 0x71fdf3)
/usr/local/Cellar/go/1.3.1/libexec/src/pkg/runtime/panic.c:279 +0xf5
github.com/couchbaselabs/go-couchbase.(*UprFeed).connectToNodes(0xc20801b180, 0x0, 0x0)
/Users/prathibha-mac/Documents/source/Go/src/github.com/couchbaselabs/go-couchbase/upr.go:225 +0x39c
github.com/couchbaselabs/go-couchbase.(*UprFeed).run(0xc20801b180)
/Users/prathibha-mac/Documents/source/Go/src/github.com/couchbaselabs/go-couchbase/upr.go:184 +0x295
created by github.com/couchbaselabs/go-couchbase.(*Bucket).StartUprFeed
/Users/prathibha-mac/Documents/source/Go/src/github.com/couchbaselabs/go-couchbase/upr.go:119 +0x269
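
The trace dies on a map write inside (*UprFeed).connectToNodes after the bucket refresh fails. Below is a minimal standalone sketch, in Go, of the defensive pattern that would avoid this class of panic: refuse to reconnect once the feed has been torn down, and never write into a map that may have been nil'd out. The type and field names are stand-ins, not the actual go-couchbase structures.

{code}
package main

import (
	"errors"
	"fmt"
	"sync"
)

type nodeFeed struct{ addr string }

// uprFeedSketch stands in for go-couchbase's UprFeed; the field names
// here are hypothetical.
type uprFeedSketch struct {
	mu        sync.Mutex
	closed    bool
	nodeFeeds map[string]*nodeFeed
}

// connectToNode guards the write that panics in the trace above:
// assigning into a map that is nil after the feed was torn down when
// the bucket disappeared.
func (f *uprFeedSketch) connectToNode(addr string) error {
	f.mu.Lock()
	defer f.mu.Unlock()
	if f.closed {
		return errors.New("feed closed: bucket was deleted or recreated")
	}
	if f.nodeFeeds == nil {
		// Re-initialize instead of hitting "assignment to entry in nil map".
		f.nodeFeeds = make(map[string]*nodeFeed)
	}
	f.nodeFeeds[addr] = &nodeFeed{addr: addr}
	return nil
}

func main() {
	var f uprFeedSketch
	if err := f.connectToNode("127.0.0.1:12000"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("connected nodes:", len(f.nodeFeeds))
}
{code}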









[MB-12270] Cbbackup 2.5.1 Run Directly Against File System May Fail w/OS Error 2 If Files Are Concurrently Compacted Created: 29/Sep/14  Updated: 30/Sep/14

Status: Open
Project: Couchbase Server
Component/s: tools
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Morrie Schreibman Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Attachments: Text File cbbackup_error.out.txt     Text File ls_begin.out.txt     Text File ls_end.out.txt    
Issue Links:
Relates to
Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: No

 Description   
Customer is running cbbackup directly against the bucket datafiles in order to avoid TAP vbucket dumps; cbbackups are being executed this way in order to avoid significant changes to bucket residency percentages which negatively affect application performance. The general form of the command being used is:
    cbbackup couchstore-files:///<bucket data path> <backup_dir> -u <admin> -p <password>

Customer is performing these backups against very large nodes. Customer reports that the cbbackups invariably fail on an OS error 2 exception (file not found). Further investigation shows that the missing file is present when cbbackup begins but gets compacted at some point while the backup is running, with the result that a different version of the data file is in play when cbbackup attempts to back it up. Attached find a listing of the directory as it exists when cbbackup starts, the error message, and a second listing of the directory taken after the missing-file exception has been trapped. These clearly show that the missing file "166.couch.2029" exists when cbbackup starts but has been replaced by file "166.couch.2030" by the time cbbackup tries to read it.

While an apparent workaround might be to suspend compaction while they are doing backups, that might result in other performance problems and also cause file system exhaustion. So we would like to see if there can be a code fix for this.

Attached: directory listings before and after the OS error 2 missing-file exception, showing the changed incarnation, plus the error message itself.
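
cbbackup itself is a Python tool, but the shape of a possible fix is easy to sketch: when a read fails with ENOENT, re-scan the data directory for the newest compacted revision of the same vbucket file and resume from it. The Go sketch below illustrates only that lookup; the file-naming rule is taken from the listings described above, and the directory path and everything else are assumptions.

{code}
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"strconv"
	"strings"
)

// revision extracts the trailing revision number from a couchstore file
// name such as "166.couch.2029".
func revision(path string) int {
	parts := strings.Split(path, ".")
	n, _ := strconv.Atoi(parts[len(parts)-1])
	return n
}

// latestRevision finds the newest on-disk revision of a vbucket data
// file, so a backup can retry after compaction renames the file out
// from under it (166.couch.2029 -> 166.couch.2030).
func latestRevision(dir, vbucket string) (string, error) {
	matches, err := filepath.Glob(filepath.Join(dir, vbucket+".couch.*"))
	if err != nil {
		return "", err
	}
	if len(matches) == 0 {
		return "", fmt.Errorf("no data file for vbucket %s in %s", vbucket, dir)
	}
	sort.Slice(matches, func(i, j int) bool {
		return revision(matches[i]) < revision(matches[j])
	})
	return matches[len(matches)-1], nil
}

func main() {
	// Path is illustrative only.
	path, err := latestRevision("/opt/couchbase/var/lib/couchbase/data/default", "166")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("retrying backup from:", path)
}
{code}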
  

 Comments   
Comment by Steve Yen [ 30/Sep/14 ]
> Customer is running cbbackup directly against the bucket datafiles in order to avoid TAP vbucket
> dumps; cbbackups are being executed this way in order to avoid significant changes to bucket
> residency percentages which negatively affect application performance.

Another idea for this issue is to improve DCP to avoid radically changing the residency percentages, so that the customer can use DCP-based cbbackup.




[MB-12262] More updater stats Created: 25/Sep/14  Updated: 25/Sep/14

Status: Open
Project: Couchbase Server
Component/s: view-engine
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Patrick Varley Assignee: Sriram Melkote
Resolution: Unresolved Votes: 0
Labels: supportability
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
We have some great information about what the updater has updated; however, we have no details on how much work it did to produce those updates, i.e. how many documents it read, how big those documents were, and how much data it wrote. As a result it is hard to tell whether there is a performance issue with the updater itself or the hardware underneath.

Here are some suggestions on new stats that I think might be useful (I do not know the code so these might be silly suggestions):

time spent reading data from disk
time spent processing in the v8 engine
time spent writing data to disk
amount of data read from disk
amount of data written to disk
number of documents read/processed

What the current stats look like:

[couchdb:info,2014-09-20T21:04:07.105,ns_1@10.0.0.1:<0.17156.6231>:couch_log:info:39]Set view `Patrick`, main group `_design/dev_test`, updater finished
Indexing time: 0.119 seconds
Blocked time: 0.000 seconds
Inserted IDs: 1
Deleted IDs: 4
Inserted KVs: 1
Deleted KVs: 1
Cleaned KVs: 0
# seqs done: 5




[MB-12221] N1QL should return version information Created: 22/Sep/14  Updated: 22/Sep/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Cihan Biyikoglu Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
N1QL should have a version() function that returns version information. This will be useful for cases where behavioral changes are implemented, so apps can issue queries tuned to specific N1QL versions.

if n1ql_version()=1.0 query='...' else if n1ql_version()=2.0 query='+++'
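
To make the request concrete, here is a small sketch of the app-side gating described above. n1qlVersion() is a stub for the requested function (a real app would obtain the value from the query engine once the feature exists), and both query strings are placeholders:

{code}
package main

import (
	"fmt"
	"strings"
)

// n1qlVersion is a stub for the requested version() function; the real
// value would be fetched from the query engine.
func n1qlVersion() string { return "1.0" }

func main() {
	// Pick a query variant tuned to the engine version, as the
	// description suggests. Both statements are placeholders.
	query := "SELECT name FROM default"
	if strings.HasPrefix(n1qlVersion(), "2.") {
		query = "SELECT name FROM default LIMIT 10"
	}
	fmt.Println("running:", query)
}
{code}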






[MB-12195] Update notifications does not seem to be working Created: 15/Sep/14  Updated: 17/Sep/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 2.5.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Raju Suravarjjala Assignee: Ian McCloy
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Centos 5.8
2.5.0

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
I have installed 2.5.0 build and enabled Update Notifications
Even though I enabled "Enable software Update Notifications", I keep getting "No Updates available"
I thought I would be notified in the UI that 2.5.1 is available.

I have consulted Tony to see if I have done something wrong, but he also confirmed that this seems to be an issue and is a bug.

 Comments   
Comment by Aleksey Kondratenko [ 15/Sep/14 ]
Based on dev tools we're getting "no new version" from the phone-home requests. So it's not a UI bug.
Comment by Ian McCloy [ 17/Sep/14 ]
Added the missing available upgrade paths to the database,

2.5.0-1059-rel-enterprise -> 2.5.1-1083-rel-enterprise
2.2.0-837-rel-enterprise -> 2.5.1-1083-rel-enterprise
2.1.0-718-rel-enterprise -> 2.2.0-837-rel-enterprise

but it looks like the code that parses http://ph.couchbase.net/v2?callback=jQueryxxx isn't checking the database.




[MB-12194] [Windows] When you try to uninstall CB server it comes up with Installer wizard instead of uninstall Created: 15/Sep/14  Updated: 15/Sep/14

Status: Open
Project: Couchbase Server
Component/s: installer
Affects Version/s: 3.0.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Raju Suravarjjala Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Windows 7
Build: 3.0.1_1299

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Install Windows 3.0.1_1299 build
Try to uninstall the CB server
You will see the CB InstallShield Installation Wizard, which then comes up with a prompt to remove the selected application and all of its features.

Expected result: It would be nice to present an Uninstall wizard instead of the confusing Installation wizard.




[MB-12189] (misunderstanding) XDCR REST API "max-concurrency" only works for 1 of 3 documented end-points. Created: 15/Sep/14  Updated: 17/Sep/14

Status: Reopened
Project: Couchbase Server
Component/s: ns_server, RESTful-APIs
Affects Version/s: 2.5.1, 3.0-Beta
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Jim Walker Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: supportability, xdcr
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Couchbase Server 2.5.1
RHEL 6.4
VM (VirtualBox0
1 node "cluster"

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
This defect relates to the following REST APIs:

* xdcrMaxConcurrentReps (default 32) http://localhost:8091/internalSettings/
* maxConcurrentReps (default 32) http://localhost:8091/settings/replications/
* maxConcurrentReps (default 32) http://localhost:8091/settings/replications/ <replication_id>

The documentation suggests these all do the same thing, but with the scope of change being different.

<docs>
/settings/replications/ — global settings applied to all replications for a cluster
settings/replications/<replication_id> — settings for specific replication for a bucket
/internalSettings - settings applied to all replications for a cluster. Endpoint exists in Couchbase 2.0 and onward.
</docs>

This defect is because only "settings/replications/<replication_id>" has any effect. The other REST endpoints have no effect.

Out of these APIs I can confirm that changing "/settings/replications/<replication_id>" has an effect. The XDCR code shows that the concurrent reps setting feeds into the concurrency throttle as the number of available tokens. I use the xdcr log files, where we print the concurrency-throttle token data, to observe that the setting has an effect.

For example, a cluster in the default configuration has a total tokens of 32. We can grep to see this.

[root@localhost logs]# grep "is done normally, total tokens:" xdcr.*
2014-09-15T13:09:03.886,ns_1@127.0.0.1:<0.32370.0>:concurrency_throttle:clean_concurr_throttle_state:275]rep <0.33.1> to node "192.168.69.102:8092" is done normally, total tokens: 32, available tokens: 32,(active reps: 0, waiting reps: 0)

Now, after changing the setting to 42, the log file shows the change take effect.

curl -u Administrator:password http://localhost:8091/settings/replications/01d38792865ba2d624edb4b2ad2bf07f%2fdefault%2fdefault -d maxConcurrentReps=42

[root@localhost logs]# grep "is done normally, total tokens:" xdcr.*
dcr.1:[xdcr:debug,2014-09-15T13:17:41.112,ns_1@127.0.0.1:<0.32370.0>:concurrency_throttle:clean_concurr_throttle_state:275]rep <0.2321.1> to node "192.168.69.102:8092" is done normally, total tokens: 42, available tokens: 42,(active reps: 0, waiting reps: 0)

Since this defect is that both of the other two REST end-points don't appear to have any effect, here's an example changing "settings/replications/". This example was on a clean cluster, i.e. no other settings had been changed; only bucket creation, replication setup, and client writes had been performed.

root@localhost logs]# curl -u Administrator:password http://localhost:8091/settings/replications/ -d maxConcurrentReps=48
{"maxConcurrentReps":48,"checkpointInterval":1800,"docBatchSizeKb":2048,"failureRestartInterval":30,"workerBatchSize":500,"connectionTimeout":180,"workerProcesses":4,"httpConnections":20,"retriesPerRequest":2,"optimisticReplicationThreshold":256,"socketOptions":{"keepalive":true,"nodelay":false},"supervisorMaxR":25,"supervisorMaxT":5,"traceDumpInvprob":1000}

The above shows that the JSON response has acknowledged the value of 48, but the log files show no change. After much waiting and re-checking, grep shows no evidence of the new value.

[root@localhost logs]# grep "is done normally, total tokens:" xdcr.* | grep "total tokens: 48" | wc -l
0
[root@localhost logs]# grep "is done normally, total tokens:" xdcr.* | grep "total tokens: 32" | wc -l
7713

The same was observed for /internalSettings/

Found on both 2.5.1 and 3.0.

 Comments   
Comment by Aleksey Kondratenko [ 15/Sep/14 ]
This is because global settings affect new replications or replications without per-replication settings defined. The UI always defines all per-replication settings.
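
A minimal sketch of that precedence, with assumed field names: a per-replication value, once written, shadows the global default, which is why changes to /settings/replications/ appear to have no effect on replications created through the UI.

{code}
package main

import "fmt"

// effectiveMaxConcurrentReps returns the per-replication setting when
// one has been written, otherwise the global default. The UI always
// writes the per-replication value, so the global default never applies
// to UI-created replications.
func effectiveMaxConcurrentReps(perRep map[string]int, global int, replicationID string) int {
	if v, ok := perRep[replicationID]; ok {
		return v
	}
	return global
}

func main() {
	perRep := map[string]int{"repl-A": 42} // set via settings/replications/<replication_id>
	fmt.Println(effectiveMaxConcurrentReps(perRep, 48, "repl-A")) // 42: global 48 is shadowed
	fmt.Println(effectiveMaxConcurrentReps(perRep, 48, "repl-B")) // 48: falls back to global
}
{code}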
Comment by Jim Walker [ 16/Sep/14 ]
Have you pushed a documentation update for this?
Comment by Aleksey Kondratenko [ 16/Sep/14 ]
No. I don't own docs.
Comment by Jim Walker [ 17/Sep/14 ]
Then this issue is not resolved.

Closing/resolving this defect with breadcrumbs to the opening of an issue on a different project would suffice as a satisfactory resolution.

You can also very easily put a pull request into docs on github with the correct behaviour.

Can you please perform *one* of those tasks so that the REST API here is correctly documented with the behaviours you are aware of and this matter can be closed.
Comment by Jim Walker [ 17/Sep/14 ]
Resolution requires either:

* Corrected documentation pushed to documentation repository.
* Enough accurate API information placed into a documentation defect so docs-team can correct.





[MB-12166] Linux: Warnings on install are poorly formatted and unlikely to be read by a user. Created: 10/Sep/14  Updated: 10/Sep/14

Status: Open
Project: Couchbase Server
Component/s: installer
Affects Version/s: 3.0-Beta
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Dave Rigby Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: supportability
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Centos 6

Attachments: PNG File Screen Shot 2014-09-10 at 15.21.55.png    
Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
When installing the 3.0 RPM, we check for various OS settings and print warnings if they don't meet our recommendations.

This is a great idea in principle, but the actual output isn't very well presented, meaning users are (IMHO) likely not to spot the issues being raised.

I've attached a screenshot to show this exactly as displayed in the console, but the verbatim text is:

---cut ---
$ sudo rpm -Uvh couchbase-server-enterprise_centos6_x86_64_3.0.0-1209-rel.rpm
Preparing... ########################################### [100%]
Warning: Transparent hugepages may be used. To disable the usage
of transparent hugepages, set the kernel settings at runtime with
echo never > /sys/kernel/mm/transparent_hugepage/enabled
Warning: Transparent hugepages may be used. To disable the usage
of transparent hugepages, set the kernel settings at runtime with
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
Warning: Swappiness is not 0.
You can set the swappiness at runtime with
sysctl vm.swappiness=0
Minimum RAM required : 4 GB
System RAM configured : 0.97 GB

Minimum number of processors required : 4 cores
Number of processors on the system : 1 cores

   1:couchbase-server ########################################### [100%]
Starting couchbase-server[ OK ]

You have successfully installed Couchbase Server.
Please browse to http://localhost.localdomain:8091/ to configure your server.
Please refer to http://couchbase.com for additional resources.

Please note that you have to update your firewall configuration to
allow connections to the following ports: 11211, 11210, 11209, 4369,
8091, 8092, 18091, 18092, 11214, 11215 and from 21100 to 21299.

By using this software you agree to the End User License Agreement.
See /opt/couchbase/LICENSE.txt.
$
---cut ---

A couple of observations:

1) Everything is run together, including informational things (Preparing, Installation successful) and things the user should act on (Warning: Swappiness, THP, firewall information).

2) It's not very clear how serious some of these messages are - is the fact I'm running with 1/4 of the minimum RAM just a minor thing, or a showstopper? Similarly with THP - Support have seen on many occasions that this can cause false-positive failovers, but we just casually say here:

"Warning: Transparent hugepages may be used. To disable the usage of transparent hugepages, set the kernel settings at runtime with echo never > /sys/kernel/mm/transparent_hugepage/enabled"


Suggestions:

1) Make the warnings more pronounced - e.g. prefix them with "[WARNING]" and add some blank lines between items

2) Make clearer why these things are listed - linking back to more detailed information in our install guide if necessary. For example: "THP may cause slowdown of the cluster manager and false positive fail overs. Couchbase recommend disabling it. See http://docs.couchbase.com/THP for more details."

3) For things like THP which we can actually fix, ask the user if they want them fixed - after all, we are already root if we are installing - e.g. "THP is bad... Would you like the system THP setting to be changed to the recommended value (madvise)? (y/n)"

4) For things we can't fix (low memory, low CPUs), make the user confirm their decision to continue - e.g. "CPUs below minimum. Couchbase recommends at least XXX for production systems. Please type "test system" to continue installation."



 Comments   
Comment by David Haikney [ 10/Sep/14 ]
+1 from me - we can clearly improve the presentation here. I expect making the install interactive ("should I fix THP?") could be difficult. Are there existing precedents we can refer to here to help consistency?
Comment by Dave Rigby [ 10/Sep/14 ]
@DaveH: Admittedly I don't think they use RPM, but VMware guest tools springs to mind - they present the user a number of questions when installing - "do you want to automatically update kernel modules?", "do you want to use printer sharing", etc.

Admittedly they don't have a secondary config stage unlike us with our GUI, *but* if we are going to fix things like THP, swappiness, then we need to be root to do so (and so install-time is the only option).




[MB-12114] Expose more cluster health information in REST Created: 02/Sep/14  Updated: 02/Sep/14

Status: Open
Project: Couchbase Server
Component/s: tools
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Larry Liu Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Relates to

 Description   
Currently, we don't have a REST way to get the cluster health information.

One option is to use cbhealthchecker, which can provide very comprehensive information about the cluster. Can we expose the cbhealthchecker report via REST selectively?




[MB-12076] Internal moxi misconfiguration Created: 22/Aug/14  Updated: 26/Aug/14

Status: Open
Project: Couchbase Server
Component/s: moxi
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: kay Assignee: Sergey Avseyev
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: centos 6.5

Attachments: Text File normal.log     Text File problem.log    
Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
I have a 4-server cluster with four buckets. One of them is the default couchbase bucket with replica=1.

On one server the moxi behavior is very strange: the third server's moxi lives its own life.
I've telneted to moxi's port 11211 and tried to set test keys. The key appeared only on that server, not on the whole cluster. Also, the couchbase monitoring tool doesn't show any activity on the cluster.

I've noticed that the problem moxi process listens on only three TCP ports:
{code}
netstat -nlpt | grep 30070
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 30070/moxi
tcp 0 0 :::11211 :::* LISTEN 30070/moxi
tcp 0 0 :::6696 :::* LISTEN 30070/moxi
{code}

Other servers' moxies have four listen ports:
{code}
netstat -nltp | grep 2577
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 2577/moxi
tcp 0 0 0.0.0.0:60593 0.0.0.0:* LISTEN 2577/moxi
tcp 0 0 :::11211 :::* LISTEN 2577/moxi
tcp 0 0 :::18347 :::* LISTEN 2577/moxi

netstat -nlpt | grep 23001
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 23001/moxi
tcp 0 0 0.0.0.0:11339 0.0.0.0:* LISTEN 23001/moxi
tcp 0 0 :::11211 :::* LISTEN 23001/moxi
tcp 0 0 :::5191 :::* LISTEN 23001/moxi

netstat -nlpt | grep 31535
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 31535/moxi
tcp 0 0 0.0.0.0:33578 0.0.0.0:* LISTEN 31535/moxi
tcp 0 0 :::11211 :::* LISTEN 31535/moxi
tcp 0 0 :::53475 :::* LISTEN 31535/moxi
{code}

So it seems that moxi on the problem server was not able to listen on one TCP port.

I've attached debug logs for two servers: problem and normal server.

The problem process is still running. Please let me know which logs you need for further investigation.

 Comments   
Comment by kay [ 22/Aug/14 ]
I use couchbase-server-2.5.1-1083.x86_64
Comment by kay [ 22/Aug/14 ]
please change subproject to moxi for this issue




[MB-12006] Weird Server state after vm wakeup with 3.0 beta Created: 19/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0-Beta
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Michael Nitschinger Assignee: Michael Nitschinger
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: Zip Archive cbcollect.zip     PNG File Screen Shot 2014-08-19 at 11.30.29.png    
Triage: Untriaged
Operating System: Ubuntu 32-bit
Is this a Regression?: Unknown

 Description   
I was experiencing odd behavior on VM wakeup. I had 2 VirtualBox VMs running that I always use, a 2-node cluster. I just woke up 1 VM and expected to see the other down (which was the case). I wanted to fail it over and continue work on a single node, but the failover button is missing and the UI indicates a rebalance is in progress, though it isn't.

I'll attach a screenshot and a cbcollectinfo.

 Comments   
Comment by Michael Nitschinger [ 19/Aug/14 ]
Screenshot from the UI
Comment by Michael Nitschinger [ 19/Aug/14 ]
More notes:

- I just restarted the service, but it didn't change, so I suspect it is stuck in a bad state somehow?
- Then I started this 103 machine and it picked it up and all is green, but it is asking for a rebalance.
- The rebalance went through successfully.

Maybe you can pick some info out of the logs; I'm fairly certain, though, that I didn't shut down the machines during a rebalance. If it was during a rebalance, is this expected to happen?
Comment by Aleksey Kondratenko [ 19/Aug/14 ]
Have you by any chance attempted graceful failover ?
Comment by Aleksey Kondratenko [ 19/Aug/14 ]
And without logs it's going to be hard for me to investigate anything.
Comment by Michael Nitschinger [ 20/Aug/14 ]
You mean before shutting down the VMs? No, I didn't do any failover.

I've attached a cbcollectinfo, what else do you want me to collect?
Comment by Aleksey Kondratenko [ 20/Aug/14 ]
Ah. I misunderstood your description as saying that you did failover but saw rebalance instead. Will take a look at cbcollectinfo which should hopefully be enough to diagnose it.
Comment by Aleksey Kondratenko [ 20/Aug/14 ]
Sorry, but the cbcollectinfo you've provided is useless. It needs to be gathered by root or at least the couchbase user.

And the best way is to use our UI (Logs section, Collect Logs tab).
Comment by Michael Nitschinger [ 20/Aug/14 ]
Woops, sorry about that. I had to reset the box, maybe I can reproduce it again.




[MB-12155] View query and index compaction failing on 1 node with error view_undefined Created: 09/Sep/14  Updated: 09/Sep/14

Status: Open
Project: Couchbase Server
Component/s: view-engine
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Ian McCloy Assignee: Harsha Havanur
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency
Triage: Untriaged
Operating System: Windows 64-bit
Is this a Regression?: Unknown

 Description   
Customer upgraded their 6 node cluster from 2.2 to 2.5.1 running on Microsoft Windows Server 2008 R2 Enterprise and one of their views stopped working.

It appears the indexing and index compaction stopped working on 1 node out of the 6. This appeared to only affect 1 design document.

Snips from the problem node:

[couchdb:error,2014-09-08T17:20:31.840,ns_1@HOST:<0.23288.321>:couch_log:error:42]Uncaught error in HTTP request: {throw,view_undefined}

Stacktrace: [{couch_set_view,get_group,3},
             {couch_set_view,get_map_view,4},
             {couch_view_merger,get_set_view,5},
             {couch_view_merger,simple_set_view_query,3},
             {couch_httpd,handle_request,6},
             {mochiweb_http,headers,5},
             {proc_lib,init_p_do_apply,3}]
[couchdb:info,2014-09-08T17:20:31.840,ns_1@HOST:<0.23288.321>:couch_log:info:39]10.7.43.229 - - POST /_view_merge/?stale=false 500

=====

[ns_server:warn,2014-09-08T17:25:10.506,ns_1@HOST:<0.14357.327>:compaction_daemon:do_chain_compactors:725]Compactor for view `Bucket/_design/DDOC/main` (pid [{type,view},
                                                {important,true},
                                                {name,
                                                  <<"Bucket/_design/DDoc/main">>},
                                                {fa,
                                                  {#Fun<compaction_daemon.16.22390493>,
                                                  [<<"Bucket">>,
                                                    <<"_design/DDoc">>,main,
                                                    {config,
                                                    {30,18446744073709551616},
                                                    {30,18446744073709551616},
                                                    undefined,false,false,
                                                    {daemon_config,30,
                                                      131072}},
                                                    false,
                                                    {[{type,bucket}]}]}}]) terminated unexpectedly: {error,
                                                                                                    view_undefined}
[ns_server:warn,2014-09-08T17:25:10.506,ns_1@HOST:<0.14267.327>:compaction_daemon:do_chain_compactors:730]Compactor for view `Bucket/_design/DDoc` (pid [{type,view},
                                            {name,<<"Bucket/_design/DDoc">>},
                                            {important,false},
                                            {fa,
                                            {#Fun<compaction_daemon.20.107749383>,
                                              [<<"Bucket">>,<<"_design/DDoc">>,
                                              {config,
                                                {30,18446744073709551616},
                                                {30,18446744073709551616},
                                                undefined,false,false,
                                                {daemon_config,30,131072}},
                                              false,
                                              {[{type,bucket}]}]}}]) terminated unexpectedly (ignoring this): {error,
                                                                                                                view_undefined}
[ns_server:debug,2014-09-08T17:25:10.506,ns_1@HOST:compaction_daemon<0.480.0>:compaction_daemon:handle_info:505]Finished compaction iteration.




[MB-11936] "IP address seems to have changed. Unable to listen on 'xxx.xxx.xxx.xxx'." Error message misleading Created: 12/Aug/14  Updated: 12/Aug/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 2.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.5.0, 2.5.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Mark Woosey Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: ALL

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
"IP address seems to have changed. Unable to listen" message appears for a variety of errors, more often than not ones internal and unrelated to external facing IP that is listed within the message.

More accurate/precise error messages would allow for quicker diagnosis by support and (potentially) less confusion for users.




[MB-10266] Testrunner does not mark tests failed if imports fail Created: 20/Feb/14  Updated: 13/Aug/14

Status: Open
Project: Couchbase Server
Component/s: test-execution
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Volker Mische Assignee: Ketaki Gangal
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged

 Description   
The testrunner test run [1] was marked as success although the tests weren't actually run. It failed with

Traceback (most recent call last):
  File "./testrunner", line 326, in <module>
    suite = unittest.TestLoader().loadTestsFromName(before_suite_name)
  File "/usr/lib/python2.7/unittest/loader.py", line 91, in loadTestsFromName
    module = __import__('.'.join(parts_copy))
ImportError: Import by filename is not supported.

Probably due to not finding the configuration file (it was moved into another directory). This should make the build fail, not pass.

[1] http://factory.couchbase.com/job/couchdb-gerrit-views-upr/52/consoleFull

 Comments   
Comment by Wayne Siu [ 09/Jun/14 ]
Ketaki,
a. Can you review whether testrunner still marks the job as succeeded if it does not get run?
b. Do we still run this job? Checking Jenkins, the last time this job was run was in March.
Comment by Volker Mische [ 11/Jun/14 ]
b: we don't run this job anymore. It was for the UPR work on the view-engine which has now been merged. The job can be removed (and also all others that work with the couchdb upr branch). Though "a" might still be there.




[MB-6923] Please rename lib/couchbase module so that it doesn't conflict with public sdk Created: 15/Oct/12  Updated: 13/Aug/14

Status: Open
Project: Couchbase Server
Component/s: test-execution
Affects Version/s: None
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Tommie McAfee Assignee: Deepkaran Salooja
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
Hey Deep, I believe you have many tests relying on the couchbase.py module. The problem is that whenever I have the public couchbase sdk installed and try to use anything from testrunner that relies on rest_client, I get the following error:

  File "../lib/membase/api/rest_client.py", line 8, in <module>
    from couchbase.document import DesignDocument, View
ImportError: No module named document


In the past I've worked around this by modifying sys.path to import the local couchbase module first. But now I have a complicated situation where this is no longer working.
It would be great to find another name for this: 'lib/couchbaseinternal' or 'lib/rest_couchbase'?



 Comments   
Comment by Thuan Nguyen [ 16/Oct/12 ]
Integrated in single-node-2.0.x-windows7-64-view #18 (See [http://qa.hq.northscale.net/job/single-node-2.0.x-windows7-64-view/18/])
    MB-6923: workaround for couchbase module confilct (Revision a81142c9617483e325e4eab98e8fc92ecae68b5a)

     Result = UNSTABLE
tmcafee :
Files :
* lib/membase/api/rest_client.py
Comment by Thuan Nguyen [ 16/Oct/12 ]
Integrated in multi-nodes-2.0.x-windows-64-backup-cli #18 (See [http://qa.hq.northscale.net/job/multi-nodes-2.0.x-windows-64-backup-cli/18/])
    MB-6923: workaround for couchbase module confilct (Revision a81142c9617483e325e4eab98e8fc92ecae68b5a)

     Result = UNSTABLE
tmcafee :
Files :
* lib/membase/api/rest_client.py
Comment by Thuan Nguyen [ 16/Oct/12 ]
Integrated in single-node-windows-64-install #372 (See [http://qa.hq.northscale.net/job/single-node-windows-64-install/372/])
    MB-6923: workaround for couchbase module confilct (Revision a81142c9617483e325e4eab98e8fc92ecae68b5a)

     Result = SUCCESS
tmcafee :
Files :
* lib/membase/api/rest_client.py
Comment by Thuan Nguyen [ 17/Oct/12 ]
Integrated in multi-nodes-windows-64-viewtest #20 (See [http://qa.hq.northscale.net/job/multi-nodes-windows-64-viewtest/20/])
    MB-6923: workaround for couchbase module confilct (Revision a81142c9617483e325e4eab98e8fc92ecae68b5a)

     Result = SUCCESS
tmcafee :
Files :
* lib/membase/api/rest_client.py
Comment by Thuan Nguyen [ 17/Oct/12 ]
Integrated in multi-nodes-2.0.x-windows-64-install #17 (See [http://qa.hq.northscale.net/job/multi-nodes-2.0.x-windows-64-install/17/])
    MB-6923: workaround for couchbase module confilct (Revision a81142c9617483e325e4eab98e8fc92ecae68b5a)

     Result = SUCCESS
tmcafee :
Files :
* lib/membase/api/rest_client.py
Comment by Maria McDuff (Inactive) [ 15/Apr/14 ]
Deep, Tommie,

is this still a valid request? or we can close this now?
Comment by Deepkaran Salooja [ 22/Apr/14 ]
Tommie,

Do we need this?
Comment by Tommie McAfee [ 23/Jun/14 ]
I'd still like it.

the workaround is to remove lib from the sys.path before import
https://github.com/couchbase/testrunner/blob/master/pysystests/app/sdk_client_tasks.py#L38




[MB-12424] Query command line shell Created: 23/Oct/14  Updated: 23/Oct/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-alpha, cbq-beta
Fix Version/s: None
Security Level: Public

Type: Epic Priority: Major
Reporter: Gerald Sangudi Assignee: Isha Kandaswamy
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Epic Name: Query command line shell
Epic Status: To Do

 Description   
Enhance the cbq command line shell into a best of breed database shell based on best practices in functionality and design across both traditional and emerging databases.

Initial design doc by @colm is here: https://docs.google.com/document/d/10UeZ5NMdHPhUoJjiefhDgLIL83IAU71M_IXXi6T_taQ/edit




[MB-12426] Simba ODBC and JDBC drivers for SQL-92 Created: 23/Oct/14  Updated: 23/Oct/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4, cbq-alpha, cbq-beta
Fix Version/s: None
Security Level: Public

Type: Epic Priority: Major
Reporter: Gerald Sangudi Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Epic Name: Simba ODBC and JDBC drivers for SQL-92
Epic Status: To Do

 Description   
Provide APIs, releases, and support for Simba to build ODBC and JDBC drivers for SQL-92.

High-level terms: https://docs.google.com/document/d/1goSj-M7uh_JnF6TbMs1EDu3xnyJwt1YtxMTalMceCFc/edit#heading=h.b6dg10l9aocq




[MB-12428] N1QL tutorial Created: 23/Oct/14  Updated: 23/Oct/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4, cbq-alpha, cbq-beta
Fix Version/s: None
Security Level: Public

Type: Epic Priority: Major
Reporter: Gerald Sangudi Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Epic Name: N1QL tutorial
Epic Status: To Do

 Description   
Interactive tutorial that is available online and installed locally with query.

Continue to improve and refine this tutorial with functionality, examples, flow, data sets, narrative, etc.




[MB-12427] N1QL reference manual Created: 23/Oct/14  Updated: 23/Oct/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4, cbq-alpha, cbq-beta
Fix Version/s: None
Security Level: Public

Type: Epic Priority: Major
Reporter: Gerald Sangudi Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Epic Name: N1QL reference manual
Epic Status: To Do

 Description   
Work with PM and tech pubs to produce N1QL reference manual.




[MB-12429] N1QL error handling Created: 23/Oct/14  Updated: 23/Oct/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4, cbq-alpha, cbq-beta
Fix Version/s: None
Security Level: Public

Type: Epic Priority: Major
Reporter: Gerald Sangudi Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Epic Name: N1QL error handling
Epic Status: To Do

 Description   
Define error codes, messages, levels, structure, policy, and APIs.

Implement same.




[MB-12437] UI warns setting replica count to zero requires rebalance. Rebalance is grayed out after saving changes Created: 24/Oct/14  Updated: 24/Oct/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Todd Greenstein Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: CentOS 7, 4 node cluster running 3.0 GA

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
When the replica count is changed from 1 to 0, a warning message says "rebalance is required". Once the changes are saved, the rebalance option is grayed out.




[MB-12088] Memcached should return an uninitiated error code Created: 28/Aug/14  Updated: 25/Oct/14

Status: Open
Project: Couchbase Server
Component/s: memcached
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Patrick Varley Assignee: Trond Norbye
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
This is a spin-off from MB-11875 and CBSE-1370.

There is no way that CCCP can tell whether a node is uninitiated versus hitting an auth error.

Currently LCB does not retry on auth error.

This can be painful when a user is doing maintenance, as it means they have to keep updating the bootstrap list when they have removed a node from the cluster that they plan to add back in.

If memcached returned an uninitiated error code, LCB would know to try the other nodes in the bootstrap list.
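
To make the requested client behavior concrete, here is a sketch of a bootstrap loop that could tell the two cases apart. The sentinel error values are hypothetical, since today both cases surface as the same auth failure:

{code}
package main

import (
	"errors"
	"fmt"
)

// Hypothetical sentinel errors: memcached has no "uninitialized" status
// today, which is exactly what this report asks for.
var (
	errAuth          = errors.New("authentication failure")
	errUninitialized = errors.New("node uninitialized (at setup wizard)")
)

// bootstrap tries each node in the list. An uninitialized node is
// skipped, while a genuine auth error (errAuth) fails fast, so a node
// removed for maintenance no longer forces bootstrap-list edits.
func bootstrap(nodes []string, connect func(string) error) (string, error) {
	for _, n := range nodes {
		err := connect(n)
		if err == nil {
			return n, nil
		}
		if errors.Is(err, errUninitialized) {
			continue // node was removed from the cluster; try the next one
		}
		return "", err // real auth failure: bad credentials
	}
	return "", errors.New("no usable bootstrap node")
}

func main() {
	connect := func(n string) error {
		if n == "10.0.0.1:11210" {
			return errUninitialized // removed from the cluster
		}
		return nil
	}
	node, err := bootstrap([]string{"10.0.0.1:11210", "10.0.0.2:11210"}, connect)
	fmt.Println(node, err)
}
{code}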

 Comments   
Comment by Patrick Varley [ 28/Aug/14 ]
What I mean by "uninitiated" is the node has been removed from the cluster and is at the setup wizard.
Comment by Trond Norbye [ 23/Oct/14 ]
CCCP is an ep-engine extension and not handled by memcached in a generic fashion. You have to be connected to a bucket in order to run the CCCP request, which means that you have to perform the correct SASL AUTH in order to run CCCP (and since the bucket isn't defined, the SASL auth _will_ fail). We have code in memcached that opens up for authenticating as a user and then selecting the bucket, which could get you a step closer to what you want. With that you could connect, but selecting the bucket would fail because the buckets won't exist.
Comment by Matt Ingenthron [ 23/Oct/14 ]
That's fair that CCCP happens at a lower level. Is it possible to distinguish "no buckets defined" from "ns_server hasn't told me anything at all yet"? I'd imagine that at the memcached/bucket-engine level, there could be a way to know if configuration for _admin at least has come in.

The concern here is that the order in which the configuration is published is non-deterministic with respect to when those services are ready to provide service for what the config says. That makes it impossible for a client to know the difference between 'the service will be there, but it is starting up' and 'that service isn't there, so my configuration is out of date'.

Any creative thoughts on how we can get closer? Can we have the configuration happen over a different channel than 11210? We were able to determine this with HTTP, and I'm sure there must be a way to approach it with CCCP.
Comment by Trond Norbye [ 23/Oct/14 ]
To elaborate a bit more.. memcached does not persist any kind of state. During startup it reads its configuration file, and then awaits for ns_server to send individual commands to create the different buckets. Shutdown can happen in 3 different ways: a) ns_server can delete each individual bucket. That doesn't really delete the bucket, but tells the engine to shut down nicely, and then memcached makes the bucket "unavailable". b) ns_server can tell memcached to "die".. if that happens memcached willl call _exit() and terminate _immediately_ without trying to shut down the buckets (this will result in dataloss). c) ns_server can tell memcached to shutdown. memcached will stop serving clients, and now wait for each of the buckets to stop cleanly.

ns_server may choose to use any of these options in when it move a node to an uninitiated state (for b and c it could just restart memcached and it would not know of any of the earlier buckets). Option b would most likely be the fastest way if you don't care about the data anyway.
Comment by Trond Norbye [ 23/Oct/14 ]
matt: I would _assume_ that _admin exists before memcached is started, otherwise ns_server wouldn't be able to connect to memcached either. ns_server populates the "password file" and send a message to memcached to refresh the password database (which send the response back when the file is read, so that ns_server knows when memcached is ready to use the new file). I don't know if ns_server waits to publish the knowledge about the bucket before it receives the response or not (but I would expect it to await publishing the availability of the bucket until it is ready to be used)
Comment by Matt Ingenthron [ 24/Oct/14 ]
Could that same file based communication mechanism be used to configure memcached/bucket-engine/ep_engine with what buckets should be available?

Alternatively, there could be some level at which ns-server goes in as _admin and will take it out of "uninitialized" and into "providing service" after ns-server verifies all buckets are configured correctly on the node.

At the moment, the service will start listening on it's port number before it's ready, so it's not possible for a protocol client to determine the difference between not ready and not there.
Comment by Trond Norbye [ 25/Oct/14 ]
So you mean something like making CCCP a first-class citizen and not just tied to ep-engine? ;) With the optimizations done to ns_server's layer, have we done any performance tests to see whether it is worth adding more logic in the layer that doesn't know what's going on?




[MB-12469] 'other data' for disk calculated differently than how OS shows free/used space Created: 28/Oct/14  Updated: 28/Oct/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Alex Ma Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: customer
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0 Enterprise Edition (build-1209)
RHEL 6
Separate mount point for data/index volume.

Attachments: PNG File commandline.png     PNG File other data.png    

 Description   
Server nodes->detailed view shows 'other data' measured differently than how the OS shows this.

See attached screens.

From UI:
In use: 370mb
Other data: 385mb
Free space: 73gb

From command line:
Couchbase data directory: 374mb
Total used on mnt point(/mnt/ephemeral): 554mb
Available free space: 70gb







[MB-12470] cbstats reset - show how long stats have been collected for or when last stats reset was done. Created: 28/Oct/14  Updated: 28/Oct/14

Status: Open
Project: Couchbase Server
Component/s: memcached
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Alex Ma Assignee: Trond Norbye
Resolution: Unresolved Votes: 0
Labels: customer
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:  
3.0.0 Enterprise Edition (build-1209)
RHEL 6


 Description   
Statistics shown through cbstats can be reset; however, there is nothing to show how long statistics have been gathering or when stats were last reset for this node.

Some of the statistics available are cumulative (cmd_get, ep_num_value_ejects, ...), so knowing the collection duration or reset time is helpful for some customers.

 Comments   
Comment by Trond Norbye [ 28/Oct/14 ]
People are normally better off never resetting the stats, and instead remembering the previous value and calculating the delta for the period (not all stats may be reset, so you sort of have to do this anyway).
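
A sketch of that approach: sample the cumulative counters periodically, keep the previous sample, and report per-period deltas instead of resetting. The stat names come from cbstats; the sampling mechanics are omitted:

{code}
package main

import "fmt"

// statsDelta keeps the previous sample of cumulative counters and
// reports the per-period change, so the stats never need resetting.
type statsDelta struct {
	prev map[string]uint64
}

func (s *statsDelta) next(sample map[string]uint64) map[string]uint64 {
	out := make(map[string]uint64, len(sample))
	for k, v := range sample {
		out[k] = v - s.prev[k] // a missing previous key reads as 0
	}
	s.prev = sample
	return out
}

func main() {
	s := &statsDelta{prev: map[string]uint64{"cmd_get": 100, "ep_num_value_ejects": 7}}
	period := s.next(map[string]uint64{"cmd_get": 160, "ep_num_value_ejects": 7})
	fmt.Println(period) // map[cmd_get:60 ep_num_value_ejects:0]
}
{code}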




[MB-12480] changing bucket allocation - misleading range of available values. Created: 28/Oct/14  Updated: 28/Oct/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Alex Ma Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: customer
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File PastedGraphic-2.png    

 Description   
When changing the RAM quota for a given bucket, two values are provided for the range of acceptable inputs:
(256mb - <Max Available ram>)

Where <Max available ram> is calculated dynamically based on remaining server quota.

If the bucket is already filled with data that would not fit within the 256mb allocation, then the text displayed to guide memory values simply misleads the user.

The range displayed should show the actual values within which the server will allow the bucket to be sized.
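
A sketch of the range computation the report asks for. The lower bound is raised to the bucket's current data size rather than a flat 256mb floor; the exact server-side constraint is an assumption here:

{code}
package main

import "fmt"

// quotaRangeMB computes the RAM-quota range to display. The lower bound
// is raised to the bucket's current data size so the UI never suggests
// a value the server will reject (assumed constraint, for illustration).
func quotaRangeMB(minQuotaMB, bucketDataMB, currentQuotaMB, freeServerQuotaMB int) (lo, hi int) {
	lo = minQuotaMB
	if bucketDataMB > lo {
		lo = bucketDataMB
	}
	hi = currentQuotaMB + freeServerQuotaMB
	return lo, hi
}

func main() {
	lo, hi := quotaRangeMB(256, 900, 1024, 2048)
	fmt.Printf("acceptable range: (%dmb - %dmb)\n", lo, hi) // (900mb - 3072mb)
}
{code}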




[MB-12467] Server does not start after error Created: 27/Oct/14  Updated: 28/Oct/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Hanju Kim Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
After an unexpected server shutdown (power failure), Couchbase did not start after reboot. However, if restarted manually, it works fine.

I found the following error in the log.

[ns_server:error,2014-10-28T12:14:32.440,ns_1@127.0.0.1:compaction_new_daemon<0.450.0>:compaction_new_daemon:log_compactors_exit:1303]Compactor <0.459.0> exited unexpectedly: {noproc,
                                          {gen_server,call,
                                           [{'ns_memcached-XQ',
                                             'ns_1@127.0.0.1'},
                                            {raw_stats,<<"diskinfo">>,
                                             #Fun<compaction_new_daemon.18.43740339>,
                                             {<<"0">>,<<"0">>}},
                                            180000]}}. Moving to the next bucket.




[MB-12471] Warmup time - UI level indicator, better ETA in cbstats Created: 28/Oct/14  Updated: 28/Oct/14

Status: Open
Project: Couchbase Server
Component/s: ns_server, UI
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Alex Ma Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: customer
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:
3.0.0 Enterprise Edition (build-1209)
RHEL 6


 Description   
Improve the warmup-time ETA in the product.

Currently the UI simply shows a yellow pending light with no indication of what is happening on server start/restart.

cbstats warmup does provide some metrics but manual calculation has to be done to determine actual warmup time.

Currently a user must run cbstats warmup twice to get a rate for how quickly keys/values are being loaded, then divide the estimated key/value count by that rate to determine when the server will be available.

Please add a measurement to both cbstats warmup and the 'Servers' tab in the UI so that warmup time and progress can be determined more easily.

./cbstats localhost:11210 warmup
 ep_warmup: enabled
 ep_warmup_dups: 0
 ep_warmup_estimate_time: 69547
 ep_warmup_estimated_key_count: 94905836
 ep_warmup_estimated_value_count: 94905836
 ep_warmup_item_expired: 0
 ep_warmup_key_count: 94905836
 ep_warmup_keys_time: 121072894
 ep_warmup_min_item_threshold: 100
 ep_warmup_min_memory_threshold: 100
 ep_warmup_oom: 0
 ep_warmup_state: loading data
 ep_warmup_thread: running
 ep_warmup_value_count: 5388215
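
For reference, a sketch of the manual calculation described above: take a warmup counter from two cbstats runs a known interval apart, derive a load rate, and divide the remaining estimated items by it. The sample numbers below are illustrative only:

{code}
package main

import "fmt"

// warmupETASeconds estimates remaining warmup time from two samples of
// a cbstats warmup counter taken intervalSec seconds apart.
func warmupETASeconds(count1, count2, estimatedTotal int64, intervalSec float64) float64 {
	rate := float64(count2-count1) / intervalSec // items loaded per second
	if rate <= 0 {
		return -1 // no progress between samples; ETA unknown
	}
	return float64(estimatedTotal-count2) / rate
}

func main() {
	// Illustrative values: two ep_warmup_value_count samples taken 10s
	// apart, against the estimated total from the output above.
	eta := warmupETASeconds(5388215, 5988215, 94905836, 10)
	fmt.Printf("estimated warmup remaining: %.0fs\n", eta)
}
{code}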





[MB-12229] unable to start cbq engine if there are no buckets in couchbase yet or there are only sasl buckets Created: 23/Sep/14  Updated: 29/Oct/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
[root@grape-001 main]# ./cbq-engine -datastore http://10.1.3.176:8091
_time="2014-09-23T08:42:15-07:00" _level="ERROR" _msg="Cannot connect to url http://10.1.3.176:8091 - cause: HTTP error 401 Unauthorized getting \"http://10.1.3.176:8091/pools\": "
"

 Comments   
Comment by Gerald Sangudi [ 23/Sep/14 ]
I will be providing a SASL authentication API, and Manik will implement that API for Couchbase buckets.




[MB-12023] XDCR@next release - Integrate with CLI Created: 19/Aug/14  Updated: 29/Oct/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release




[MB-12409] n1ql error messages should be more verbose Created: 22/Oct/14  Updated: 23/Oct/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Ketaki Gangal Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
For example, running various queries today returns a standard error: "syntax error".

There is no way for users to find the relevant documentation or to correct the query's syntax.

cbq> drop index idx1;
syntax error

correct usage: drop index s.idx1

The above is a trivial example, but this occurs in multiple places where "syntax error" is the only output that users can see.
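
To illustrate, here is a sketch of what a more verbose error could carry for the DROP INDEX example above. The structure and the code value are assumptions, not a settled design:

{code}
package main

import "fmt"

// queryError bundles what a user needs to act on a failure: a stable
// code, the offending statement, and a hint with the correct usage.
type queryError struct {
	code  int
	query string
	msg   string
	hint  string
}

func (e *queryError) Error() string {
	return fmt.Sprintf("error %d in %q: %s (hint: %s)", e.code, e.query, e.msg, e.hint)
}

func main() {
	err := &queryError{
		code:  3000, // placeholder code
		query: "drop index idx1",
		msg:   "DROP INDEX requires a qualified index name",
		hint:  "drop index s.idx1",
	}
	fmt.Println(err)
}
{code}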






[MB-12525] Rerun 95% GET latency test for 2.5.1 build, forcing equivalent behavior to STALE=UPDATE_AFTER Created: 30/Oct/14  Updated: 30/Oct/14  Due: 07/Nov/14

Status: Open
Project: Couchbase Server
Component/s: performance
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Thomas Anderson Assignee: Thomas Anderson
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 12h
Time Spent: Not Specified
Original Estimate: 12h
Environment: 4x4 cluster, 1 Bucket, 150M x 2K documents, DGM 20%, 6Kops with 20% cache miss rate


 Description   
Modify the kv_latency test to force STALE=UPDATE_AFTER behavior, to allow a performance comparison between 2.5.1 and 3.0.1-1440. Use UPSERT before GET to force a flush of the buffer.




[MB-12342] Man page for couchbase-cli Created: 13/Oct/14  Updated: 31/Oct/14

Status: Open
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Perry Krug Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: supportability
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Relates to

 Description   
It would be extremely helpful to have a man page for couchbase-cli. It would really be great for all CLI tools, but this is the main one that our customers use for automated deployments.






[MB-12537] Cleanup testrunner code Created: 03/Nov/14  Updated: 03/Nov/14

Status: Open
Project: Couchbase Server
Component/s: test-execution
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Volker Mische Assignee: Volker Mische
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
The testrunner code got bloated over the years. This is a bug you can refer to when you clean things up.




[MB-12568] Cannot connect to localhost on port 8091 to complete install Created: 06/Nov/14  Updated: 06/Nov/14

Status: Open
Project: Couchbase Server
Component/s: installer
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: profhess Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: customer, usability, windows
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Windows 7 64-bit Couchbase 3.0

Triage: Untriaged
Operating System: Windows 64-bit
Is this a Regression?: Unknown

 Description   
I went through the install process. When it opens my browser to have me complete the installation, I cannot connect to the admin console on port 8091; I am receiving an ERR_CONN_REFUSED error. I ran netstat and the port does not even show up. This is also referenced in the forum: https://forums.couchbase.com/t/cannot-connect-to-localhost-on-port-8091-to-complete-install/1867/3

 Comments   
Comment by profhess [ 06/Nov/14 ]
I ran cbcollect, but Jira is not allowing me to attach files. Ideas?
Comment by Bin Cui [ 06/Nov/14 ]
There are several things you can do to diagnose what's going on here:
1. From Windows Task Manager, go to the Processes tab and click "Show processes from all users". Check if there are any processes named erl.exe, memcached.exe or moxi.exe.
2. Go to your root installation directory, say c:\test, then go to c:\test\etc\couchbase and open static_config. Check whether all path_config entries point to valid directory paths.
3. Run cbcollect_info.exe under the c:\test\bin directory and upload the zip file for further analysis.
Comment by profhess [ 06/Nov/14 ]
Checked Task Manager:

Found two erl.exe processes; no memcached.exe or moxi.exe.

Checked static_config: no tmp directory or couchbase-server.node file.

Ran cbcollect; could not upload the zip because Jira is not allowing attachments.




[MB-12415] Convert _run.go xdcr integration test files to _test.go to perform code coverage Created: 22/Oct/14  Updated: 06/Nov/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Aruna Piravi Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 24h
Time Spent: Not Specified
Original Estimate: 24h

Epic Link: XDCR next release

 Description   
Repo - Xiaomei-Zhang/couchbase_goxdcr_impl
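
As a rough illustration of the conversion, here is a minimal sketch (runPipeline and its signature are placeholders for whatever the _run.go driver currently does, not the repo's actual API): moving the entry point from a main()-style _run.go into a _test.go file lets `go test -cover` instrument the integration run.
{noformat}
// xdcr_pipeline_test.go -- hypothetical skeleton of the converted file.
package xdcr

import "testing"

// runPipeline stands in for the existing integration driver that
// _run.go invokes from main() today.
func runPipeline() error { return nil }

func TestPipelineIntegration(t *testing.T) {
	if testing.Short() {
		t.Skip("integration test; skipped in -short mode")
	}
	if err := runPipeline(); err != nil {
		t.Fatalf("pipeline run failed: %v", err)
	}
}
{noformat}
Running `go test -coverprofile=cover.out` then produces a coverage profile for the package; `-coverpkg` can widen it to the other goxdcr packages the pipeline exercises.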




[MB-12544] XDCR@next release - Pipeline live update Created: 03/Nov/14  Updated: 07/Nov/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: feature-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Yu Sui Assignee: Yu Sui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release




[MB-12610]  HTTP/1.1 400 error when creating replication across datacenters Created: 10/Nov/14  Updated: 13/Nov/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: 2.2.0
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: anthony Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: CentOS 6.5


 Description   
I'm using Couchbase 2.2 on CentOS 6.5. I have 5 clusters configured in Rackspace: 2 in one datacenter (called ORD) and one in each of 3 others (DFW, IAD, LON).

Replication works when set up between the two clusters in the same datacenter, but not between differing datacenters. I've verified with Rackspace that the network settings are all correct: iptables firewalls are (currently) turned off, all ports open and listening. I can telnet on both 8091 and 8092 between machines in the clusters.

I've tried doing replication both with the browser gui and with the REST API. From the command line I can query other data centers and see all the connections I've set up, but not create replication. Here is the output from two failed attempts to connect ORD-1 to DFW, with two slightly different errors, and a successful connection from ORD-1 to ORD-2.

I've generated cbcollect_info zip files from both sites for your analysis, but haven't been successful attaching them to this ticket. Will keep trying.

Any suggestions/help is appreciated.

thx
anthony

{noformat}
[dev1@prod-ord-couch-01 ~]$ curl -v -X POST -u root: http://:8091/controller/createReplication -d 056cc0565912c179567668d907b95189 -d fromBucket=default -d toCluster=in-dfw-cluster -d toBucket=default -d replicationType=continuous
* About to connect() to port 8091 (#0)
* Trying ... connected
* Connected to () port 8091 (#0)
* Server auth using Basic with user 'root'

POST /controller/createReplication HTTP/1.1
Authorization: Basic cm9vdDpsYXN0Y2hhbmNl
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Host: 192.237.244.139:8091
Accept: */*
Content-Length: 120
Content-Type: application/x-www-form-urlencoded

< HTTP/1.1 400 Bad Request
< Server: Couchbase Server 2.2.0-837-rel-community
< Pragma: no-cache
< Date: Fri, 07 Nov 2014 23:36:26 GMT
< Content-Type: application/json
< Content-Length: 71
< Cache-Control: no-cache
<
* Connection #0 to host left intact
* Closing connection #0
{"errors":{"_":"Timeout exceeded when trying to reach remote cluster"}}
{noformat}
{noformat}
[dev1@prod-ord-couch-01 ~]$ curl -v -X POST -u root: http://192.237.244.139:8091/controller/createReplication -d 056cc0565912c179567668d907b95189 -d fromBucket=default -d toCluster=in-dfw-cluster -d toBucket=default -d replicationType=continuous
* About to connect() to port 8091 (#0)
* Trying ... connected
* Connected to () port 8091 (#0)
* Server auth using Basic with user 'root'

POST /controller/createReplication HTTP/1.1
Authorization: Basic cm9vdDpsYXN0Y2hhbmNl
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Host: :8091
Accept: */*
Content-Length: 120
Content-Type: application/x-www-form-urlencoded

< HTTP/1.1 400 Bad Request
< Server: Couchbase Server 2.2.0-837-rel-community
< Pragma: no-cache
< Date: Fri, 07 Nov 2014 23:37:23 GMT
< Content-Type: application/json
< Content-Length: 80
< Cache-Control: no-cache
<
* Connection #0 to host left intact
* Closing connection #0
{"errors":{"_":"Failed to grab remote bucket defaultfrom any of known nodes"}}
{noformat}
=============================================
And here's a successful connection from ORD-1 to ORD-2:
{noformat}
[dev1@prod-ord-couch-01 ~]$ curl -v -X POST -u root: http://:8091/controller/createReplication -d 056cc0565912c179567668d907b95189 -d fromBucket=default -d toCluster=ord-cluster-02 -d toBucket=default -d replicationType=continuous
* About to connect() to port 8091 (#0)
* Trying ... connected
* Connected to () port 8091 (#0)
* Server auth using Basic with user 'root'

POST /controller/createReplication HTTP/1.1
Authorization: Basic cm9vdDpsYXN0Y2hhbmNl
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Host: :8091
Accept: */*
Content-Length: 120
Content-Type: application/x-www-form-urlencoded

< HTTP/1.1 200 OK
< Server: Couchbase Server 2.2.0-837-rel-community
< Pragma: no-cache
< Date: Fri, 07 Nov 2014 23:35:24 GMT
< Content-Type: application/json
< Content-Length: 109
< Cache-Control: no-cache
<
* Connection #0 to host left intact
* Closing connection #0
{"database":"http://:8092/_replicator","id":"f71fd537dabe8f3e31935a57f82ac6cf/default/default"}
{noformat}

 Comments   
Comment by anthony [ 10/Nov/14 ]
The file upload dialog says 'can't communicate with jira' when I try to attach the cbcollect zip files. I can email them, instead, if that's ok.

thx
anthony
Comment by Aleksey Kondratenko [ 10/Nov/14 ]
The error message suggests that the nodes of the source cluster are having trouble reaching port 8091 on the nodes of the remote cluster.

In order to troubleshoot, I advise you to ssh to one of the nodes in the source cluster and try:

curl http://<remote-cluster-node>:8091/pools

If that doesn't work, then something isn't right with your VPN or firewall setup.
Comment by anthony [ 10/Nov/14 ]
Oh, yeah, done that. After I found the instructions at http://docs.couchbase.com/couchbase-manual-2.5/cb-rest-api/#xdcr-rest-api I went through and ran all the commands.

I can query the pools and remoteclusters, and I can see the default buckets, get the uuids, etc. I'm able to create destination cluster references and delete them. It's *only* failing when I try to create replication.

And, as I said, it only fails on creating replication when I'm going from one data center to another; there's no issue when I'm inside the same data center.
Also, I'd like to reiterate that there are no firewall settings, no VPN, and I've confirmed with rackspace that there is no network setup interfering; I can telnet to both ports without a problem, in both directions.

thx
anthony
Comment by Aleksey Kondratenko [ 10/Nov/14 ]
Can you please elaborate what exactly you did? And from/to which exact nodes ?
Comment by Aleksey Kondratenko [ 10/Nov/14 ]
Also with collectinfos I might be able to help more. That jira doesn't work is unfortunate, but there are lots of ways to provide .zip files (e.g. dropbox, google drive, etc).
Comment by anthony [ 10/Nov/14 ]
sure. There are three nodes in each couchbase cluster; I've been using the one I first installed and configured the cluster on to connect to the other machines, both from the command line and from the browser page. I've tried it from the other machines, too, but didn't see any difference, so I've been sticking to the main one.

In the browser, I went to the XDCR tab in the couchbase control panel, created a new destination cluster reference for each one of the other four clusters (one in the same datacenter, three remote) and then tried to create replication between them. In all cases, in every direction, it either timed out or couldn't find the bucket called 'default'--except for the local datacenter case, of course, which works perfectly.

Then I tried doing the REST API stuff to see if I got better messages. I did:

curl -u root:<mypasswd> http://192.237.244.139:8091/pools/default/remoteClusters

which gave me something like this; this is from the DFW cluster, calling the ORD cluster but the response is basically the same in any direction, depending of course on which clusters it's calling:

[{"name":"in-lon-cluster","uri":"/pools/default/remoteClusters/in-lon-cluster","validateURI":"/pools/default/remoteClusters/in-lon-cluster?just_validate=1","hostname":"162.13.148.75:8091","username":"root","uuid":"17c9c6bf1cd5b60da7fc8f66b7e93365","deleted":true},{"name":"in-iad-cluster","uri":"/pools/default/remoteClusters/in-iad-cluster","validateURI":"/pools/default/remoteClusters/in-iad-cluster?just_validate=1","hostname":"162.242.224.154:8091","username":"root","uuid":"3881754d70f876e7e34fdb21abb14497","deleted":true},{"name":"in-dfw-cluster","uri":"/pools/default/remoteClusters/in-dfw-cluster","validateURI":"/pools/default/remoteClusters/in-dfw-cluster?just_validate=1","hostname":"162.242.150.16:8091","username":"root","uuid":"a65ceb007c1d704896bea78d1df3bbe5","deleted":false},{"name":"ord-cluster-02","uri":"/pools/default/remoteClusters/ord-cluster-02","validateURI":"/pools/default/remoteClusters/ord-cluster-02?just_validate=1","hostname":"23.253.171.32:8091","username":"root","uuid":"f71fd537dabe8f3e31935a57f82ac6cf","deleted":false}]

So I could see that there are destination clusters in the other pool. From there I would do the exact commands I posted when I opened this ticket.

I've uploaded the zip files into https://drive.google.com/folderview?id=0B4ylLLPWA_fTQVBoMVhuOUlrVUk&usp=sharing

thx
anthony
Comment by Aleksey Kondratenko [ 12/Nov/14 ]
Logs don't add much to what you have already reported. The error message is saying that the source cluster could not reach port 8091 on the remote cluster's node.

Have you actually tried verifying connectivity between clusters the way I've asked you to verify?

Also be aware that 2.5.1 has a number of XDCR fixes that are not part of 2.2.0. I strongly recommend you pick up that release, although it will likely not affect the specific issue you're having.
Comment by anthony [ 13/Nov/14 ]
If you mean running "curl http://<remote-cluster-node>:8091/pools" then yes, as I said, I ran it. It works perfectly. Just like every other command I've tried, excepting the actual replication commands. Example below.

Any reason we shouldn't just jump to 3.0?

thx
anthony

[dev1@prod-ord-couch-01 ~]$ curl http://162.242.150.16:8091/pools
{"pools":[{"name":"default","uri":"/pools/default?uuid=a65ceb007c1d704896bea78d1df3bbe5","streamingUri":"/poolsStreaming/default?uuid=a65ceb007c1d704896bea78d1df3bbe5"}],"isAdminCreds":false,"isROAdminCreds":false,"settings":{"maxParallelIndexers":"/settings/maxParallelIndexers?uuid=a65ceb007c1d704896bea78d1df3bbe5","viewUpdateDaemon":"/settings/viewUpdateDaemon?uuid=a65ceb007c1d704896bea78d1df3bbe5"},"uuid":"a65ceb007c1d704896bea78d1df3bbe5","implementationVersion":"2.2.0-837-rel-community","componentsVersion":{"public_key":"0.13","asn1":"1.6.18","lhttpc":"1.3.0","ale":"8ca6d2a","os_mon":"2.2.7","couch_set_view":"1.2.0a-01dda76-git","compiler":"4.7.5","inets":"5.7.1","couch":"1.2.0a-01dda76-git","mapreduce":"1.0.0","couch_index_merger":"1.2.0a-01dda76-git","kernel":"2.14.5","crypto":"2.0.4","ssl":"4.1.6","sasl":"2.1.10","couch_view_parser":"1.0.0","ns_server":"2.2.0-837-rel-community","mochiweb":"2.4.2","syntax_tools":"1.6.7.1","xmerl":"1.2.10","oauth":"7d85d3ef","stdlib":"1.17.5"}}
Comment by Aleksey Kondratenko [ 13/Nov/14 ]
Jumping to 3.0.1 is not a bad idea, but it will likely not affect your case.

May I ask you to run that curl against all of the nodes, and not just for /pools but also for /pools/default/buckets/default?
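For reference, the requested check can be scripted; here is a small Go sketch (the node list and credentials are placeholders) that hits both /pools and /pools/default/buckets/default on each remote node:
{noformat}
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder list -- substitute every node of the remote cluster.
	nodes := []string{"162.242.150.16"}
	paths := []string{"/pools", "/pools/default/buckets/default"}
	client := &http.Client{Timeout: 10 * time.Second}
	for _, n := range nodes {
		for _, p := range paths {
			req, _ := http.NewRequest("GET", "http://"+n+":8091"+p, nil)
			req.SetBasicAuth("root", "password") // placeholder credentials
			resp, err := client.Do(req)
			if err != nil {
				fmt.Printf("%s%s: %v\n", n, p, err)
				continue
			}
			fmt.Printf("%s%s: %s\n", n, p, resp.Status)
			resp.Body.Close()
		}
	}
}
{noformat}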
Comment by anthony [ 13/Nov/14 ]
yes, I'll start working on that right away.

thx
anthony




[MB-12713] json non-doc ints are not validated correctly in where clause Created: 19/Nov/14  Updated: 19/Nov/14

Status: Open
Project: Couchbase Server
Component/s: None
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
I have one item whose value is the bare (non-document) integer 20:
cbq> select * from b1;
{
    "request_id": "02baa08c-93c4-4fa5-856d-36e93086135e",
    "signature": {
        "*": "*"
    },
    "results": [
        {
            "b1": 20
        }
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "30.451ms",
        "executionTime": "29.732ms",
        "resultCount": 1,
        "resultSize": 32
    }
}

Trying to use a WHERE expression returns an empty result:

cbq> select v from b1 where v>10;
{
    "request_id": "54d432f4-748b-4850-be0a-6fa2eed8d68e",
    "signature": {
        "v": "json"
    },
    "results": [
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "7.899ms",
        "executionTime": "7.358ms",
        "resultCount": 0,
        "resultSize": 0
    }
}






[MB-12740] Improve autofailover for replica counts over 1 Created: 21/Nov/14  Updated: 21/Nov/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 2.5.1, 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Perry Krug Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
If all buckets in a cluster have more than 1 replica, we should be able to automatically sustain the loss of more than one node. I agree we still don't want to do anything if multiple nodes fail at one time, but if one node fails and is automatically failed over, a second (or third) failed node should also be automatically failed over if there are enough replicas.

We likely also want to add a cluster setting to limit the maximum number of nodes that can be automatically failed over (extending the autofailover quota concept we currently have).
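
Purely as an illustration of the proposed policy (this is not ns_server code; the function, its inputs, and the quota setting are all hypothetical), a sketch of the decision for each detected failure:
{noformat}
package main

import "fmt"

// shouldAutoFailover sketches the proposed policy. minReplicas is the
// smallest replica count across all buckets, failedSoFar counts nodes
// already auto-failed over, and maxAutoFailovers is the proposed
// cluster setting extending today's autofailover quota.
func shouldAutoFailover(downNodes, minReplicas, failedSoFar, maxAutoFailovers int) bool {
	if downNodes != 1 {
		return false // still do nothing when multiple nodes fail at one time
	}
	if failedSoFar >= minReplicas {
		return false // no replica copies left to promote safely
	}
	return failedSoFar < maxAutoFailovers
}

func main() {
	// Two replicas, quota of 2: a second sequential failure is still covered.
	fmt.Println(shouldAutoFailover(1, 2, 1, 2)) // true
	fmt.Println(shouldAutoFailover(2, 2, 0, 2)) // false: simultaneous failures
}
{noformat}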




[MB-12739] Improve Auto-failover for RZA Created: 21/Nov/14  Updated: 21/Nov/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 2.5.1, 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Perry Krug Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
If an entire zone fails, Couchbase should be able to automatically fail over the entire group.

This should have split-brain precautions similar to our individual-node autofailover, in the sense that we only support autofailover of an entire group if there are 3 or more groups configured, and only so long as a single group appears down at a time.

 Comments   
Comment by Perry Krug [ 21/Nov/14 ]
The improvement of MB-12740 should also extend to multiple zones failing in succession when >1 replicas are configured




[MB-12630] Automatically run cbcollectinfo when node fails unexpectedly Created: 11/Nov/14  Updated: 11/Nov/14

Status: Open
Project: Couchbase Server
Component/s: tools
Affects Version/s: 2.2.0, 2.5.1, 3.0.1
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Gokul Krishnan Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
We should automatically capture collectinfo from a node (Alk prefers 'all' nodes, in a non-obtrusive manner) in every node-down/auto-failover scenario. This would ensure we have diagnostic logs captured immediately after such events, preventing the 'logs rolled over, we don't know why it failed' scenarios that support repeatedly encounters in the field.

This has also been suggested by customers.
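
A minimal sketch of the capture side (the hook and paths are assumptions; cbcollect_info's output-file argument is its normal CLI usage):
{noformat}
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// captureDiagnostics runs cbcollect_info into a timestamped zip.
// binPath is a placeholder for the node's Couchbase bin directory.
func captureDiagnostics(binPath string) error {
	out := fmt.Sprintf("/tmp/autofailover-%s.zip", time.Now().Format("20060102-150405"))
	return exec.Command(binPath+"/cbcollect_info", out).Run()
}

func main() {
	// Hypothetical hook: called by whatever detects node-down/auto-failover.
	if err := captureDiagnostics("/opt/couchbase/bin"); err != nil {
		fmt.Println("collectinfo capture failed:", err)
	}
}
{noformat}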




[MB-12265] Enhance cbbackup test suite on performance and footprint measurement Created: 25/Sep/14  Updated: 25/Nov/14

Status: In Progress
Project: Couchbase Server
Component/s: performance
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Bin Cui Assignee: Thomas Anderson
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
See CBSE-1407.

We are missing test coverage for the cbbackup tool in the following areas:
1. In a DGM scenario, watch the backup throughput performance.
2. In a DGM scenario, watch the memory consumption of the cbbackup process.

 Comments   
Comment by Thomas Anderson [ 25/Nov/14 ]
Testing the latest release, 3.0.2-1560: 100M x 2K documents, 4-node cluster, 64G RAM/node (considered a baseline evaluation to determine what performance tests to conduct on each release build).
* not-DGM, default settings: creates 12 concurrent streams consuming 15-18% of the cluster's CPU, with ~10% impact on a 10K ops workload - see MB-
* DGM of 20%: cbbackup performance is impacted by the concurrency.
* DGM of 5%: magnified contention for system resources.

Comparison against a plain ftp transfer of the bucket folder shows cbbackup is almost 3x slower, and it creates a backup footprint almost 2x larger than the files themselves. See MB-




[MB-12761] The server doesn't support ASCII protocol unless moxi is running Created: 25/Nov/14  Updated: 26/Nov/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Trond Norbye Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File Couchbase_Console__3_0_0-1156_.png    

 Description   
Running Couchbase Server as of 24 November, moxi isn't enabled by default, but if you try to modify a bucket you're shown the attached screenshot. The following things are wrong:

1) ASCII protocol is _only_ supported through moxi
2) The preferred port for our system is 11210, which is the data port and supports only the binary protocol (see the probe sketched below this list). Using 11211 will cause an extra network hop between moxi and memcached.
3) A dedicated port is only supported through moxi
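
To illustrate point 2, a small Go probe (a sketch; it only sends the memcached binary protocol's standard 24-byte NOOP frame) that talks straight to 11210 without moxi:
{noformat}
package main

import (
	"fmt"
	"io"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:11210") // data port, binary only
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
	// 24-byte binary-protocol request header: magic 0x80,
	// opcode 0x0a (NOOP), all other fields zero.
	req := make([]byte, 24)
	req[0], req[1] = 0x80, 0x0a
	if _, err := conn.Write(req); err != nil {
		fmt.Println("write:", err)
		return
	}
	resp := make([]byte, 24)
	if _, err := io.ReadFull(conn, resp); err != nil {
		fmt.Println("read:", err)
		return
	}
	fmt.Printf("response magic 0x%02x, status 0x%02x%02x\n", resp[0], resp[6], resp[7])
}
{noformat}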




 Comments   
Comment by Matt Ingenthron [ 25/Nov/14 ]
I believe I filed a bug on this previously. In that bug, I mentioned the UI indicates moxi and SASL auth are mutually exclusive, but that's not our implementation IIRC. Perhaps going into Sherlock we should consider moxi like all of the other "services" at the UI/configuration level?
Comment by Trond Norbye [ 26/Nov/14 ]
Matt: In Sherlock, moxi is a service you can enable during the configuration phase, and it's off by default (YAY!!!!)




[MB-12786] how json non-doc documents can be updated? Created: 26/Nov/14  Updated: 26/Nov/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
How can non-document (non-JSON) values be updated? The SET syntax only works for JSON documents.




[MB-12787] select * shows an error right after delete statement run Created: 26/Nov/14  Updated: 26/Nov/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
cbq> delete from b0 where b0.default.VMs[0].RAM = 12 limit 1;
{
    "request_id": "11092959-2992-438f-800b-fb7f757b474b",
    "signature": null,
    "results": [
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "19.081ms",
        "executionTime": "18.566ms",
        "resultCount": 0,
        "resultSize": 0
    }
}

cbq> select * from b0;
{
    "request_id": "a578af07-e35c-4ecb-af3d-cdf85ab1f6fa",
    "signature": {
        "*": "*"
    },
    "results": [
    ],
    "errors": [
        {
            "code": 5000,
            "msg": "Error doing bulk get - cause: {1 errors, starting with MCResponse status=KEY_ENOENT, opcode=GET, opaque=0, msg: Not found}"
        }
    ],
    "status": "errors",
    "metrics": {
        "elapsedTime": "15.104ms",
        "executionTime": "14.758ms",
        "resultCount": 0,
        "resultSize": 0,
        "errorCount": 1
    }
}

cbq>





[MB-12789] Per-node disk paths not intuitive to users Created: 16/Jun/14  Updated: 26/Nov/14

Status: Open
Project: Couchbase Server
Component/s: None
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Perry Krug Assignee: Anil Kumar
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
As per some training feedback, it is not immediately intuitive to users that their non-default disk paths for indexes and data are not propagated to nodes as they are added to the cluster. This is somewhat by design so that the paths can be set per-node, but users are not familiar with this.

In order of ease, I would suggest a few possible improvements:
1 - Update the documentation to simply state the best practice here: you need to set the disk paths for a node before adding it to the cluster (a REST sketch of this is below the list). This is minimal effort, but it does not necessarily solve the problem if users don't read every line of the docs.
2 - Add a message on the "add node" screen to say that any non-default disk paths will not be propagated to the node and that they must be set prior to it being added. This helps the most common case of adding a node through the UI, but does not help the CLI or REST API paths.
3 - Expand the "add node" UI interface to allow for applying a non-default disk path when adding the node. This also only covers the UI but not the CLI or REST API.
4 - Change the behavior of ns_server to propagate these disk paths as part of the global configuration. I'm less a fan of this since it changes the existing behavior and may make it harder to have per-node settings here which have been useful in the past.

For #2 and #3, I would argue that improving the UI interface should be the primary goal and that we can cover the difference in the CLI and REST API through documentation on "how to programmatically manage couchbase" (which we do not yet have: http://www.couchbase.com/issues/browse/MB-8105)

Assigning to Anil for triage and prioritization.
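
For the best practice in suggestion 1, a sketch of setting the paths over REST before the node joins (credentials and paths are placeholders; POST /nodes/self/controller/settings is the documented endpoint for data/index paths, assuming the target release supports index_path):
{noformat}
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// Run this against the NEW node, before adding it to the cluster.
	form := url.Values{
		"path":       {"/data/couchbase"},  // placeholder data path
		"index_path": {"/index/couchbase"}, // placeholder index path
	}
	req, _ := http.NewRequest("POST",
		"http://127.0.0.1:8091/nodes/self/controller/settings",
		strings.NewReader(form.Encode()))
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.SetBasicAuth("Administrator", "password") // placeholder credentials
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // expect 200 OK on success
}
{noformat}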

 Comments   
Comment by Amy Kurtzman [ 26/Nov/14 ]
This one got moved to DOC in the mass migration. Moving it back to MB because it is not primarily a doc issue; documentation was only one suggested way of solving the stated problem.




[MB-12824] Indexer can panic with concurrent index build and query Created: 02/Dec/14  Updated: 02/Dec/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Deepkaran Salooja Assignee: Deepkaran Salooja
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   

The indexer can panic when an index build and a query run concurrently.

This needs to be investigated; it appears to happen when a new snapshot is created while a previous snapshot is still being iterated.
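
To make the suspected race concrete, here is a minimal, self-contained Go sketch (all names are hypothetical, not the indexer's actual types): one goroutine iterates a pinned snapshot while another swaps in a new one, and a reference count keeps the old snapshot's native resources alive until its readers are done.
{noformat}
package main

import (
	"fmt"
	"sync"
)

// snapshot stands in for an fdbSnapshot: closing it while an iterator
// is still walking it is what appears to crash inside the cgo layer.
type snapshot struct {
	id   int
	refs sync.WaitGroup // one Add per active reader
}

func (s *snapshot) Close() {
	s.refs.Wait() // block until all readers release their pins
	fmt.Printf("snapshot %d closed safely\n", s.id)
}

type store struct {
	mu   sync.Mutex
	curr *snapshot
}

// newSnapshot swaps in a fresh snapshot and closes the old one,
// mirroring handleCreateSnapshot replacing the snapshot a scan holds.
func (st *store) newSnapshot(id int) {
	st.mu.Lock()
	old := st.curr
	st.curr = &snapshot{id: id}
	st.mu.Unlock()
	if old != nil {
		old.Close() // safe only because Close waits on refs
	}
}

// acquire pins the current snapshot for a reader.
func (st *store) acquire() *snapshot {
	st.mu.Lock()
	defer st.mu.Unlock()
	st.curr.refs.Add(1)
	return st.curr
}

func main() {
	st := &store{curr: &snapshot{id: 0}}
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // the scan side: iterate a pinned snapshot
		defer wg.Done()
		s := st.acquire()
		defer s.refs.Done() // release the pin when iteration ends
		for i := 0; i < 1000; i++ {
			_ = i // stand-in for Iterator.Next()
		}
	}()
	st.newSnapshot(1) // the build side: concurrent snapshot creation
	wg.Wait()
}
{noformat}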

18:36:25.950639 [DEBUG] ForestDBSlice::Snapshot
SliceId 0 IndexInstId 5478591343954277004 Created New Snapshot Index: 5478591343954277004 SliceId: 0 MainSeqNum: 135988 BackSeqNum: 135988 TS: &{default [17028 16946 16988 17017 17071 16971 16874 17093] [257875534504410 177457723786681 278987729258025 147875234258307 246905835951067 176347715256738 87249332395705 267991288171101] [[17028 17028] [16946 16946] [16988 16988] [17017 17017] [17071 17071] [16971 16971] [16874 16874] [17093 17093]]}
18:36:25.950726 [DEBUG] StorageMgr::handleCreateSnapshot
Added New Snapshot Index: 5478591343954277004 PartitionId: 0 SliceId: 0
fatal error: unexpected signal during runtime execution
[signal 0xb code=0x1 addr=0xc pc=0x4fcd0bf]

runtime stack:
runtime: unexpected return pc for runtime.sigpanic called from 0x4fcd0bf
runtime.throw(0x4e31e76)
/usr/local/go/src/pkg/runtime/panic.c:520 +0x69
runtime: unexpected return pc for runtime.sigpanic called from 0x4fcd0bf
runtime.sigpanic()
/usr/local/go/src/pkg/runtime/os_darwin.c:439 +0x3d

goroutine 1592 [syscall]:
runtime.cgocall(0x40017d0, 0x6e99bf0)
/usr/local/go/src/pkg/runtime/cgocall.c:143 +0xe5 fp=0x6e99bd8 sp=0x6e99b90
github.com/couchbaselabs/goforestdb._Cfunc_fdb_iterator_next(0x1428e660, 0xc20a058910, 0xc2089b1b00)
github.com/couchbaselabs/goforestdb/_obj/_cgo_defun.c:252 +0x31 fp=0x6e99bf0 sp=0x6e99bd8
github.com/couchbaselabs/goforestdb.(*Iterator).Next(0xc208adda10, 0xc208b963c0, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbaselabs/goforestdb/iterator.go:49 +0x68 fp=0x6e99c38 sp=0x6e99bf0
github.com/couchbase/indexing/secondary/indexer.(*ForestDBIterator).Next(0xc208ce5de0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_iterator.go:64 +0x3d fp=0x6e99c70 sp=0x6e99c38
github.com/couchbase/indexing/secondary/indexer.(*fdbSnapshot).GetKeySetForKeyRange(0xc209a51bc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_snapshot_reader.go:143 +0x96b fp=0x6e99f20 sp=0x6e99c70
runtime.goexit()
/usr/local/go/src/pkg/runtime/proc.c:1445 fp=0x6e99f28 sp=0x6e99f20
created by github.com/couchbase/indexing/secondary/indexer.(*fdbSnapshot).KeySet
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_snapshot_reader.go:70 +0x1b9

goroutine 16 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*indexer).handleWorkerMsgs(0xc20803e3c0, 0x505fb90, 0xc20a4ca630)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:362 +0x560
github.com/couchbase/indexing/secondary/indexer.(*indexer).run(0xc20803e3c0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:246 +0xe5
github.com/couchbase/indexing/secondary/indexer.NewIndexer(0x8, 0x0, 0x0, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:206 +0xfdb
main.main()
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/main/main.go:41 +0x96

goroutine 19 [finalizer wait]:
runtime.park(0x4014d10, 0x4e53740, 0x4e35e09)
/usr/local/go/src/pkg/runtime/proc.c:1369 +0x89
runtime.parkunlock(0x4e53740, 0x4e35e09)
/usr/local/go/src/pkg/runtime/proc.c:1385 +0x3b
runfinq()
/usr/local/go/src/pkg/runtime/mgc0.c:2644 +0xcf
runtime.goexit()
/usr/local/go/src/pkg/runtime/proc.c:1445

goroutine 20 [IO wait, 3 minutes]:
net.runtime_pollWait(0x50586d8, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc20802c3e0, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc20802c3e0, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).accept(0xc20802c380, 0x4ae99e0, 0x0, 0x50503c8, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:409 +0x343
net.(*UnixListener).AcceptUnix(0xc20803cc80, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/unixsock_posix.go:293 +0x73
net.(*UnixListener).Accept(0xc20803cc80, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/unixsock_posix.go:304 +0x4b
github.com/couchbaselabs/retriever/logger.doHandleConnections(0xc2080540c0, 0x4956c10, 0xa)
/Users/deepkaran/gocode/src/github.com/couchbaselabs/retriever/logger/handle_connections.go:48 +0x286
github.com/couchbaselabs/retriever/logger.handleConnections(0xc2080540c0, 0x4956c10, 0xa)
/Users/deepkaran/gocode/src/github.com/couchbaselabs/retriever/logger/handle_connections.go:22 +0x3b
created by github.com/couchbaselabs/retriever/logger.NewLogger
/Users/deepkaran/gocode/src/github.com/couchbaselabs/retriever/logger/logger.go:107 +0x2a2

goroutine 22 [syscall, 3 minutes]:
os/signal.loop()
/usr/local/go/src/pkg/os/signal/signal_unix.go:21 +0x1e
created by os/signal.init·1
/usr/local/go/src/pkg/os/signal/signal_unix.go:27 +0x32

goroutine 26 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationMgr).run(0xc2080503c0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_manager.go:107 +0x25a
created by github.com/couchbase/indexing/secondary/indexer.NewMutationManager
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_manager.go:89 +0x1e0

goroutine 24 [chan receive, 3 minutes]:
main.dumpOnSignal(0x50a5f78, 0x1, 0x1)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/main/main.go:52 +0xa0
main.dumpOnSignalForPlatform()
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/main/dump.go:19 +0x8e
created by main.main
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/main/main.go:33 +0x2b

goroutine 25 [syscall, 3 minutes]:
syscall.Syscall(0x3, 0x0, 0xc208000bbc, 0x4, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/syscall/asm_darwin_amd64.s:20 +0x5
syscall.read(0x0, 0xc208000bbc, 0x4, 0x4, 0x5040000, 0x0, 0x0)
/usr/local/go/src/pkg/syscall/zsyscall_darwin_amd64.go:952 +0x75
syscall.Read(0x0, 0xc208000bbc, 0x4, 0x4, 0x1, 0x0, 0x0)
/usr/local/go/src/pkg/syscall/syscall_unix.go:136 +0x5c
os.(*File).read(0xc20803a000, 0xc208000bbc, 0x4, 0x4, 0x4029ee3, 0x0, 0x0)
/usr/local/go/src/pkg/os/file_unix.go:190 +0x89
os.(*File).Read(0xc20803a000, 0xc208000bbc, 0x4, 0x4, 0x4, 0x0, 0x0)
/usr/local/go/src/pkg/os/file.go:95 +0x98
github.com/couchbase/indexing/secondary/common.ExitOnStdinClose()
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/common/util.go:289 +0x8d
created by main.main
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/main/main.go:34 +0x3a

goroutine 27 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*kvSender).run(0xc2080b4840)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/kv_sender.go:79 +0x61
created by github.com/couchbase/indexing/secondary/indexer.NewKVSender
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/kv_sender.go:64 +0x28c

goroutine 28 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*timekeeper).run(0xc20804e080)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:119 +0x61
created by github.com/couchbase/indexing/secondary/indexer.NewTimekeeper
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:104 +0x273

goroutine 29 [IO wait]:
net.runtime_pollWait(0x5058628, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc20802c5a0, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc20802c5a0, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).accept(0xc20802c540, 0x4ae99c0, 0x0, 0x50503c8, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:409 +0x343
net.(*TCPListener).AcceptTCP(0xc20803a088, 0x18, 0x0, 0x0)
/usr/local/go/src/pkg/net/tcpsock_posix.go:234 +0x5d
net.(*TCPListener).Accept(0xc20803a088, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/tcpsock_posix.go:244 +0x4b
github.com/couchbase/indexing/secondary/queryport.(*Server).listener(0xc20802c2a0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/queryport/server.go:94 +0x84
created by github.com/couchbase/indexing/secondary/queryport.NewServer
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/queryport/server.go:56 +0x61e

goroutine 30 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).run(0xc208004de0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:284 +0x61
created by github.com/couchbase/indexing/secondary/indexer.NewScanCoordinator
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:274 +0x20a

goroutine 17 [syscall, 3 minutes]:
runtime.goexit()
/usr/local/go/src/pkg/runtime/proc.c:1445

goroutine 31 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationMgr).listenWorkerMsgs(0xc2080503c0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_manager.go:136 +0x3cd
created by github.com/couchbase/indexing/secondary/indexer.(*mutationMgr).run
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_manager.go:102 +0x5e

goroutine 33 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*fdbSlice).Commit(0xc208d88d20, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:468 +0x9b
github.com/couchbase/indexing/secondary/indexer.(*storageMgr).handleCreateSnapshot(0xc20800f540, 0x505fb90, 0xc20a4ca630)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/storage_manager.go:212 +0x99e
github.com/couchbase/indexing/secondary/indexer.(*storageMgr).handleSupvervisorCommands(0xc20800f540, 0x505fb90, 0xc20a4ca630)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/storage_manager.go:146 +0x61
github.com/couchbase/indexing/secondary/indexer.(*storageMgr).run(0xc20800f540)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/storage_manager.go:131 +0x16d
created by github.com/couchbase/indexing/secondary/indexer.NewStorageManager
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/storage_manager.go:109 +0x310

goroutine 34 [IO wait, 3 minutes]:
net.runtime_pollWait(0x5058578, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc20802c610, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc20802c610, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).accept(0xc20802c5b0, 0x4ae99c0, 0x0, 0x50503c8, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:409 +0x343
net.(*TCPListener).AcceptTCP(0xc20803a0c8, 0x413bc23, 0x0, 0x0)
/usr/local/go/src/pkg/net/tcpsock_posix.go:234 +0x5d
net/http.tcpKeepAliveListener.Accept(0xc20803a0c8, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/server.go:1947 +0x4b
net/http.(*Server).Serve(0xc208005020, 0x5058e78, 0xc20803a0c8, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/server.go:1698 +0x91
net/http.(*Server).ListenAndServe(0xc208005020, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/server.go:1688 +0x14d
net/http.ListenAndServe(0x4905bd0, 0x5, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/server.go:1778 +0x79
github.com/couchbase/indexing/secondary/indexer.(*cbqBridge).initCbqBridge(0xc20803d0e0, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/cbq_bridge.go:90 +0x1c2
created by github.com/couchbase/indexing/secondary/indexer.NewCbqBridge
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/cbq_bridge.go:44 +0xbc

goroutine 35 [chan receive, 3 minutes]:
github.com/couchbase/indexing/secondary/indexer.(*cbqBridge).run(0xc20803d0e0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/cbq_bridge.go:62 +0x61
created by github.com/couchbase/indexing/secondary/indexer.NewCbqBridge
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/cbq_bridge.go:47 +0xd4

goroutine 36 [chan receive, 3 minutes]:
github.com/couchbase/indexing/secondary/indexer.(*adminMgr).run(0xc2080010c0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/admin_manager.go:51 +0x61
created by github.com/couchbase/indexing/secondary/indexer.NewAdminManager
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/admin_manager.go:36 +0x76

goroutine 37 [select]:
github.com/couchbase/indexing/secondary/indexer.(*indexer).listenWorkerMsgs(0xc20803e3c0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:274 +0x1e2
created by github.com/couchbase/indexing/secondary/indexer.(*indexer).run
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:238 +0x32

goroutine 38 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*cbqBridge).handleCreate(0xc20803d0e0, 0x5059030, 0xc208a5cc80, 0xc20912c4e0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/cbq_bridge.go:145 +0x507
github.com/couchbase/indexing/secondary/indexer.*cbqBridge.(github.com/couchbase/indexing/secondary/indexer.handleCreate)·fm(0x5059030, 0xc208a5cc80, 0xc20912c4e0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/cbq_bridge.go:85 +0x44
net/http.HandlerFunc.ServeHTTP(0xc2080010d0, 0x5059030, 0xc208a5cc80, 0xc20912c4e0)
/usr/local/go/src/pkg/net/http/server.go:1235 +0x40
net/http.(*ServeMux).ServeHTTP(0xc208026780, 0x5059030, 0xc208a5cc80, 0xc20912c4e0)
/usr/local/go/src/pkg/net/http/server.go:1511 +0x1a3
net/http.serverHandler.ServeHTTP(0xc208005020, 0x5059030, 0xc208a5cc80, 0xc20912c4e0)
/usr/local/go/src/pkg/net/http/server.go:1673 +0x19f
net/http.(*conn).serve(0xc20804e280)
/usr/local/go/src/pkg/net/http/server.go:1174 +0xa7e
created by net/http.(*Server).Serve
/usr/local/go/src/pkg/net/http/server.go:1721 +0x313

goroutine 39 [select]:
github.com/couchbase/indexing/secondary/indexer.(*fdbSlice).handleCommandsWorker(0xc2080940f0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:158 +0x65b
created by github.com/couchbase/indexing/secondary/indexer.NewForestDBSlice
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:78 +0x655

goroutine 40 [select]:
github.com/couchbase/indexing/secondary/indexer.(*fdbSlice).handleCommandsWorker(0xc2080940f0, 0x1)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:158 +0x65b
created by github.com/couchbase/indexing/secondary/indexer.NewForestDBSlice
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:78 +0x655

goroutine 46 [select, 3 minutes]:
github.com/couchbase/indexing/secondary/indexer.(*slabManager).releaseHandler(0xc208077220)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/slab_manager.go:173 +0x1c1
created by github.com/couchbase/indexing/secondary/indexer.NewSlabManager
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/slab_manager.go:82 +0x19d

goroutine 42 [IO wait]:
net.runtime_pollWait(0x5058418, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc20802cc30, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc20802cc30, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).Read(0xc20802cbd0, 0xc2080d0000, 0x1000, 0x1000, 0x0, 0x50503c8, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:232 +0x34c
net.(*conn).Read(0xc20803a120, 0xc2080d0000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/net.go:122 +0xe7
net/http.noteEOFReader.Read(0x5058f10, 0xc20803a120, 0xc208040ec8, 0xc2080d0000, 0x1000, 0x1000, 0x4e456e0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/transport.go:1203 +0x72
net/http.(*noteEOFReader).Read(0xc20803d7e0, 0xc2080d0000, 0x1000, 0x1000, 0xc20804c440, 0x0, 0x0)
<autogenerated>:124 +0xca
bufio.(*Reader).fill(0xc208005680)
/usr/local/go/src/pkg/bufio/bufio.go:97 +0x1b3
bufio.(*Reader).Peek(0xc208005680, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/bufio/bufio.go:132 +0x101
net/http.(*persistConn).readLoop(0xc208040e70)
/usr/local/go/src/pkg/net/http/transport.go:782 +0x95
created by net/http.(*Transport).dialConn
/usr/local/go/src/pkg/net/http/transport.go:600 +0x93f

goroutine 43 [select]:
net/http.(*persistConn).writeLoop(0xc208040e70)
/usr/local/go/src/pkg/net/http/transport.go:885 +0x38f
created by net/http.(*Transport).dialConn
/usr/local/go/src/pkg/net/http/transport.go:601 +0x957

goroutine 47 [IO wait, 3 minutes]:
net.runtime_pollWait(0x50582b8, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc2085cfdb0, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc2085cfdb0, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).accept(0xc2085cfd50, 0x4ae99c0, 0x0, 0x50503c8, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:409 +0x343
net.(*TCPListener).AcceptTCP(0xc20803a3a0, 0x465dae0, 0x0, 0x0)
/usr/local/go/src/pkg/net/tcpsock_posix.go:234 +0x5d
net.(*TCPListener).Accept(0xc20803a3a0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/tcpsock_posix.go:244 +0x4b
github.com/couchbase/indexing/secondary/dataport.listener(0xc2085de480, 0x12, 0x5058d50, 0xc20803a3a0, 0xc2080dad00)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:408 +0x9d
created by github.com/couchbase/indexing/secondary/dataport.NewServer
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:156 +0x5ce

goroutine 48 [chan receive, 3 minutes]:
github.com/couchbase/indexing/secondary/dataport.(*Server).genServer(0xc2085cfce0, 0xc2080dad00)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:220 +0x120
created by github.com/couchbase/indexing/secondary/dataport.NewServer
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:157 +0x5f2

goroutine 49 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).run(0xc2085cfdc0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:122 +0x532
created by github.com/couchbase/indexing/secondary/indexer.CreateMutationStreamReader
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:87 +0x4d0

goroutine 50 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc2085cfdc0, 0x0, 0xc2085cd440)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 51 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc2085cfdc0, 0x1, 0xc2085cd4a0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 52 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc2085cfdc0, 0x2, 0xc2085cd500)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 53 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc2085cfdc0, 0x3, 0xc2085cd560)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 54 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc2085cfdc0, 0x4, 0xc2085cd5c0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 55 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc2085cfdc0, 0x5, 0xc2085cd620)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 56 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc2085cfdc0, 0x6, 0xc2085cd680)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 57 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc2085cfdc0, 0x7, 0xc2085cd6e0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 63 [runnable]:
net.runtime_pollWait(0x5058158, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc208682a70, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc208682a70, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).Read(0xc208682a10, 0xc208896000, 0x6, 0xfa000, 0x0, 0x50503c8, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:232 +0x34c
net.(*conn).Read(0xc20803a580, 0xc208896000, 0x6, 0xfa000, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/net.go:122 +0xe7
github.com/couchbase/indexing/secondary/transport.fullRead(0x505f590, 0xc20803a580, 0xc208896000, 0x6, 0xfa000, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/transport/transport.go:219 +0x134
github.com/couchbase/indexing/secondary/transport.(*TransportPacket).Receive(0xc208894300, 0x505f590, 0xc20803a580, 0x0, 0x0, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/transport/transport.go:142 +0x16c
github.com/couchbase/indexing/secondary/dataport.doReceive(0xc2085de480, 0x12, 0xc2086679e0, 0xfa000, 0xfa0, 0xc2085cd380, 0xc2080dad00)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:476 +0x4a2
created by github.com/couchbase/indexing/secondary/dataport.(*Server).startWorker
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:314 +0x1ab

goroutine 59 [IO wait]:
net.runtime_pollWait(0x5058208, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc20866a7d0, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc20866a7d0, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).Read(0xc20866a770, 0xc208673000, 0x1000, 0x1000, 0x0, 0x50503c8, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:232 +0x34c
net.(*conn).Read(0xc20803a540, 0xc208673000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/net.go:122 +0xe7
net/http.noteEOFReader.Read(0x5058f10, 0xc20803a540, 0xc208073398, 0xc208673000, 0x1000, 0x1000, 0x4e456e0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/transport.go:1203 +0x72
net/http.(*noteEOFReader).Read(0xc208667500, 0xc208673000, 0x1000, 0x1000, 0xc20804df40, 0x0, 0x0)
<autogenerated>:124 +0xca
bufio.(*Reader).fill(0xc208651020)
/usr/local/go/src/pkg/bufio/bufio.go:97 +0x1b3
bufio.(*Reader).Peek(0xc208651020, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/bufio/bufio.go:132 +0x101
net/http.(*persistConn).readLoop(0xc208073340)
/usr/local/go/src/pkg/net/http/transport.go:782 +0x95
created by net/http.(*Transport).dialConn
/usr/local/go/src/pkg/net/http/transport.go:600 +0x93f

goroutine 60 [select]:
net/http.(*persistConn).writeLoop(0xc208073340)
/usr/local/go/src/pkg/net/http/transport.go:885 +0x38f
created by net/http.(*Transport).dialConn
/usr/local/go/src/pkg/net/http/transport.go:601 +0x957

goroutine 241 [select]:
github.com/couchbase/indexing/secondary/indexer.(*fdbSlice).handleCommandsWorker(0xc208d88d20, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:158 +0x65b
created by github.com/couchbase/indexing/secondary/indexer.NewForestDBSlice
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:78 +0x655

goroutine 242 [select]:
github.com/couchbase/indexing/secondary/indexer.(*fdbSlice).handleCommandsWorker(0xc208d88d20, 0x1)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:158 +0x65b
created by github.com/couchbase/indexing/secondary/indexer.NewForestDBSlice
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:78 +0x655

goroutine 1478 [runnable]:
net.runtime_pollWait(0x5058368, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc209d6ebc0, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc209d6ebc0, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).Read(0xc209d6eb60, 0xc209eda000, 0x6, 0xfa000, 0x0, 0x50503c8, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:232 +0x34c
net.(*conn).Read(0xc20803b070, 0xc209eda000, 0x6, 0xfa000, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/net.go:122 +0xe7
github.com/couchbase/indexing/secondary/transport.fullRead(0x505f590, 0xc20803b070, 0xc209eda000, 0x6, 0xfa000, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/transport/transport.go:219 +0x134
github.com/couchbase/indexing/secondary/transport.(*TransportPacket).Receive(0xc208b2a6f0, 0x505f590, 0xc20803b070, 0x0, 0x0, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/transport/transport.go:142 +0x16c
github.com/couchbase/indexing/secondary/dataport.doReceive(0xc2096e8b00, 0x12, 0xc208dafb20, 0xfa000, 0xfa0, 0xc209907260, 0xc2080da680)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:476 +0x4a2
created by github.com/couchbase/indexing/secondary/dataport.(*Server).startWorker
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:314 +0x1ab

goroutine 1474 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc209db9ab0, 0x6, 0xc209907560)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 1465 [IO wait]:
net.runtime_pollWait(0x5057ff8, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc209db9aa0, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc209db9aa0, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).accept(0xc209db9a40, 0x4ae99c0, 0x0, 0x50503c8, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:409 +0x343
net.(*TCPListener).AcceptTCP(0xc20803aba0, 0x465dae0, 0x0, 0x0)
/usr/local/go/src/pkg/net/tcpsock_posix.go:234 +0x5d
net.(*TCPListener).Accept(0xc20803aba0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/tcpsock_posix.go:244 +0x4b
github.com/couchbase/indexing/secondary/dataport.listener(0xc2096e8b00, 0x12, 0x5058d50, 0xc20803aba0, 0xc2080da680)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:408 +0x9d
created by github.com/couchbase/indexing/secondary/dataport.NewServer
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:156 +0x5ce

goroutine 1467 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).run(0xc209db9ab0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:122 +0x532
created by github.com/couchbase/indexing/secondary/indexer.CreateMutationStreamReader
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:87 +0x4d0

goroutine 1503 [select]:
github.com/couchbase/indexing/secondary/indexer.(*flusher).flushQueue(0xc208713190, 0x505a7e0, 0xc209db9960, 0x2, 0xc209bd9c80, 0x8, 0x8, 0x1, 0xc2089f4300, 0xc2089f4360)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:202 +0x543
created by github.com/couchbase/indexing/secondary/indexer.(*flusher).PersistUptoTS
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:101 +0x207

goroutine 1464 [select]:
github.com/couchbase/indexing/secondary/indexer.(*slabManager).releaseHandler(0xc208f4ad70)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/slab_manager.go:173 +0x1c1
created by github.com/couchbase/indexing/secondary/indexer.NewSlabManager
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/slab_manager.go:82 +0x19d

goroutine 1472 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc209db9ab0, 0x4, 0xc2099074a0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 1516 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).dequeueUptoSeqno(0xc209db9960, 0x3, 0x3b1a, 0xc2089f45a0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:150 +0x91
created by github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).DequeueUptoSeqno
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:136 +0x78

goroutine 1519 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).dequeueUptoSeqno(0xc209db9960, 0x6, 0x3a11, 0xc2089f46c0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:150 +0x91
created by github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).DequeueUptoSeqno
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:136 +0x78

goroutine 1471 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc209db9ab0, 0x3, 0xc209907440)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 1473 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc209db9ab0, 0x5, 0xc209907500)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 244 [select, 2 minutes]:
github.com/couchbase/indexing/secondary/indexer.(*slabManager).releaseHandler(0xc208d7bc20)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/slab_manager.go:173 +0x1c1
created by github.com/couchbase/indexing/secondary/indexer.NewSlabManager
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/slab_manager.go:82 +0x19d

goroutine 1466 [chan receive]:
github.com/couchbase/indexing/secondary/dataport.(*Server).genServer(0xc209db99d0, 0xc2080da680)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:220 +0x120
created by github.com/couchbase/indexing/secondary/dataport.NewServer
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/server.go:157 +0x5f2

goroutine 1506 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*flusher).flushSingleVbucketUptoSeqno(0xc208713190, 0x505a7e0, 0xc209db9960, 0x209bd0002, 0x3afa, 0x1, 0x0, 0xc2089f43c0, 0xc208afef60)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:289 +0x203
created by github.com/couchbase/indexing/secondary/indexer.(*flusher).flushQueue
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:185 +0x5b5

goroutine 1470 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc209db9ab0, 0x2, 0xc2099073e0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 1469 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc209db9ab0, 0x1, 0xc209907380)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 1475 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc209db9ab0, 0x7, 0xc2099075c0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 1462 [syscall]:
github.com/couchbaselabs/goforestdb._Cfunc_fdb_get_kv(0x5574490, 0xc20951f730, 0xf, 0xc208d868c0, 0xc2089e4ed0, 0x18)
github.com/couchbaselabs/goforestdb/_obj/_cgo_defun.c:198 +0x31
github.com/couchbaselabs/goforestdb.(*KVStore).GetKV(0xc208df1340, 0xc20951f730, 0xf, 0xf, 0x0, 0x0, 0x0, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbaselabs/goforestdb/kv.go:32 +0xf3
github.com/couchbase/indexing/secondary/indexer.(*fdbSlice).getBackIndexEntry(0xc20929c870, 0xc20951f730, 0xf, 0xf, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:308 +0x224
github.com/couchbase/indexing/secondary/indexer.(*fdbSlice).insert(0xc20929c870, 0xc20a0ce030, 0x25, 0x25, 0xc2096c9500, 0x26, 0x30, 0xc20951f730, 0xf, 0xf, ...)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:202 +0x1fb
github.com/couchbase/indexing/secondary/indexer.(*fdbSlice).handleCommandsWorker(0xc20929c870, 0x1)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:164 +0x320
created by github.com/couchbase/indexing/secondary/indexer.NewForestDBSlice
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:78 +0x655

goroutine 1512 [semacquire]:
sync.runtime_Semacquire(0xc208713174)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/sema.goc:199 +0x30
sync.(*WaitGroup).Wait(0xc208afef60)
/usr/local/go/src/pkg/sync/waitgroup.go:129 +0x14b
github.com/couchbase/indexing/secondary/indexer.func·001()
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:194 +0xdd
created by github.com/couchbase/indexing/secondary/indexer.(*flusher).flushQueue
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:198 +0x27f

goroutine 1507 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*flusher).flushSingleVbucketUptoSeqno(0xc208713190, 0x505a7e0, 0xc209db9960, 0x309bd0002, 0x3b1a, 0x1, 0x0, 0xc2089f43c0, 0xc208afef60)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:289 +0x203
created by github.com/couchbase/indexing/secondary/indexer.(*flusher).flushQueue
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:185 +0x5b5

goroutine 1518 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).dequeueUptoSeqno(0xc209db9960, 0x5, 0x3aef, 0xc2089f4660)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:150 +0x91
created by github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).DequeueUptoSeqno
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:136 +0x78

goroutine 1505 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*flusher).flushSingleVbucketUptoSeqno(0xc208713190, 0x505a7e0, 0xc209db9960, 0x109bd0002, 0x3b15, 0x1, 0x0, 0xc2089f43c0, 0xc208afef60)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:289 +0x203
created by github.com/couchbase/indexing/secondary/indexer.(*flusher).flushQueue
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:185 +0x5b5

goroutine 1468 [select]:
github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startMutationStreamWorker(0xc209db9ab0, 0x0, 0xc209907320)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:295 +0x2d0
created by github.com/couchbase/indexing/secondary/indexer.(*mutationStreamReader).startWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/stream_reader.go:419 +0x9c

goroutine 1502 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.func·007()
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_manager.go:750 +0x1a2
created by github.com/couchbase/indexing/secondary/indexer.(*mutationMgr).persistMutationQueue
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_manager.go:770 +0x26a

goroutine 1509 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*flusher).flushSingleVbucketUptoSeqno(0xc208713190, 0x505a7e0, 0xc209db9960, 0x509bd0002, 0x3aef, 0x1, 0x0, 0xc2089f43c0, 0xc208afef60)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:289 +0x203
created by github.com/couchbase/indexing/secondary/indexer.(*flusher).flushQueue
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:185 +0x5b5

goroutine 1510 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*flusher).flushSingleVbucketUptoSeqno(0xc208713190, 0x505a7e0, 0xc209db9960, 0x609bd0002, 0x3a11, 0x1, 0x0, 0xc2089f43c0, 0xc208afef60)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:289 +0x203
created by github.com/couchbase/indexing/secondary/indexer.(*flusher).flushQueue
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:185 +0x5b5

goroutine 1461 [syscall]:
github.com/couchbaselabs/goforestdb._Cfunc_fdb_set_kv(0x5500200, 0xc208e63120, 0xd, 0xc2096c94a0, 0x22, 0x0)
github.com/couchbaselabs/goforestdb/_obj/_cgo_defun.c:342 +0x31
github.com/couchbaselabs/goforestdb.(*KVStore).SetKV(0xc208df1330, 0xc208e63120, 0xd, 0xd, 0xc2096c94a0, 0x22, 0x30, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbaselabs/goforestdb/kv.go:58 +0xa5
github.com/couchbase/indexing/secondary/indexer.(*fdbSlice).insert(0xc20929c870, 0xc209ff6120, 0x21, 0x21, 0xc2096c94a0, 0x22, 0x30, 0xc208e63120, 0xd, 0xd, ...)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:237 +0x9c1
github.com/couchbase/indexing/secondary/indexer.(*fdbSlice).handleCommandsWorker(0xc20929c870, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:164 +0x320
created by github.com/couchbase/indexing/secondary/indexer.NewForestDBSlice
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/forestdb_slice_writer.go:78 +0x655

goroutine 1588 [select]:
github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).monitorWorkers(0xc208004de0, 0xc208ce5c00, 0xc208a87da0, 0xc208add9f8, 0x1, 0x1, 0x4996bd0, 0x12)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:718 +0x381
github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).scanLocalPartitionEndpoint(0xc208004de0, 0xc209668f60, 0x0, 0xc208adc0a8, 0xc208a87da0, 0xc208ce5a00)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:765 +0x4ca
created by github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).scanPartitions
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:687 +0x45f

goroutine 1513 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).dequeueUptoSeqno(0xc209db9960, 0x0, 0x3ae9, 0xc2089f4480)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:150 +0x91
created by github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).DequeueUptoSeqno
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:136 +0x78

goroutine 1514 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).dequeueUptoSeqno(0xc209db9960, 0x1, 0x3b15, 0xc2089f44e0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:150 +0x91
created by github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).DequeueUptoSeqno
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:136 +0x78

goroutine 1515 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).dequeueUptoSeqno(0xc209db9960, 0x2, 0x3afa, 0xc2089f4540)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:150 +0x91
created by github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).DequeueUptoSeqno
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:136 +0x78

goroutine 1504 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*flusher).flushSingleVbucketUptoSeqno(0xc208713190, 0x505a7e0, 0xc209db9960, 0x9bd0002, 0x3ae9, 0x1, 0x0, 0xc2089f43c0, 0xc208afef60)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:289 +0x203
created by github.com/couchbase/indexing/secondary/indexer.(*flusher).flushQueue
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:185 +0x5b5

goroutine 1517 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).dequeueUptoSeqno(0xc209db9960, 0x4, 0x3b32, 0xc2089f4600)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:150 +0x91
created by github.com/couchbase/indexing/secondary/indexer.(*atomicMutationQueue).DequeueUptoSeqno
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/mutation_queue_atomic.go:136 +0x78

goroutine 1587 [select]:
github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).monitorWorkers(0xc208004de0, 0xc208ce5a00, 0xc2096638c0, 0xc208add9e0, 0x1, 0x1, 0x4952dd0, 0xe)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:718 +0x381
github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).scanPartitions(0xc208004de0, 0xc209668f60, 0xc2092a8000, 0x1, 0x1, 0xc208adc0a8)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:694 +0x50c
created by github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).requestHandler
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:469 +0x95a

goroutine 1584 [runnable]:
github.com/couchbase/indexing/secondary/indexer.(*scanStreamReader).ReadKeyBatch(0xc209a4f680, 0x0, 0x12095d00, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:148 +0xb3
github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).requestHandler(0xc208004de0, 0x48845a0, 0xc20962d940, 0xc2086802c0, 0xc208680840)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:498 +0xa8c
github.com/couchbase/indexing/secondary/indexer.*scanCoordinator.(github.com/couchbase/indexing/secondary/indexer.requestHandler)·fm(0x48845a0, 0xc20962d940, 0xc2086802c0, 0xc208680840)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:261 +0x4e
github.com/couchbase/indexing/secondary/queryport.(*Server).handleConnection(0xc20802c2a0, 0x5058eb0, 0xc208afbce0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/queryport/server.go:137 +0x5cb
created by github.com/couchbase/indexing/secondary/queryport.(*Server).listener
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/queryport/server.go:95 +0xd5

goroutine 1585 [IO wait]:
net.runtime_pollWait(0x50580a8, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc209872fb0, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc209872fb0, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).Read(0xc209872f50, 0xc20a20a000, 0x6, 0xfa000, 0x0, 0x50503c8, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:232 +0x34c
net.(*conn).Read(0xc208afbce0, 0xc20a20a000, 0x6, 0xfa000, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/net.go:122 +0xe7
github.com/couchbase/indexing/secondary/transport.fullRead(0x505f590, 0xc208afbce0, 0xc20a20a000, 0x6, 0xfa000, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/transport/transport.go:219 +0x134
github.com/couchbase/indexing/secondary/transport.(*TransportPacket).Receive(0xc20877cc00, 0x505f590, 0xc208afbce0, 0x0, 0x0, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/transport/transport.go:142 +0x16c
github.com/couchbase/indexing/secondary/queryport.(*Server).doReceive(0xc20802c2a0, 0x5058eb0, 0xc208afbce0, 0xc208680160)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/queryport/server.go:216 +0x337
created by github.com/couchbase/indexing/secondary/queryport.(*Server).handleConnection
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/queryport/server.go:116 +0x12f

goroutine 1589 [semacquire]:
sync.runtime_Semacquire(0xc2092a8128)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/sema.goc:199 +0x30
sync.(*WaitGroup).Wait(0xc208ce5a00)
/usr/local/go/src/pkg/sync/waitgroup.go:129 +0x14b
github.com/couchbase/indexing/secondary/indexer.func·016()
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:711 +0xe8
created by github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).monitorWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:715 +0xe6

goroutine 1508 [chan receive]:
github.com/couchbase/indexing/secondary/indexer.(*flusher).flushSingleVbucketUptoSeqno(0xc208713190, 0x505a7e0, 0xc209db9960, 0x409bd0002, 0x3b32, 0x1, 0x0, 0xc2089f43c0, 0xc208afef60)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:289 +0x203
created by github.com/couchbase/indexing/secondary/indexer.(*flusher).flushQueue
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/flusher.go:185 +0x5b5

goroutine 1586 [runnable]:
syscall.Syscall(0x4, 0x12, 0xc20a7fc300, 0x2c, 0x2c, 0x0, 0x0)
/usr/local/go/src/pkg/syscall/asm_darwin_amd64.s:20 +0x5
syscall.write(0x12, 0xc20a7fc300, 0x2c, 0x60, 0x77, 0x0, 0x0)
/usr/local/go/src/pkg/syscall/zsyscall_darwin_amd64.go:1326 +0x75
syscall.Write(0x12, 0xc20a7fc300, 0x2c, 0x60, 0x6, 0x0, 0x0)
/usr/local/go/src/pkg/syscall/syscall_unix.go:152 +0x5c
net.(*netFD).Write(0xc209872f50, 0xc20a7fc300, 0x2c, 0x60, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_unix.go:317 +0x2ab
net.(*conn).Write(0xc208afbce0, 0xc20a7fc300, 0x2c, 0x60, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/net.go:130 +0xe7
github.com/couchbase/indexing/secondary/transport.(*TransportPacket).Send(0xc209ab5410, 0x505f590, 0xc208afbce0, 0x4868d20, 0xc2087ade80, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/transport/transport.go:122 +0x7ed
github.com/couchbase/indexing/secondary/queryport.func·005(0x4868d20, 0xc2087ade80, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/queryport/server.go:155 +0x16a
github.com/couchbase/indexing/secondary/queryport.(*Server).handleRequest(0xc20802c2a0, 0x5058eb0, 0xc208afbce0, 0xc209ab5410, 0xc2086802c0, 0xc208680160, 0xc208680840)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/queryport/server.go:176 +0x356
created by github.com/couchbase/indexing/secondary/queryport.(*Server).handleConnection
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/queryport/server.go:136 +0x58b

goroutine 1590 [select]:
github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).receiveKeys(0xc208004de0, 0xc209668f60, 0xc208b963c0, 0xc208b96420)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:855 +0x3b2
github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).scanAllQuery(0xc208004de0, 0xc209668f60, 0x505fca8, 0xc209a51bc0, 0xc208b96300)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:844 +0x63
github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).executeLocalScan(0xc208004de0, 0xc209668f60, 0x505fca8, 0xc209a51bc0, 0xc208b96300)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:812 +0x12e
github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).scanLocalSlice(0xc208004de0, 0xc209668f60, 0x5059b38, 0xc2080940f0, 0xc208b96300, 0xc208ce5c00)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:795 +0x20e
created by github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).scanLocalPartitionEndpoint
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:762 +0x443

goroutine 1591 [semacquire]:
sync.runtime_Semacquire(0xc2092a812c)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/sema.goc:199 +0x30
sync.(*WaitGroup).Wait(0xc208ce5c00)
/usr/local/go/src/pkg/sync/waitgroup.go:129 +0x14b
github.com/couchbase/indexing/secondary/indexer.func·016()
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:711 +0xe8
created by github.com/couchbase/indexing/secondary/indexer.(*scanCoordinator).monitorWorkers
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/indexer/scan_coordinator.go:715 +0xe6




[MB-12826] Projector panics if couchbase server endpoint is not available Created: 02/Dec/14  Updated: 02/Dec/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Deepkaran Salooja Assignee: Pratap Chakravarthy
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   

Projector panics if the couchbase server endpoint is not available:

23:47:09.684173 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel *projector.controlStreamRequest &{default 127.0.0.1:12000 15832 0 6 277682785531251 0}
23:47:09.684317 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->7] vrCmdAddEngines
23:47:09.684327 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->7] AddEngine 5478591343954277004
23:47:09.684332 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->7] knows engine 5478591343954277004
23:47:09.684338 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->7] UpdateEndpoints localhost:8100
23:47:09.684342 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->7] knows endpoint localhost:8100
23:47:09.685944 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->7] heartbeat (500ns) loaded ...
23:47:09.685998 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel *projector.controlStreamRequest &{default 127.0.0.1:12000 15832 0 7 43727881282788 0}
23:47:09.686024 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] stream-request rollback: <nil>, success: vbnos <nil> #3dd8
23:47:09.686054 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->1] received snapshot 2 0 (type 4f7)
23:47:09.686072 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->2] received snapshot 2 0 (type 4d4)
23:47:09.686081 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->3] received snapshot 2 0 (type 4e7)
23:47:09.688215 [INFO] [projector([127.0.0.1:12000])] "MAINT_STREAM_TOPIC" feed added ...
23:47:09.688433 [DEBUG] [projector-adminport:localhost:9999] connState for "127.0.0.1:63390" : idle
23:47:09.702946 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->4] received snapshot 2 0 (type 4cf)
23:47:09.703477 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->5] received snapshot 2 0 (type 4d6)
23:47:09.704913 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->6] received snapshot 2 0 (type 4e3)
23:47:09.719175 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->7] received snapshot 2 0 (type 4fa)
00:01:30.726186 Default None Error in receive EOF
2014/12/03 00:01:30 dcp-client: Upr feed from 127.0.0.1:12000 failed: EOF
2014/12/03 00:01:30 been asked to close ...
00:01:30.726228 Default None Exiting send command go routine ...
00:01:30.726235 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] StreamEnd UPR_STREAMEND
00:01:30.726245 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] StreamEnd UPR_STREAMEND
00:01:30.726250 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] StreamEnd UPR_STREAMEND
00:01:30.726255 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] StreamEnd UPR_STREAMEND
00:01:30.726259 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] StreamEnd UPR_STREAMEND
00:01:30.726264 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] StreamEnd UPR_STREAMEND
00:01:30.726270 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] StreamEnd UPR_STREAMEND
00:01:30.726274 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] StreamEnd UPR_STREAMEND
00:01:30.726283 [INFO] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] for "127.0.0.1:12000" ... stopped
2014/12/03 00:01:30 dcp-client: UPR connection lost; reconnecting to bucket "default" in 1s
00:01:30.726326 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->0] StreamEnd for vbucket 0
00:01:30.726332 [INFO] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->0] ... stopped
00:01:30.726741 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel flushed *projector.controlStreamEnd &{default 127.0.0.1:12000 0 0 0}
00:01:30.726905 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel flushed *projector.controlStreamEnd &{default 127.0.0.1:12000 0 0 1}
00:01:30.726919 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel flushed *projector.controlStreamEnd &{default 127.0.0.1:12000 0 0 2}
00:01:30.726931 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel flushed *projector.controlStreamEnd &{default 127.0.0.1:12000 0 0 3}
00:01:30.726942 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel flushed *projector.controlStreamEnd &{default 127.0.0.1:12000 0 0 4}
00:01:30.726954 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel flushed *projector.controlStreamEnd &{default 127.0.0.1:12000 0 0 5}
00:01:30.726966 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel flushed *projector.controlStreamEnd &{default 127.0.0.1:12000 0 0 6}
00:01:30.726978 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel flushed *projector.controlStreamEnd &{default 127.0.0.1:12000 0 0 7}
00:01:30.726992 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] self deleting bucket default
00:01:30.726752 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->1] StreamEnd for vbucket 1
00:01:30.727337 [INFO] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->1] ... stopped
00:01:30.726756 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->2] StreamEnd for vbucket 2
00:01:30.727349 [INFO] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->2] ... stopped
00:01:30.726760 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->3] StreamEnd for vbucket 3
00:01:30.727373 [INFO] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->3] ... stopped
00:01:30.726763 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->4] StreamEnd for vbucket 4
00:01:30.727383 [INFO] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->4] ... stopped
00:01:30.726766 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->5] StreamEnd for vbucket 5
00:01:30.727404 [INFO] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->5] ... stopped
00:01:30.726768 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->6] StreamEnd for vbucket 6
00:01:30.727413 [INFO] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->6] ... stopped
00:01:30.726771 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->7] StreamEnd for vbucket 7
00:01:30.727432 [INFO] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->7] ... stopped
00:01:30.727468 [INFO] [DataportClient:"localhost:8100"] {default,1} ended
00:01:30.727479 [INFO] [DataportClient:"localhost:8100"] {default,2} ended
00:01:30.727486 [INFO] [DataportClient:"localhost:8100"] {default,3} ended
00:01:30.727663 [INFO] [DataportClient:"localhost:8100"] {default,4} ended
00:01:30.727672 [INFO] [DataportClient:"localhost:8100"] {default,5} ended
00:01:30.727699 [INFO] [DataportClient:"localhost:8100"] {default,6} ended
00:01:30.727711 [INFO] [DataportClient:"localhost:8100"] {default,7} ended
00:01:30.727722 [INFO] [DataportClient:"localhost:8100"] {default,0} ended
2014/12/03 00:01:30 Unable to refresh bucket Get http://127.0.0.1:9000/pools/default/buckets/default?bucket_uuid=a63c9d46d038c91a83d565ca55083aed: EOF
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0xe8761]

goroutine 40 [running]:
runtime.panic(0x692060, 0xb2f024)
/usr/local/go/src/pkg/runtime/panic.c:279 +0xf5
github.com/couchbase/indexing/secondary/dcp.Bucket.getConnPools(0xc2080e5050, 0x4, 0xc20809c420, 0x6, 0x6, 0x0, 0x0, 0xc2080e5047, 0x7, 0xc2080e5040, ...)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dcp/pools.go:170 +0xc1
github.com/couchbase/indexing/secondary/dcp.(*UprFeed).connectToNodes(0xc20803a5b0, 0x0, 0x0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dcp/upr.go:233 +0x88
github.com/couchbase/indexing/secondary/dcp.(*UprFeed).run(0xc20803a5b0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dcp/upr.go:218 +0x37e
created by github.com/couchbase/indexing/secondary/dcp.(*Bucket).StartUprFeed
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dcp/upr.go:120 +0x356

goroutine 16 [chan receive, 14 minutes]:
main.main()
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/projector/main/main.go:80 +0x9d3

goroutine 19 [finalizer wait, 14 minutes]:
runtime.park(0x159f0, 0xb331b8, 0xb310e9)
/usr/local/go/src/pkg/runtime/proc.c:1369 +0x89
runtime.parkunlock(0xb331b8, 0xb310e9)
/usr/local/go/src/pkg/runtime/proc.c:1385 +0x3b
runfinq()
/usr/local/go/src/pkg/runtime/mgc0.c:2644 +0xcf
runtime.goexit()
/usr/local/go/src/pkg/runtime/proc.c:1445

goroutine 20 [IO wait, 14 minutes]:
net.runtime_pollWait(0xcb8840, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc20803a220, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc20803a220, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).accept(0xc20803a1c0, 0x887868, 0x0, 0xcb7548, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:409 +0x343
net.(*UnixListener).AcceptUnix(0xc20804c5e0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/unixsock_posix.go:293 +0x73
net.(*UnixListener).Accept(0xc20804c5e0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/unixsock_posix.go:304 +0x4b
github.com/couchbaselabs/retriever/logger.doHandleConnections(0xc20806a0c0, 0x754d70, 0xa)
/Users/deepkaran/gocode/src/github.com/couchbaselabs/retriever/logger/handle_connections.go:48 +0x286
github.com/couchbaselabs/retriever/logger.handleConnections(0xc20806a0c0, 0x754d70, 0xa)
/Users/deepkaran/gocode/src/github.com/couchbaselabs/retriever/logger/handle_connections.go:22 +0x3b
created by github.com/couchbaselabs/retriever/logger.NewLogger
/Users/deepkaran/gocode/src/github.com/couchbaselabs/retriever/logger/logger.go:107 +0x2a2

goroutine 21 [syscall, 14 minutes]:
syscall.Syscall(0x3, 0x0, 0xf61f34, 0x4, 0xc208000930, 0x71fd10, 0x0)
/usr/local/go/src/pkg/syscall/asm_darwin_amd64.s:20 +0x5
syscall.read(0x0, 0xf61f34, 0x4, 0x4, 0x1, 0x0, 0x0)
/usr/local/go/src/pkg/syscall/zsyscall_darwin_amd64.go:952 +0x75
syscall.Read(0x0, 0xf61f34, 0x4, 0x4, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/syscall/syscall_unix.go:136 +0x5c
os.(*File).read(0xc208048000, 0xf61f34, 0x4, 0x4, 0x19efd3, 0x0, 0x0)
/usr/local/go/src/pkg/os/file_unix.go:190 +0x89
os.(*File).Read(0xc208048000, 0xf61f34, 0x4, 0x4, 0xc208002480, 0x0, 0x0)
/usr/local/go/src/pkg/os/file.go:95 +0x98
github.com/couchbase/indexing/secondary/common.ExitOnStdinClose()
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/common/util.go:289 +0x8b
created by main.main
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/projector/main/main.go:78 +0x993

goroutine 22 [chan receive, 14 minutes]:
github.com/couchbase/indexing/secondary/projector.(*Projector).mainAdminPort(0xc20805a140, 0xc2080049c0)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/projector/adminport.go:41 +0x4fc
created by github.com/couchbase/indexing/secondary/projector.NewProjector
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/projector/projector.go:56 +0x795

goroutine 17 [syscall, 14 minutes]:
runtime.goexit()
/usr/local/go/src/pkg/runtime/proc.c:1445

goroutine 23 [IO wait, 14 minutes]:
net.runtime_pollWait(0xcb8790, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc20803a370, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc20803a370, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).accept(0xc20803a310, 0x887848, 0x0, 0xcb7548, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:409 +0x343
net.(*TCPListener).AcceptTCP(0xc208048040, 0x8, 0x0, 0x0)
/usr/local/go/src/pkg/net/tcpsock_posix.go:234 +0x5d
net.(*TCPListener).Accept(0xc208048040, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/tcpsock_posix.go:244 +0x4b
net/http.(*Server).Serve(0xc208004a80, 0xcbfc38, 0xc208048040, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/server.go:1698 +0x91
github.com/couchbase/indexing/secondary/adminport.func·002()
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/adminport/admin_httpd.go:169 +0x120
created by github.com/couchbase/indexing/secondary/adminport.(*httpServer).Start
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/adminport/admin_httpd.go:174 +0x356

goroutine 24 [IO wait, 14 minutes]:
net.runtime_pollWait(0xcb86e0, 0x72, 0x0)
/private/var/folders/00/0sdwh000h01000cxqpysvccm0035qk/T/makerelease530016500/go/src/pkg/runtime/netpoll.goc:146 +0x66
net.(*pollDesc).Wait(0xc20803a3e0, 0x72, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:84 +0x46
net.(*pollDesc).WaitRead(0xc20803a3e0, 0x0, 0x0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:89 +0x42
net.(*netFD).Read(0xc20803a380, 0xc2080d2000, 0x1000, 0x1000, 0x0, 0xcb7548, 0x23)
/usr/local/go/src/pkg/net/fd_unix.go:232 +0x34c
net.(*conn).Read(0xc208048048, 0xc2080d2000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/net.go:122 +0xe7
net/http.(*liveSwitchReader).Read(0xc20805c3a8, 0xc2080d2000, 0x1000, 0x1000, 0x73b290, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/server.go:206 +0xaf
io.(*LimitedReader).Read(0xc208060960, 0xc2080d2000, 0x1000, 0x1000, 0xd43ca, 0x0, 0x0)
/usr/local/go/src/pkg/io/io.go:399 +0xd0
bufio.(*Reader).fill(0xc208004f60)
/usr/local/go/src/pkg/bufio/bufio.go:97 +0x1b3
bufio.(*Reader).ReadSlice(0xc208004f60, 0xa, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/bufio/bufio.go:298 +0x22c
bufio.(*Reader).ReadLine(0xc208004f60, 0x0, 0x0, 0x0, 0x23d00, 0x0, 0x0)
/usr/local/go/src/pkg/bufio/bufio.go:326 +0x69
net/textproto.(*Reader).readLineSlice(0xc2080b50e0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/textproto/reader.go:55 +0x9d
net/textproto.(*Reader).ReadLine(0xc2080b50e0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/textproto/reader.go:36 +0x4e
net/http.ReadRequest(0xc208004f60, 0xc208039790, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/request.go:556 +0xc7
net/http.(*conn).readRequest(0xc20805c380, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/server.go:577 +0x276
net/http.(*conn).serve(0xc20805c380)
/usr/local/go/src/pkg/net/http/server.go:1132 +0x61e
created by net/http.(*Server).Serve
/usr/local/go/src/pkg/net/http/server.go:1721 +0x313

goroutine 31 [select]:
github.com/couchbase/indexing/secondary/projector.(*Feed).genServer(0xc20808e680)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/projector/feed.go:345 +0x132b
created by github.com/couchbase/indexing/secondary/projector.NewFeed
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/projector/feed.go:138 +0xc6d

goroutine 34 [select]:
github.com/couchbase/indexing/secondary/dataport.(*Client).runTransmitter(0xc208090d90, 0xcbfcc0, 0xc208048020, 0xc208050010, 0xc208116000, 0xc208058870)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/client.go:386 +0x82d
created by github.com/couchbase/indexing/secondary/dataport.NewClient
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/client.go:114 +0xaf5

goroutine 35 [select]:
github.com/couchbase/indexing/secondary/dataport.(*Client).genServer(0xc208090d90, 0xc208142000, 0xc208058870)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/client.go:208 +0x5fb
created by github.com/couchbase/indexing/secondary/dataport.NewClient
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/client.go:116 +0xb4b

goroutine 36 [select]:
github.com/couchbase/indexing/secondary/dataport.(*RouterEndpoint).run(0xc208058000, 0xc20803fa00, 0xc20817e000)
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/endpoint.go:182 +0x11e2
created by github.com/couchbase/indexing/secondary/dataport.NewRouterEndpoint
/Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/dataport/endpoint.go:85 +0x7c5
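
The panicking frame is dcp.Bucket.getConnPools (pools.go:170), reached after "Unable to refresh bucket ... EOF" leaves the bucket's pool list empty. A minimal sketch, not the actual dcp package code, of the guard that would turn this into a recoverable error:

package main

import (
	"errors"
	"fmt"
)

type connectionPool struct{ host string }

type bucket struct {
	pools []*connectionPool // left nil/empty after a failed refresh
}

// getConnPool returns the pool for node i, or an error instead of
// panicking when the slice is nil or short (e.g. refresh failed with EOF).
func (b *bucket) getConnPool(i int) (*connectionPool, error) {
	if i < 0 || i >= len(b.pools) || b.pools[i] == nil {
		return nil, errors.New("connection pool unavailable; bucket refresh failed?")
	}
	return b.pools[i], nil
}

func main() {
	var b bucket // no pools: simulates the failed-refresh state
	if _, err := b.getConnPool(0); err != nil {
		fmt.Println("recoverable error instead of panic:", err)
	}
}

The caller (UprFeed.connectToNodes, visible in the trace above) could then retry on the next reconnect tick instead of crashing the projector.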




[MB-12831] cbbackup - better handling of invalid metadata .json files Created: 02/Dec/14  Updated: 02/Dec/14

Status: Open
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0.1
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Brian Williams Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: customer
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
If a full-mode backup is interrupted while writing .json files to disk, the .json files may not be well-formed. If cbbackup is then run in accu mode, it will try to read those files and fail: a Python stack trace is generated and the process hangs until you press Control-C. An improvement would be to handle malformed .json files and exit with an error message instead of hanging.
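
cbbackup itself is Python; the sketch below only illustrates, in Go, the fail-fast pattern the improvement asks for: validate the metadata file up front and exit with a clear error instead of hanging.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// loadMeta reads a metadata .json file and fails fast with a clear error
// if the file is truncated or malformed, rather than proceeding or hanging.
func loadMeta(path string) (map[string]interface{}, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read %s: %w", path, err)
	}
	var meta map[string]interface{}
	if err := json.Unmarshal(data, &meta); err != nil {
		return nil, fmt.Errorf("%s is not well-formed JSON (interrupted backup?): %w", path, err)
	}
	return meta, nil
}

func main() {
	if _, err := loadMeta("meta.json"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // exit with an error message instead of hanging
	}
}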





[MB-12835] ScanAll can give incorrect results with concurrent data load and query Created: 03/Dec/14  Updated: 03/Dec/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Deepkaran Salooja Assignee: Deepkaran Salooja
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   

Steps to repro:

1. Create Bucket default and load 100k docs
2. Create Index idx1

3. querycmd > ./querycmd -type scanAll -index idx1 -limit 0 | wc -l
  100000

4. Load another 50k docs to bucket

5. While load is in progress, query

querycmd > ./querycmd -type scanAll -index idx1 -limit 0 | wc -l
   54412
The count returned is incorrect as it should be > 100k.

6. Once the load finishes, query again

querycmd > ./querycmd -type scanAll -index idx1 -limit 0 | wc -l
  150000

Now the scan results are correct.

A copy of forestdb taken while the load is in progress shows that the storage has > 100k items while the scan returns 54k. This may be a forestdb snapshot iterator issue when data is being added to the storage concurrently.
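
If the snapshot iterator is at fault, the invariant to test is that a scan opened against a snapshot returns exactly that snapshot's item count, regardless of concurrent writes. A minimal sketch of such a check, using an in-memory stand-in rather than the real goforestdb API:

package main

import "fmt"

// memSnapshot is a hypothetical stand-in for a storage snapshot (the real
// goforestdb API differs). Invariant under test: a scan over a snapshot
// sees exactly the items that existed when the snapshot was taken.
type memSnapshot struct{ keys []string }

func (s *memSnapshot) itemCount() int { return len(s.keys) }

func (s *memSnapshot) scan() <-chan string {
	ch := make(chan string)
	go func() {
		for _, k := range s.keys {
			ch <- k
		}
		close(ch)
	}()
	return ch
}

func main() {
	s := &memSnapshot{keys: []string{"d0", "d1", "d2"}}
	n := 0
	for range s.scan() {
		n++
	}
	if n != s.itemCount() {
		fmt.Printf("BUG: scan returned %d items, snapshot holds %d\n", n, s.itemCount())
	} else {
		fmt.Println("scan count matches snapshot count")
	}
}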




[MB-12845] XDCR Built-in time sync Created: 03/Dec/14  Updated: 03/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 120h
Time Spent: Not Specified
Original Estimate: 120h

Epic Link: XDCR next release

 Description   
1. Add time information in replication message
2. Logic to start\stop time synchronization
3. Supportability
   - statistics and logging
4. Dev testing




[MB-12332] Top impacting keys Created: 13/Oct/14  Updated: 08/Dec/14

Status: Open
Project: Couchbase Server
Component/s: memcached
Affects Version/s: 2.5.1, 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Patrick Varley Assignee: Trond Norbye
Resolution: Unresolved Votes: 0
Labels: customer, supportability
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency

 Description   
We have seen cases where a few extremely large documents can have a big impact on a node even with a low number of operations. It would be good to have a list, like the existing "top keys", that tracks how much impact a key has on the node. It could use a very simple calculation such as: size of the document * number of operations.

We could call it the "Top Impacting Keys". This would allow admins to track down rogue documents in the database.
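
A minimal sketch of that scoring; the keyStats record is hypothetical, as memcached's real stats structures differ:

package main

import (
	"fmt"
	"sort"
)

// keyStats is a hypothetical per-key record. Impact uses the simple
// formula from this ticket: document size * number of operations.
type keyStats struct {
	Key  string
	Size uint64 // document size in bytes
	Ops  uint64 // number of operations observed
}

func (k keyStats) impact() uint64 { return k.Size * k.Ops }

// topImpacting returns the n keys with the highest impact score.
func topImpacting(stats []keyStats, n int) []keyStats {
	sort.Slice(stats, func(i, j int) bool { return stats[i].impact() > stats[j].impact() })
	if n > len(stats) {
		n = len(stats)
	}
	return stats[:n]
}

func main() {
	stats := []keyStats{
		{"small-hot", 200, 100000},  // small doc, many ops
		{"huge-cold", 20 << 20, 50}, // 20 MB doc, few ops
		{"medium", 4096, 2000},
	}
	for _, k := range topImpacting(stats, 2) {
		fmt.Printf("%s impact=%d\n", k.Key, k.impact())
	}
}

In the sample data the 20 MB document with only 50 operations outranks the small hot key, which is exactly the case this ticket wants surfaced.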







[MB-12750] XDCR ns_server Integration -- ns_server side Created: 21/Nov/14  Updated: 08/Dec/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Yu Sui Assignee: Artem Stemkovski
Resolution: Unresolved Votes: 0
Labels: None
Σ Remaining Estimate: Not Specified Remaining Estimate: Not Specified
Σ Time Spent: Not Specified Time Spent: Not Specified
Σ Original Estimate: Not Specified Original Estimate: Not Specified

Sub-Tasks:
Key
Summary
Type
Status
Assignee
MB-12841 XDCR ns_server Integration -- XDCR s... Technical task Open Aleksey Kondratenko  
MB-12842 XDCR ns_server Integration -- Forwar... Technical task Open Aleksey Kondratenko  
MB-12843 XDCR ns_server Integration -- XDCR s... Technical task Open Aleksey Kondratenko  
MB-12844 XDCR ns_server Integration -- XDCR r... Technical task Open Aleksey Kondratenko  

 Description   
New XDCR needs to be integrated with ns_server. This work item tracks the work that needs to be done on the ns_server side. The design doc is as follows:
https://drive.google.com/open?id=1ZbAtUWOZgBW1MXBDwQVHkuJDdKIotgFNjS7H6kZ_Nrs&authuser=0

The design doc for the work on the XDCR side is as follows:
https://drive.google.com/open?id=1TCmoBQQWiwn8qIvo2NDm-ySJfmo5JT9Zp7kvAghQz3c&authuser=0

The following are the work items on the XDCR side:
https://www.couchbase.com/issues/browse/MB-12586
https://www.couchbase.com/issues/browse/MB-12720
https://www.couchbase.com/issues/browse/MB-12022




XDCR ns_server Integration -- ns_server side (MB-12750)

[MB-12841] XDCR ns_server Integration -- XDCR startup and babysitting Created: 03/Dec/14  Updated: 03/Dec/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Technical task Priority: Major
Reporter: Yu Sui Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
ns_server needs to:
1. call goxdcr to convert pre-sherlock metadata during upgrade
2. start goxdcr after conversion is done
3. babysit goxdcr once it is started

Details can be found in section I of the following design doc:
https://docs.google.com/document/d/1ZbAtUWOZgBW1MXBDwQVHkuJDdKIotgFNjS7H6kZ_Nrs/edit

The corresponding work on XDCR side has been completed.
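
For step 3, a minimal babysitting loop looks like the sketch below (illustrative only: ns_server's actual supervisor is Erlang, and the goxdcr binary path is an assumption):

package main

import (
	"log"
	"os/exec"
	"time"
)

// babysit starts the given process and restarts it whenever it exits.
func babysit(path string, args ...string) {
	for {
		cmd := exec.Command(path, args...)
		log.Printf("starting %s", path)
		if err := cmd.Run(); err != nil {
			log.Printf("%s exited: %v", path, err)
		}
		time.Sleep(time.Second) // back off before restarting
	}
}

func main() {
	babysit("/opt/couchbase/bin/goxdcr") // path is an assumption
}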




XDCR ns_server Integration -- ns_server side (MB-12750)

[MB-12844] XDCR ns_server Integration -- XDCR replications in task list Created: 03/Dec/14  Updated: 03/Dec/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Technical task Priority: Major
Reporter: Yu Sui Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
When building the task list for REST requests on /pools/default/tasks, ns_server needs to retrieve XDCR replications from the gometa service. Details can be found in section IV of the following design doc:
https://docs.google.com/document/d/1ZbAtUWOZgBW1MXBDwQVHkuJDdKIotgFNjS7H6kZ_Nrs/edit




XDCR ns_server Integration -- ns_server side (MB-12750)

[MB-12843] XDCR ns_server Integration -- XDCR statistics collection Created: 03/Dec/14  Updated: 03/Dec/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Technical task Priority: Major
Reporter: Yu Sui Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
ns_server needs to collect XDCR statistics by calling the goxdcr REST API and process them. Details can be found in section III of the following design doc:
https://docs.google.com/document/d/1ZbAtUWOZgBW1MXBDwQVHkuJDdKIotgFNjS7H6kZ_Nrs/edit
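
A minimal sketch of the collection side; the stats URL and the flat JSON response shape are assumptions for illustration:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// fetchStats pulls replication statistics from a goxdcr REST endpoint.
func fetchStats(url string) (map[string]interface{}, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var stats map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
		return nil, err
	}
	return stats, nil
}

func main() {
	stats, err := fetchStats("http://127.0.0.1:9998/stats") // URL is an assumption
	if err != nil {
		fmt.Println("stats fetch failed:", err)
		return
	}
	for k, v := range stats {
		fmt.Printf("%s = %v\n", k, v)
	}
}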




XDCR ns_server Integration -- ns_server side (MB-12750)

[MB-12842] XDCR ns_server Integration -- Forwarding of rest requests to goxdcr Created: 03/Dec/14  Updated: 03/Dec/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Technical task Priority: Major
Reporter: Yu Sui Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
ns_server needs to forward XDCR REST requests to goxdcr. Details can be found in section II of the following design doc:
https://docs.google.com/document/d/1ZbAtUWOZgBW1MXBDwQVHkuJDdKIotgFNjS7H6kZ_Nrs/edit
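
A minimal sketch of such forwarding with Go's standard reverse proxy; the goxdcr admin port and the forwarded paths are illustrative, not the real routing table:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// goxdcr admin address is an assumption for illustration.
	target, err := url.Parse("http://127.0.0.1:9998")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	mux := http.NewServeMux()
	// Forward XDCR REST endpoints to goxdcr; everything else would be
	// handled by ns_server itself (omitted here).
	mux.Handle("/controller/createReplication", proxy)
	mux.Handle("/settings/replications/", proxy)

	log.Fatal(http.ListenAndServe(":8091", mux))
}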




[MB-12301] what is expected behaviour for insert statement with select which returns multiple keys? Created: 06/Oct/14  Updated: 08/Dec/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 2h
Time Spent: Not Specified
Original Estimate: 2h

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
For example, the query:
insert into b0 key 'd51' select `1` from b0 keys ['d0', 'd3'];

The SELECT returns 2 items; what is expected to be in the d51 doc? Only the value of the first SELECT result? Both of them? It is a little confusing.

Actual behaviour: d51 is created with the value of the first key returned.




[MB-12879] cluster_run command failing with "Invalid cluster info" in projector Created: 09/Dec/14  Updated: 09/Dec/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Prathibha Bisarahalli Assignee: Pratap Chakravarthy
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Yes

 Description   
After syncing to the latest code as of 12/9, doing ns_server setup by running cluster_run leads to the following crash:

[user:info,2014-12-09T13:32:14.165,n_0@127.0.0.1:<0.158.0>:ns_log:crash_consumption_loop:70]Port server projector on node 'babysitter_of_n_0@127.0.0.1' exited with status 2. Restarting. Messages: goroutine 24 [select]:
net/http.(*persistConn).writeLoop(0xc2080429a0)
/usr/local/Cellar/go/1.3.1/libexec/src/pkg/net/http/transport.go:885 +0x38f
created by net/http.(*Transport).dialConn
/usr/local/Cellar/go/1.3.1/libexec/src/pkg/net/http/transport.go:601 +0x957
[ns_server:info,2014-12-09T13:32:14.165,babysitter_of_n_0@127.0.0.1:<0.293.0>:ns_port_server:log:181]projector<0.293.0>: 13:32:14.163916 [ERROR] [projector([])] cluster-info: No pool named default
projector<0.293.0>: panic: Invalid cluster info
projector<0.293.0>:
projector<0.293.0>: goroutine 16 [running]:
projector<0.293.0>: runtime.panic(0x4c0120, 0xc20802af60)
projector<0.293.0>: /usr/local/Cellar/go/1.3.1/libexec/src/pkg/runtime/panic.c:279 +0xf5
projector<0.293.0>: github.com/couchbase/indexing/secondary/common.ClusterInfoCache.GetCurrentNode(0xc20806a300, 0x15, 0x6d97b0, 0x7, 0x0, 0x0, 0x0, 0xc208026700, 0x0, 0x0, ...)
projector<0.293.0>: /Users/prathibha-mac/Documents/source/cb/goproj/src/github.com/couchbase/indexing/secondary/common/cluster_info.go:117 +0xf6
projector<0.293.0>: github.com/couchbase/indexing/secondary/projector.NewProjector(0x400, 0xc208023b30, 0xc20808fd78)
projector<0.293.0>: /Users/prathibha-mac/Documents/source/cb/goproj/src/github.com/couchbase/indexing/secondary/projector/projector.go:58 +0x745
projector<0.293.0>: main.main()
projector<0.293.0>: /Users/prathibha-mac/Documents/source/cb/goproj/src/github.com/couchbase/indexing/secondary/projector/main/main.go:78 +0x823
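
The panic originates in common.ClusterInfoCache.GetCurrentNode when the "default" pool does not exist yet. A minimal sketch, using a stand-in type rather than the real ClusterInfoCache, of surfacing this as a retryable error instead of panicking:

package main

import (
	"errors"
	"fmt"
)

// clusterInfo is a simplified stand-in for the cluster-info cache.
type clusterInfo struct {
	pools map[string][]string // pool name -> node list
}

var errNoPool = errors.New("cluster-info: no pool named default (ns_server still initialising?)")

func (c *clusterInfo) currentNode() (string, error) {
	nodes, ok := c.pools["default"]
	if !ok || len(nodes) == 0 {
		return "", errNoPool // retryable error, not a panic
	}
	return nodes[0], nil
}

func main() {
	ci := &clusterInfo{pools: map[string][]string{}} // pool not created yet
	if _, err := ci.currentNode(); err != nil {
		fmt.Println("retryable error instead of panic:", err)
	}
}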




[MB-12881] Config file for consistently applying environment variables Created: 09/Dec/14  Updated: 09/Dec/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 2.5.1, 3.0.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Brent Woodruff Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
See MB-10246 for similar thoughts.

It is, at times, necessary to modify Couchbase by setting environment variables before starting or restarting the couchbase-server service. In some environments it can be challenging to configure the environment variables appropriately, since simply performing `export VAR=value` and restarting the service will not apply the variable. Why this is, I am not certain, but I believe it has to do with the daemon forking and the 'couchbase' user account being used.

Instead, users who need to set environment variables must be instructed to edit the `/opt/couchbase/bin/couchbase-server` script and place the `export` statements in, for example, the `_start` function itself.

Customers have requested that we provide a configuration file for managing the environment variables. This is analogous to an /etc/default file for the couchbase service.

This ticket differs from MB-10246 in that MB-10246 has a wider scope, beyond environment variables and things that could be done simply by sourcing a shell file.




[MB-12880] Tests using go-couchbase fail with "bind: address already in use" Created: 09/Dec/14  Updated: 09/Dec/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Prathibha Bisarahalli Assignee: Prathibha Bisarahalli
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
2i tests intermittently fail with the error: "Failed to listen listen unix /tmp/log_upr_client.sock: bind: address already in use"

As a workaround, I am currently running `rm -rf /tmp/log_upr_client.sock` in test setup before execution.
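
The error indicates a socket file left behind by a previous run. A minimal sketch of binding defensively (a robust version would first dial the socket to confirm no live process still owns it):

package main

import (
	"errors"
	"fmt"
	"net"
	"os"
)

const sockPath = "/tmp/log_upr_client.sock"

// listenUnix removes a stale socket file left by a previous run before
// binding, avoiding "bind: address already in use".
func listenUnix(path string) (net.Listener, error) {
	if err := os.Remove(path); err != nil && !errors.Is(err, os.ErrNotExist) {
		return nil, err
	}
	return net.Listen("unix", path)
}

func main() {
	l, err := listenUnix(sockPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer l.Close()
	fmt.Println("listening on", sockPath)
}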

Please also re-assign it to the right Component/s.

 Comments   
Comment by Sriram Melkote [ 09/Dec/14 ]
Prathibha, can you please include:

(a) Steps to reproduce the issue (even if intermittent)
(b) Logs or console output showing the problem (and the history leading to it)

Thanks!




[MB-12014] XDCR@next release - Pipeline Runtime: Checkpoint Manager Created: 19/Aug/14  Updated: 11/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint5_xdcr
Remaining Estimate: 56h
Time Spent: Not Specified
Original Estimate: 56h

Epic Link: XDCR next release

 Comments   
Comment by Xiaomei Zhang [ 11/Dec/14 ]
1. checkpoint manager
2. related statistics
https://docs.google.com/spreadsheets/d/12VIymLoYjaXVYWLJPcthcR4zpmCiW4hotYAgoOdJzQ4/edit#gid=0&fvid=1913057




[MB-12011] XDCR@next release - Parts: CAPI nozzle Created: 19/Aug/14  Updated: 11/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint5_xdcr
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12590] Metadata (Replication specification and checkpoint) migration Created: 07/Nov/14  Updated: 11/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint5_xdcr
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release

 Description   
If a replication specification was created in an earlier version, its metadata (replication specification, settings, and checkpoint document) is saved in the local db. That metadata needs to be migrated to the new format and saved in the distributed metadata store. The following is the migration process (a sketch follows the list):
1. xdc_rdoc_replication_srv (the Erlang code which manages the replication specification and settings) stays in Erlang and runs in ns_server.
2. When the Replication Manager starts, it calls the Erlang XDCR REST interface to find all the old replication metadata. It migrates it to the new format and saves it in the distributed metadata store.
3. The Replication Manager remembers that the migration has been done, so the next time it restarts, the migration is skipped.
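
A minimal sketch of that flow; every type and function here is a hypothetical stand-in, not the real goxdcr or gometa API:

package main

import "fmt"

type oldSpec struct{ ID, Source, Target string }

// store is a hypothetical stand-in for the distributed metadata store.
type store struct {
	migrated bool
	specs    map[string]oldSpec
}

// fetchLegacySpecs would call the Erlang XDCR REST interface (step 2);
// here it just returns canned data.
func fetchLegacySpecs() ([]oldSpec, error) {
	return []oldSpec{{"r1", "default", "remote/default"}}, nil
}

// migrate: skip if already done (step 3), else fetch, convert, and save (step 2).
func migrate(s *store) error {
	if s.migrated {
		return nil
	}
	specs, err := fetchLegacySpecs()
	if err != nil {
		return err
	}
	for _, sp := range specs {
		s.specs[sp.ID] = sp // "convert to new format" elided
	}
	s.migrated = true // remembered so restarts skip the migration
	return nil
}

func main() {
	s := &store{specs: map[string]oldSpec{}}
	if err := migrate(s); err != nil {
		fmt.Println("migration failed:", err)
		return
	}
	fmt.Printf("migrated %d replication spec(s)\n", len(s.specs))
}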





[MB-12587] Optimistic Replication Support in Xmem Created: 07/Nov/14  Updated: 11/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint5_xdcr
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release

 Description   
Support Optimistic Replication and Pessimistic Replication in Xmem Nozzle
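
A minimal sketch of the send-path decision; the types are illustrative, and the 256-byte cutoff mirrors the commonly documented xdcrOptimisticReplicationThreshold default:

package main

import "fmt"

// Small documents are sent without first checking metadata at the target
// (optimistic); larger ones do a getMeta round-trip first (pessimistic).
const optimisticThreshold = 256 // bytes; an assumption for illustration

type doc struct {
	Key  string
	Size int
}

func sendOptimistically(d doc) bool {
	return d.Size < optimisticThreshold
}

func main() {
	for _, d := range []doc{{"counter", 64}, {"profile", 4096}} {
		if sendOptimistically(d) {
			fmt.Printf("%s: optimistic set (skip getMeta)\n", d.Key)
		} else {
			fmt.Printf("%s: pessimistic path (getMeta first)\n", d.Key)
		}
	}
}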

 Comments   
Comment by Xiaomei Zhang [ 11/Dec/14 ]
This also includes to implement the related statistics - https://docs.google.com/spreadsheets/d/12VIymLoYjaXVYWLJPcthcR4zpmCiW4hotYAgoOdJzQ4/edit#gid=0&fvid=1913057




[MB-12025] XDCR@next release - Integration Testing and Performance Tuning #2 Created: 19/Aug/14  Updated: 11/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint5_xdcr
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12856] Audit XDCR replication specification and settings changes Created: 04/Dec/14  Updated: 11/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint5_xdcr
Remaining Estimate: 24h
Time Spent: Not Specified
Original Estimate: 24h

Epic Link: XDCR next release

 Description   
Audit design spec at
https://docs.google.com/document/d/1av0CX08HGNW5-CBVWEykFiSooaXiGw6jdhHr3OUEzyI
https://docs.google.com/document/d/1Dsi3v6CIv-e7pQ5Kqth5PkTWzmG1o3vYU2pijmOJlS8

At a high level:
1. ns_server forwards the REST requests for CreateReplication, PauseReplication, CancelReplication, and replication setting changes to goxdcr
2. goxdcr calls cbauth to find out information about the user who made the request
3. goxdcr calls cbauth to get the memcached user (who should have the privilege to write to the audit log)
4. goxdcr calls the audit memcached command to audit the action (see the sketch after this list)
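
A minimal sketch of steps 2-4; all names are hypothetical stand-ins, since the real cbauth and memcached audit APIs differ:

package main

import "fmt"

type auditEvent struct {
	Action string // e.g. "CreateReplication"
	User   string
}

// requestUser stands in for a cbauth lookup of the requesting user (step 2).
func requestUser(authHeader string) (string, error) {
	if authHeader == "" {
		return "", fmt.Errorf("unauthenticated request")
	}
	return "admin", nil
}

// writeAudit stands in for the memcached audit command (steps 3-4).
func writeAudit(ev auditEvent) error {
	fmt.Printf("AUDIT %s by %s\n", ev.Action, ev.User)
	return nil
}

func auditXDCRAction(action, authHeader string) error {
	user, err := requestUser(authHeader)
	if err != nil {
		return err
	}
	return writeAudit(auditEvent{Action: action, User: user})
}

func main() {
	_ = auditXDCRAction("CreateReplication", "Basic ...")
}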





[MB-12828] Projector unable to process dcp rollback message Created: 02/Dec/14  Updated: 12/Dec/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Deepkaran Salooja Assignee: Pratap Chakravarthy
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   

The error below is logged by go-couchbase for a rollback message:


12:08:41.863724 Default None UPR_STREAMREQ for vb 1 successful
12:08:41.894581 Default None UPR_STREAMREQ for vb 2 successful
12:08:41.894596 Default None UPR_STREAMREQ for vb 3 successful
12:08:41.894616 Default None UPR_STREAMREQ with rollback 0 for vb 4 Failed: invalid rollback []
12:08:41.894627 Default None UPR_STREAMREQ for vb 5 successful
12:08:41.894637 Default None UPR_STREAMREQ for vb 6 successful
12:08:41.894651 Default None UPR_STREAMREQ for vb 7 successful

and the following panic can be seen in the projector logs:

12:08:41.894906 [INFO] [localhost:9999->MAINT_STREAM_TOPIC] back channel *projector.controlStreamRequest &{default 127.0.0.1:12000 12928 0 2 165261236543092 0}
12:08:41.894913 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->3] vrCmdAddEngines
12:08:41.894917 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->3] AddEngine 5478591343954277004
12:08:41.894920 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->3] knows engine 5478591343954277004
12:08:41.894924 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->3] UpdateEndpoints localhost:8100
12:08:41.894927 [DEBUG] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000->3] knows endpoint localhost:8100
12:08:41.894943 [ERROR] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] runScatter() crashed: couchbase.errorInvalidLog
12:08:41.895931 [ERROR] /Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/projector/kvdata.go:139 (0x9319f)
12:08:41.895943 [ERROR] func.006: c.StackTrace(string(debug.Stack()))
12:08:41.895948 [ERROR] /usr/local/go/src/pkg/runtime/panic.c:248 (0x112ed)
12:08:41.895953 [ERROR] panic: runtime·newstackcall(d->fn, (byte*)d->args, d->siz);
12:08:41.895957 [ERROR] /Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/projector/kvdata.go:240 (0x87030)
12:08:41.895962 [ERROR] (*KVData).scatterMutation: panic(err)
12:08:41.895968 [ERROR] /Users/deepkaran/gocode/src/github.com/couchbase/indexing/secondary/projector/kvdata.go:156 (0x85de7)
12:08:41.895971 [ERROR] (*KVData).runScatter: kvdata.scatterMutation(m, ts)
12:08:41.895974 [ERROR] /usr/local/go/src/pkg/runtime/proc.c:1445 (0x15b10)
12:08:41.895977 [ERROR] goexit: runtime·goexit(void)
12:08:41.895980 [ERROR]
12:08:41.895996 [INFO] [MAINT_STREAM_TOPIC->default->127.0.0.1:12000] for "127.0.0.1:12000" ... stopped



 Comments   
Comment by Deepkaran Salooja [ 12/Dec/14 ]
Projector should use the vbuuid from the successful stream request to compose the mutation messages. Right now it is sending the vbuuid from the MutationStreamRequest sent by the indexer, which is incorrect behaviour in the rollback case.
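
A small sketch of the suggested fix; the types and field names are illustrative, not the projector's actual ones:

package projector

// streamResponse models the reply to a successful DCP stream request,
// whose vbuuid is authoritative after a rollback.
type streamResponse struct {
    VBucket uint16
    VBuuid  uint64
}

type mutationMsg struct {
    VBucket uint16
    VBuuid  uint64
    Seqno   uint64
}

// composeMutation stamps the mutation with the vbuuid returned by the
// stream request, not the vbuuid the indexer originally asked for.
func composeMutation(resp streamResponse, seqno uint64) mutationMsg {
    return mutationMsg{
        VBucket: resp.VBucket,
        VBuuid:  resp.VBuuid,
        Seqno:   seqno,
    }
}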




[MB-12888] XDCR replication to ES stops with CB 2.5 patched Created: 10/Dec/14  Updated: 12/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Olivier Bierlaire Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Amazon Linux

Attachments: Text File manifest.txt     Text File manifest_update.txt     XML File manifest.xml     File xdcr.14    
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
I'm using Couchbase 2.5.1-1083-rel-enterprise with Maintenance Patch 1.
I'm trying to use XDCR to replicate my bucket to Elasticsearch 1.3.6 with ES transport plugin 2.0.
Before the Maintenance Patch, replication was going well, but with the patch and the same data it fails.

I have 46943 documents in my couchbase bucket, and only 31951 have been replicated to Elasticsearch, with a lot of error logs in the XDCR console:
"2014-12-10 20:16:46 [Vb Rep] Error replicating vbucket 643. Please see logs for details."
No logs have been generated in xdcr_error.log, and nothing suspicious appears in the others.

I also did a couple of other tests:
- Couchbase patched to Elasticsearch: replication stops after 3/4 of the docs
- Couchbase patched to another patched Couchbase cluster: replication stops after 15000 docs (1/4 of the docs)
- Couchbase patched to UNPATCHED Couchbase: all docs replicated
- Couchbase unpatched to Elasticsearch: all docs replicated

I replicated those examples several times with exactly the same results (the same number of documents replicated).




 Comments   
Comment by Aleksey Kondratenko [ 10/Dec/14 ]
Without logs I cannot say anything.

Passing this to ES plugin maintainer. This is likely easily reproducible.
Comment by Marty Schoch [ 10/Dec/14 ]
1. How do I find out more information about the patch? I have no knowledge of such a thing.
2. It is odd that no error is in xdcr_error.log. In my experience, any time we see a vbucket error in the UI there is an explanatory message in the logs.
3. Can you provide the Elasticsearch logs?
4. How many nodes are in the Couchbase and Elasticsearch clusters? The observation of 3/4 and 1/4 of documents replicating in some cases would make sense if one or both of the clusters has 4 nodes.

Comment by Marty Schoch [ 10/Dec/14 ]
OK, I have read the changelog for the patch and don't see anything obvious.

Also, Aleksey, did you see that it also claims to be reproducible without Elasticsearch? (The 2nd bullet in the list is one patched Couchbase cluster to another.)

Obviously in all cases logs are needed to move this forward.
Comment by Aleksey Kondratenko [ 10/Dec/14 ]
My bad, Marty. I assumed it was an ES-only bug, but the words above indicate that is not the case.

We'll need collectinfos to deal with that case.
Comment by Olivier Bierlaire [ 10/Dec/14 ]
The environment is the following:
- the main Couchbase cluster has 3 nodes, all of them patched 2.5.1
- the Elasticsearch cluster has 3 nodes with Elasticsearch 1.3 and Couchbase Transport plugin 2.0
- a cluster of one unpatched 2.5.1 Couchbase node
- a cluster of one patched 2.5.1 Couchbase node

And yes, I can confirm that I tried many times to reproduce the issue between two patched Couchbase clusters. We discovered the issue between Couchbase and Elasticsearch, but we can definitely reproduce it between Couchbase clusters, and the issue does not occur between an unpatched cluster and ES.

Unfortunately I needed to delete my test machines, and I cannot have more time allocated to this issue due to very tight project constraints at the moment, so I no longer have logs for those.
For the 2.5.1 patched cluster I do have logs, but definitely NO xdcr_error logs at all. Before, after, and during the replication this log was not changing at all (any of xdcr_error.X).
For the xdcr logs, I attached a sample where a replication was set up between the cluster and ES and then between two CB clusters. As you can see, there is absolutely no error.

I think you can definitely reproduce this issue, as I reproduced it many times yesterday.

Sorry I cannot help further; I have now rolled back the patched cluster to the unpatched version.

Comment by Aleksey Kondratenko [ 10/Dec/14 ]
If that's an issue that unconditionally happens between any 2.5.1 + MP1 instances, I'd be highly surprised, given that we were supposed to test it.

In order to double-check I need _at least_ the precise version of the code you're running (that can be communicated by attaching a copy of the manifest.xml file), and ideally cbcollectinfos. In fact, in MB-land I have every reason to bounce tickets that don't have any diagnostic info attached, because who knows, maybe the error is due to some specific setup that you have and not due to the code.
Comment by Olivier Bierlaire [ 10/Dec/14 ]
No problem. I attached the manifest.xml.
For the collectinfo output, I uploaded it here:
 https://www.dropbox.com/s/2lmu4tpkqjn9w77/collectinfo.zip?dl=0
Password: satr6qUw

Please note that we are currently rolling back our cluster, which means for now we have 6 nodes (3 patched, 3 unpatched) currently rebalancing.

Comment by Aleksey Kondratenko [ 10/Dec/14 ]
I'll need collectinfo from node .85. Other nodes seem to have no issues with xdcr (reported xdcr status is complete, idle and no errors).
Comment by Olivier Bierlaire [ 11/Dec/14 ]
Sorry for the late reply.
I just pushed the logs of node .85 on dropbox at this address:
https://www.dropbox.com/s/quckduld16zuzhm/collectinfo_10-200-158-85.zip?dl=0
Password: cVkcRnRw
Comment by Aleksey Kondratenko [ 12/Dec/14 ]
It appears that node .85 thinks the remote cluster has a different set of nodes than it should. Possibly due to MB-11295.

Here's how you can fix the problem: stop the service on node .85, delete /opt/couchbase/var/lib/couchbase/remote_clusters_cache_v3, and restart the service. XDCR should recover.




[MB-12024] XDCR@next release - Logging and diagnostics Created: 19/Aug/14  Updated: 12/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12585] ns_server integration testing Created: 07/Nov/14  Updated: 12/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint5_xdcr
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release




[MB-12910] XMEM SSL support Created: 12/Dec/14  Updated: 12/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint5_xdcr
Remaining Estimate: 16h
Time Spent: Not Specified
Original Estimate: 16h


 Description   
%% SSL proxy handshake: send a JSON payload describing the proxy target
%% (plus bucket credentials when present), send the certificate, then
%% expect a reply of type "ok".
establish_ssl_proxy_connection(Socket, Host, Port, ProxyPort, CertPEM, Bucket, Password) ->
    %% Base payload: where the proxy should connect.
    Payload0 = [{proxyHost, list_to_binary(Host)},
                {proxyPort, ProxyPort},
                {port, Port}],
    %% Prepend bucket credentials when a bucket is given.
    Payload = case Bucket of
                  undefined ->
                      Payload0;
                  _ ->
                      [{bucket, list_to_binary(Bucket)},
                       {password, list_to_binary(Password)}
                       | Payload0]
              end,
    send_json(Socket, {Payload}),
    send_cert(Socket, CertPEM),
    Reply = receive_json(Socket, infinity),
    %% Crash unless the proxy acknowledged the handshake.
    <<"ok">> = proplists:get_value(<<"type">>, Reply),
    ok.
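
A rough Go sketch of the equivalent client-side handshake goxdcr would need; the send/receive framing is assumed, not verified against the proxy's wire protocol:

package xmem

import (
    "encoding/json"
    "fmt"
    "net"
)

type proxyPayload struct {
    ProxyHost string `json:"proxyHost"`
    ProxyPort int    `json:"proxyPort"`
    Port      int    `json:"port"`
    Bucket    string `json:"bucket,omitempty"`
    Password  string `json:"password,omitempty"`
}

func establishSSLProxyConnection(conn net.Conn, host string, port, proxyPort int,
    certPEM []byte, bucket, password string) error {
    payload := proxyPayload{ProxyHost: host, ProxyPort: proxyPort, Port: port,
        Bucket: bucket, Password: password}
    if err := sendJSON(conn, payload); err != nil {
        return err
    }
    if err := sendCert(conn, certPEM); err != nil {
        return err
    }
    reply, err := receiveJSON(conn)
    if err != nil {
        return err
    }
    // Mirrors the <<"ok">> match in the Erlang version above.
    if reply["type"] != "ok" {
        return fmt.Errorf("proxy handshake failed: %v", reply)
    }
    return nil
}

// Framing helpers, stubbed: the actual encoding is whatever
// send_json/send_cert/receive_json implement on the Erlang side.
func sendJSON(c net.Conn, v interface{}) error { _, err := json.Marshal(v); return err }
func sendCert(c net.Conn, pem []byte) error    { return nil }
func receiveJSON(c net.Conn) (map[string]interface{}, error) {
    return map[string]interface{}{"type": "ok"}, nil
}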





[MB-12370] No reloc.sh in opt/couchbase/bin/install/reloc.sh Created: 17/Oct/14  Updated: 10/Dec/14

Status: Open
Project: Couchbase Server
Component/s: build
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Juan Luis Pérez Assignee: Steve Yen
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Operating System: Ubuntu 64-bit
Is this a Regression?: No

 Description   
I downloaded Couchbase 3.0.0 for Ubuntu 64-bit from the downloads page, and the reloc.sh file needed to install as non-root is missing.

 Comments   
Comment by Chris Hillery [ 10/Dec/14 ]
Steve - it looks like you own the couchbase-reloc script (at least you were the last committer for couchbaselabs/couchbase-reloc). Can you comment on whether it should work for 3.0.x, and whether it was ever part of our official deliverables?




[MB-12341] couchbase-cli needs to be able to set cluster name Created: 13/Oct/14  Updated: 15/Dec/14

Status: Open
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0.1, 3.0, 3.0.2
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Perry Krug Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency
blocks DOC-106 Docs updated with CLI/REST API instru... Open
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
With the new ability to set a cluster name, we need that ability extended to couchbase-cli and then documented as such (a REST-based workaround is sketched below).
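
Until couchbase-cli has this option, the name can be set through the REST call the UI uses; a Go sketch (the endpoint and parameter name are believed correct for 3.0.x but should be verified):

package cli

import (
    "fmt"
    "net/http"
    "net/url"
    "strings"
)

// setClusterName posts the clusterName setting to ns_server, the same
// call the admin UI makes.
func setClusterName(host, user, pass, name string) error {
    form := url.Values{"clusterName": {name}}
    req, err := http.NewRequest("POST",
        "http://"+host+":8091/pools/default",
        strings.NewReader(form.Encode()))
    if err != nil {
        return err
    }
    req.SetBasicAuth(user, pass)
    req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("unexpected status: %s", resp.Status)
    }
    return nil
}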




[MB-12929] Using Gometa as a local service Created: 15/Dec/14  Updated: 15/Dec/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Yu Sui Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint5_xdcr
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h


 Description   
goxdcr is currently using gometa service to store the following metadata:
1. remote cluster reference
2. default replication settings
3. replication specifications
4. checkpoint files

Gometa may not be available as a distributed service in sherlock, and may be available as a local service instead. goxdcr needs to make the following changes to accommodate this change:
1. Change metadata service to invoke local gometa APIs instead. ETA 1 day
2. For REST APIs that could potentially change metadata, forward the REST requests to the other nodes. Always apply changes to local metadata when such requests are received (see the forwarding sketch after this list). ETA 2 days
(1) CreateRemoteCluster
(2) ChangeRemoteCluster
(3) DeleteRemoteCluster

(4) ChangeInternalSettings
(5) ChangeDefaultSettings

(6) CreateReplication
(7) ChangeReplicationSettings
(8) DeleteReplication
3. For checkpoint files, create a new REST API for CreateCheckpoint. Create REST requests and forward them to the other nodes when checkpoint files are created locally. ETA 2 days

Overall ETA 5 days
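
A minimal Go sketch of the forwarding behaviour in step 2 (peer discovery, auth, and retries omitted; the names are illustrative, not the goxdcr code):

package metadata

import (
    "bytes"
    "io/ioutil"
    "net/http"
)

const forwardedHeader = "X-Forwarded-Internally" // marker to avoid re-forwarding loops

func handleMetadataChange(w http.ResponseWriter, r *http.Request, peers []string) {
    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    // Always apply the change to the local gometa instance.
    if err := applyLocally(r.URL.Path, body); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // Forward only requests that arrived from a client, not ones that
    // were already forwarded by a peer.
    if r.Header.Get(forwardedHeader) == "" {
        for _, peer := range peers {
            req, _ := http.NewRequest(r.Method, "http://"+peer+r.URL.Path,
                bytes.NewReader(body))
            req.Header.Set(forwardedHeader, "1")
            if resp, err := http.DefaultClient.Do(req); err == nil {
                resp.Body.Close() // a real implementation would retry failures
            }
        }
    }
    w.WriteHeader(http.StatusOK)
}

// applyLocally invokes the local gometa API; stubbed here.
func applyLocally(path string, body []byte) error { return nil }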




[MB-12932] Lookup API failing with "ScanRequest.Span.Range" not set Created: 16/Dec/14  Updated: 16/Dec/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Prathibha Bisarahalli Assignee: Pratap Chakravarthy
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
A basic Lookup API test fails with:

In TestBasicLookup()
Checking for presence of index index_company
2014/12/16 10:47:33 Error from Lookup API: proto: required field "ScanRequest.Span.Range" not set
--- FAIL: TestBasicLookup (1.00 seconds)
panic: Error from Lookup API: proto: required field "ScanRequest.Span.Range" not set

The Lookup params provided were:
secondaryindex.Lookup("index_company", "default", []interface{} {"BIOSPAN"}, true, 10000000)
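
For context, proto2 refuses to marshal a message whose required field is unset, even if the scan does not use it; a sketch with hypothetical stand-ins for the 2i protobuf types:

package example

// Hypothetical stand-ins for the generated 2i protobuf messages.
type Range struct {
    Low, High []byte
    Inclusion uint32
}

type Span struct {
    Range *Range   // marked required in the .proto, so it must be set
    Equal [][]byte // what an equality Lookup actually uses
}

// newLookupSpan builds a span for an equality lookup while still
// populating the required Range field; leaving it nil makes proto
// marshalling fail with exactly the error above.
func newLookupSpan(key []byte) *Span {
    return &Span{
        Range: &Range{},
        Equal: [][]byte{key},
    }
}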







[MB-12933] alias and attribute name cannot be the same Created: 16/Dec/14  Updated: 16/Dec/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4, sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Yes

 Description   
SELECT email as name, name FROM default
I expect this to fail because the alias and the attribute name are the same.

Earlier we got the error 'alias name is defined more than once'.





[MB-12923] forestdb - unreachable codepaths Created: 15/Dec/14  Updated: 16/Dec/14

Status: Open
Project: Couchbase Server
Component/s: forestdb
Affects Version/s: .master
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Tommie McAfee Assignee: Chiyoung Seo
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: No

 Description   
The following methods are either unreachable due to blocking logic, or are not called at all:

_bcache_free_bcache_item
https://github.com/couchbaselabs/forestdb/blob/master/src/blockcache.cc#L1138
_bcache_free_fnamedic
https://github.com/couchbaselabs/forestdb/blob/master/src/blockcache.cc#L1146

     * fnamedic is being cleaned up within filemgr_shutdown() before bcache_shutdown() is called, which makes the following functions unreachable:


_get_fast_str_kv_size
https://github.com/couchbaselabs/forestdb/blob/master/src/btree_fast_str_kv.cc#L323
    * not being used for this op type


from btreeblock.cc: is_subblock
https://github.com/couchbaselabs/forestdb/blob/master/src/btreeblock.cc#L116:

    * there are 2 is_subblock methods; I suspect the alternative one is actually being called, as the one in this file has no coverage

filemgr_remove_file


These methods are declared only and have no possible codepaths:

btree.cc -> btree_operation_end
btree_str_kv.cc -> btree_str_kv_is_inf_key
btree_str_kv.cc -> btree_str_kv_set_inf_key
filemgr.cc -> filemgr_get_filename_ptr
hash.cc -> hash_uint_modular
hash.cc -> hash_shuffle_2uint (in two_hash_test but commented out)
list.cc -> list_insert_after (and remove list_test.cc which has no make target)




 Comments   
Comment by Tommie McAfee [ 16/Dec/14 ]
This branch in forestdb.cc is always true:
if ( item->action == WAL_ACT_INSERT ||
        item->action == WAL_ACT_LOGICAL_REMOVE )

https://github.com/couchbaselabs/forestdb/blob/master/src/forestdb.cc#L1542

It doesn't look like item->action can be WAL_ACT_REMOVE here.




[MB-12952] Creation of 2i of after failed index create & drop causes indexer crash Created: 17/Dec/14  Updated: 17/Dec/14

Status: Open
Project: Couchbase Server
Component/s: secondary-index
Affects Version/s: sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Prathibha Bisarahalli Assignee: Deepkaran Salooja
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Environment: AWS EC2 instance, single-node setup, default host name 127.0.0.1.

Steps:
1. Create an index: index creation gets stuck (due to another issue)
2. Drop the above attempted index with its instance id.
3. Create another index

Indexer fails with: panic: runtime error: assignment to entry in nil map

Full log: https://drive.google.com/file/d/0B-2n-TmVRNyDVFVNOW9hS1ByRUE/view?usp=sharing
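
For context, this Go panic occurs whenever code assigns into a map that was declared but never allocated; a standalone reproduction (unrelated to the indexer code itself):

package main

func main() {
    var buildTs map[string]uint64 // nil: declared but never allocated

    // buildTs["vb0"] = 1 // would panic: assignment to entry in nil map

    buildTs = make(map[string]uint64) // the fix: allocate before assigning
    buildTs["vb0"] = 1
}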

Stack trace below -

indexer<0.94.0>: 05:33:02.151634 [INFO ] Timekeeper::handleAddIndextoStream
indexer<0.94.0>: Message: MsgStreamUpdate
indexer<0.94.0>: Type: ADD_INDEX_LIST_TO_STREAM
indexer<0.94.0>: Stream: MAINT_STREAM
indexer<0.94.0>: BuildTS: []
indexer<0.94.0>: IndexList: [
indexer<0.94.0>: InstId: 5178409459866416439
indexer<0.94.0>: Defn: DefnId: 5178409459866416439 Name: idx12 Using: ForestDB Bucket: beer-sample IsPrimary: false
indexer<0.94.0>: SecExprs: [`type`]
indexer<0.94.0>: PartitionScheme: SINGLE PartitionKey:
indexer<0.94.0>: State: INDEX_STATE_CATCHUP
indexer<0.94.0>: Stream: INIT_STREAM
indexer<0.94.0>: PartitionContainer: &{map[0:{0 [:9104]}] 1}]
indexer<0.94.0>: RestartTs: map[]
indexer<0.94.0>: panic: runtime error: assignment to entry in nil map
indexer<0.94.0>:
indexer<0.94.0>: goroutine 28 [running]:
indexer<0.94.0>: runtime.panic(0xc121c0, 0x141bb73)
indexer<0.94.0>: /usr/local/go/src/pkg/runtime/panic.c:279 +0xf5
indexer<0.94.0>: github.com/couchbase/indexing/secondary/indexer.(*timekeeper).addIndextoStream(0xc20800e1b0, 0x7faacbce30c8, 0xc208764000)
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:325 +0x2c8
indexer<0.94.0>: github.com/couchbase/indexing/secondary/indexer.(*timekeeper).handleAddIndextoStream(0xc20800e1b0, 0x7faacbce30c8, 0xc208764000)
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:311 +0xd8
indexer<0.94.0>: github.com/couchbase/indexing/secondary/indexer.(*timekeeper).handleSupervisorCommands(0xc20800e1b0, 0x7faacbce30c8, 0xc208764000)
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:150 +0x26b
indexer<0.94.0>: github.com/couchbase/indexing/secondary/indexer.(*timekeeper).run(0xc20800e1b0)
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:129 +0x16d
indexer<0.94.0>: created by github.com/couchbase/indexing/secondary/indexer.NewTimekeeper
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:107 +0x284
indexer<0.94.0>:
indexer<0.94.0>: goroutine 16 [chan receive]:
indexer<0.94.0>: github.com/couchbase/indexing/secondary/indexer.(*indexer).sendStreamUpdateToWorker(0xc20803e500, 0x7faacbce30c8, 0xc208764000, 0xc208004ae0, 0xd12870, 0xa, 0xc208811080, 0x1)
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:808 +0xaf
indexer<0.94.0>: github.com/couchbase/indexing/secondary/indexer.(*indexer).sendStreamUpdateForCreateIndex(0xc20803e500, 0x47dd67f7a4dc3537, 0x47dd67f7a4dc3537, 0xc208c499b0, 0x5, 0xd09230, 0x8, 0xc208c499b5, 0xb, 0x0, ...)
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:781 +0x563
indexer<0.94.0>: github.com/couchbase/indexing/secondary/indexer.(*indexer).handleCreateIndex(0xc20803e500, 0x7faacbce8078, 0xc2092fcfc0)
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:533 +0xb51
indexer<0.94.0>: github.com/couchbase/indexing/secondary/indexer.(*indexer).handleAdminMsgs(0xc20803e500, 0x7faacbce8078, 0xc2092fcfc0)
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:406 +0x312
indexer<0.94.0>: github.com/couchbase/indexing/secondary/indexer.(*indexer).run(0xc20803e500)
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:245 +0x15e
indexer<0.94.0>: github.com/couchbase/indexing/secondary/indexer.NewIndexer(0xc2080ca810, 0x0, 0x0, 0x0, 0x0)
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:200 +0x117b
indexer<0.94.0>: main.main()
indexer<0.94.0>: /home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/main/main.go:55 +0xc3e
indexer<0.94.0>:
indexer<0.94.0>: goroutine 19 [finalizer wait]:
indexer<0.94.0>: runtime.park(0x416720, 0x143fe60, 0x1420269)
indexer<0.94.0>: /usr/local/go/src/pkg/runtime/proc.c:1369 +0x89
indexer<0.94.0>: runtime.parkunlock(0x143fe60, 0x1420269)

[ns_server:warn,2014-12-17T5:33:02.191,babysitter_of_ns_1@127.0.0.1:<0.94.0>:ns_port_server:log:186]Dropped 381 log lines from indexer
[ns_server:info,2014-12-17T5:33:02.193,babysitter_of_ns_1@127.0.0.1:<0.93.0>:supervisor_cushion:handle_info:58]Cushion managed supervisor for indexer failed: {abnormal,2}
[ns_server:debug,2014-12-17T5:33:02.193,babysitter_of_ns_1@127.0.0.1:<0.98.0>:supervisor_cushion:init:39]starting ns_port_server with delay of 5000
[error_logger:error,2014-12-17T5:33:02.196,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.94.0> terminating
** Last message in was {#Port<0.3454>,{exit_status,2}}
** When Server state == {state,#Port<0.3454>,indexer,
                               {["\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/dataport/server.go:314 +0x1b5",
                                 "created by github.com/couchbase/indexing/secondary/dataport.(*Server).startWorker",
                                 "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/dataport/server.go:476 +0x4a2",
                                 "github.com/couchbase/indexing/secondary/dataport.doReceive(0xc208c50aa0, 0x18, 0xc208d0e100, 0xfa000, 0xfa0, 0xc208764a20, 0xc208620d80)"],
                                ["\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/transport/transport.go:142 +0xe3"]},
                               {ok,{1418812382349,#Ref<0.0.0.1008>}},
                               ["runtime.parkunlock(0x143fe60, 0x1420269)",
                                "\t/usr/local/go/src/pkg/runtime/proc.c:1369 +0x89",
                                "runtime.park(0x416720, 0x143fe60, 0x1420269)",
                                "goroutine 19 [finalizer wait]:",[],
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/main/main.go:55 +0xc3e",
                                "main.main()",
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:200 +0x117b",
                                "github.com/couchbase/indexing/secondary/indexer.NewIndexer(0xc2080ca810, 0x0, 0x0, 0x0, 0x0)",
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:245 +0x15e",
                                "github.com/couchbase/indexing/secondary/indexer.(*indexer).run(0xc20803e500)",
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:406 +0x312",
                                "github.com/couchbase/indexing/secondary/indexer.(*indexer).handleAdminMsgs(0xc20803e500, 0x7faacbce8078, 0xc2092fcfc0)",
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:533 +0xb51",
                                "github.com/couchbase/indexing/secondary/indexer.(*indexer).handleCreateIndex(0xc20803e500, 0x7faacbce8078, 0xc2092fcfc0)",
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:781 +0x563",
                                "github.com/couchbase/indexing/secondary/indexer.(*indexer).sendStreamUpdateForCreateIndex(0xc20803e500, 0x47dd67f7a4dc3537, 0x47dd67f7a4dc3537, 0xc208c499b0, 0x5, 0xd09230, 0x8, 0xc208c499b5, 0xb, 0x0, ...)",
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/indexer.go:808 +0xaf",
                                "github.com/couchbase/indexing/secondary/indexer.(*indexer).sendStreamUpdateToWorker(0xc20803e500, 0x7faacbce30c8, 0xc208764000, 0xc208004ae0, 0xd12870, 0xa, 0xc208811080, 0x1)",
                                "goroutine 16 [chan receive]:",[],
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:107 +0x284",
                                "created by github.com/couchbase/indexing/secondary/indexer.NewTimekeeper",
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:129 +0x16d",
                                "github.com/couchbase/indexing/secondary/indexer.(*timekeeper).run(0xc20800e1b0)",
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:150 +0x26b",
                                "github.com/couchbase/indexing/secondary/indexer.(*timekeeper).handleSupervisorCommands(0xc20800e1b0, 0x7faacbce30c8, 0xc208764000)",
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:311 +0xd8",
                                "github.com/couchbase/indexing/secondary/indexer.(*timekeeper).handleAddIndextoStream(0xc20800e1b0, 0x7faacbce30c8, 0xc208764000)",
                                "\t/home/buildbot/jenkins/workspace/sherlock-testing/goproj/src/github.com/couchbase/indexing/secondary/indexer/timekeeper.go:325 +0x2c8",
                                "github.com/couchbase/indexing/secondary/indexer.(*timekeeper).addIndextoStream(0xc20800e1b0, 0x7faacbce30c8, 0xc208764000)",
                                "\t/usr/local/go/src/pkg/runtime/panic.c:279 +0xf5",
                                "runtime.panic(0xc121c0, 0x141bb73)",
                                "goroutine 28 [running]:",[],
                                "panic: runtime error: assignment to entry in nil map",
                                "\tRestartTs: map[]",
                                "\tPartitionContainer: &{map[0:{0 [:9104]}] 1}]",
                                "\tStream: INIT_STREAM",
                                "\tState: INDEX_STATE_CATCHUP",
                                "\t\tPartitionScheme: SINGLE PartitionKey: ",
                                "\t\tSecExprs: [`type`] ",
                                "\tDefn: DefnId: 5178409459866416439 Name: idx12 Using: ForestDB Bucket: beer-sample IsPrimary: false ",
                                "\tInstId: 5178409459866416439",
                                "\tIndexList: [","\tBuildTS: []",
                                "\tStream: MAINT_STREAM",
                                "\tType: ADD_INDEX_LIST_TO_STREAM",
                                "\tMessage: MsgStreamUpdate",
                                "05:33:02.151634 [INFO ] Timekeeper::handleAddIndextoStream ",




[MB-12957] unnests are not using index Created: 17/Dec/14  Updated: 17/Dec/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
cbq> CREATE INDEX join_index0 ON default(tasks_ids);
{
    "request_id": "3efb1dcd-7b23-45d4-b694-d8c79815429e",
    "signature": null,
    "results": [
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "1.985781s",
        "executionTime": "1.985432s",
        "resultCount": 0,
        "resultSize": 0
    }
}

cbq> EXPLAIN SELECT emp.name, task FROM default emp UNNEST emp.tasks_ids task;
{
    "request_id": "69cd9e15-d624-4839-8da9-c1cc0ca6cfea",
    "signature": "json",
    "results": [
        {
            "#operator": "Sequence",
            "~children": [
                {
                    "#operator": "PrimaryScan",
                    "index": "#primary",
                    "keyspace": "default",
                    "namespace": "default"
                },
                {
                    "#operator": "Parallel",
                    "~child": {
                        "#operator": "Sequence",
                        "~children": [
                            {
                                "#operator": "Fetch",
                                "as": "emp",
                                "keyspace": "default",
                                "namespace": "default"
                            },
                            {
                                "#operator": "Unnest",
                                "as": "task",
                                "expr": "(`emp`.`tasks_ids`)"
                            },
                            {
                                "#operator": "InitialProject",
                                "result_terms": [
                                    {
                                        "expr": "(`emp`.`name`)"
                                    },
                                    {
                                        "expr": "`task`"
                                    }
                                ]
                            },
                            {
                                "#operator": "FinalProject"
                            }
                        ]
                    }
                }
            ]
        }
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "2.75ms",
        "executionTime": "2.3ms",
        "resultCount": 1,
        "resultSize": 1647
    }
}

cbq>





[MB-12958] subqueries are not using index Created: 17/Dec/14  Updated: 17/Dec/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4, sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
cbq> CREATE INDEX join_index4 ON default(join_day) ;
{
    "request_id": "68e06f32-695e-4f03-bc69-6879fb20d4b1",
    "signature": null,
    "results": [
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "7.001841s",
        "executionTime": "7.001374s",
        "resultCount": 0,
        "resultSize": 0
    }
}


cbq> EXPLAIN SELECT task_name, (select sum(test_rate) cn from default d use keys ['query-1'] where join_day>2) as names from default;
{
    "request_id": "6a18431d-62ec-4669-9d81-3de70e243af5",
    "signature": "json",
    "results": [
        {
            "#operator": "Sequence",
            "~children": [
                {
                    "#operator": "PrimaryScan",
                    "index": "#primary",
                    "keyspace": "default",
                    "namespace": "default"
                },
                {
                    "#operator": "Parallel",
                    "~child": {
                        "#operator": "Sequence",
                        "~children": [
                            {
                                "#operator": "Fetch",
                                "keyspace": "default",
                                "namespace": "default"
                            },
                            {
                                "#operator": "InitialProject",
                                "result_terms": [
                                    {
                                        "expr": "(`default`.`task_name`)"
                                    },
                                    {
                                        "as": "names",
                                        "expr": "(select 1)"
                                    }
                                ]
                            },
                            {
                                "#operator": "FinalProject"
                            }
                        ]
                    }
                }
            ]
        }
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "3.528ms",
        "executionTime": "3.089ms",
        "resultCount": 1,
        "resultSize": 1446
    }
}

cbq>





[MB-12955] should the index be used for child attributes of object? Created: 17/Dec/14  Updated: 17/Dec/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
I have a doc like
{
  "VMs": [
    {
      "RAM": 12,
      "os": "ubuntu",
      "name": "vm_12",
      "memory": 12
    },
    {
      "RAM": 12,
      "os": "windows",
      "name": "vm_13",
      "memory": 12
    }
  ]
}

If I build an index on VMs, should this index be used for a query like select * from default where VMs[0].RAM > 5?




[MB-12956] can i create indexes for meta? Created: 17/Dec/14  Updated: 17/Dec/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
For example, an index like CREATE INDEX my_index_meta ON default(meta(default)), or one on ids only.
Should this be possible? In Couchbase views we can create indexes for this.




[MB-12937] count() is not using index Created: 16/Dec/14  Updated: 16/Dec/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4, sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Yes

 Description   
The query is:
select count(VMs) FROM default
An index on VMs has been created:
cbq> create index a1 on default(VMs);
{
    "request_id": "031c4c4a-e874-485b-ab3c-ee3453f84d11",
    "signature": null,
    "results": [
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "7.052043s",
        "executionTime": "7.051605s",
        "resultCount": 0,
        "resultSize": 0
    }
}

cbq> select * from system:indexes;
{
    "request_id": "6c3159fe-5dfc-4a58-b65c-720efa71209b",
    "signature": {
        "*": "*"
    },
    "results": [
        {
            "indexes": {
                "id": "a1",
                "index_key": [
                    "`VMs`"
                ],
                "index_type": "view",
                "keyspace_id": "default",
                "name": "a1",
                "namespace_id": "default",
                "store_id": "http://172.27.33.45:8091"
            }
        },


cbq> EXPLAIN SELECT count(VMs) FROM default;
{
    "request_id": "5909a400-7b1a-4b54-a058-a90ef8b86de9",
    "signature": "json",
    "results": [
        {
            "#operator": "Sequence",
            "~children": [
                {
                    "#operator": "PrimaryScan",
                    "index": "#primary",
                    "keyspace": "default",
                    "namespace": "default"
                },
                {
                    "#operator": "Parallel",
                    "~child": {
                        "#operator": "Sequence",
                        "~children": [
                            {
                                "#operator": "Fetch",
                                "keyspace": "default",
                                "namespace": "default"
                            },
                            {
                                "#operator": "InitialGroup",
                                "aggregates": [
                                    "count((`default`.`VMs`))"
                                ],
                                "group_keys": []
                            },
                            {
                                "#operator": "IntermediateGroup",
                                "aggregates": [
                                    "count((`default`.`VMs`))"
                                ],
                                "group_keys": []
                            }
                        ]
                    }
                },
                {
                    "#operator": "IntermediateGroup",
                    "aggregates": [
                        "count((`default`.`VMs`))"
                    ],
                    "group_keys": []
                },
                {
                    "#operator": "FinalGroup",
                    "aggregates": [
                        "count((`default`.`VMs`))"
                    ],
                    "group_keys": []
                },
                {
                    "#operator": "Parallel",
                    "~child": {
                        "#operator": "Sequence",
                        "~children": [
                            {
                                "#operator": "InitialProject",
                                "result_terms": [
                                    {
                                        "expr": "count((`default`.`VMs`))"
                                    }
                                ]
                            },
                            {
                                "#operator": "FinalProject"
                            }
                        ]
                    }
                }
            ]
        }
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "2.667ms",
        "executionTime": "2.351ms",
        "resultCount": 1,
        "resultSize": 2611
    }
}

cbq>





[MB-12939] index is not used for arrays Created: 16/Dec/14  Updated: 16/Dec/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4, sherlock
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   

tasks_points is an array

cbq> CREATE INDEX a2 ON default(tasks_points);
{
    "request_id": "9517fc8d-9fcb-4590-8db2-ed36c482879c",
    "signature": null,
    "results": [
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "7.116211s",
        "executionTime": "7.1161s",
        "resultCount": 0,
        "resultSize": 0
    }
}


cbq> EXPLAIN SELECT tasks_points.task1 AS task from default WHERE task_points > [];
{
    "request_id": "9ff1755e-8050-4c9b-832e-f90ecece4727",
    "signature": "json",
    "results": [
        {
            "#operator": "Sequence",
            "~children": [
                {
                    "#operator": "PrimaryScan",
                    "index": "#primary",
                    "keyspace": "default",
                    "namespace": "default"
                },
                {
                    "#operator": "Parallel",
                    "~child": {
                        "#operator": "Sequence",
                        "~children": [
                            {
                                "#operator": "Fetch",
                                "keyspace": "default",
                                "namespace": "default"
                            },
                            {
                                "#operator": "Filter",
                                "condition": "(0 \u003c (`default`.`task_points`))"
                            },
                            {
                                "#operator": "InitialProject",
                                "result_terms": [