[MB-12049] Doc get_misses is wrong Created: 22/Aug/14  Updated: 22/Aug/14

Status: Open
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Patrick Varley Assignee: Ruth Harris
Resolution: Unresolved Votes: 0
Labels: supportability
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: http://docs.couchbase.com/couchbase-manual-2.2/#ongoing-monitoring-and-maintenance

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
A get_misses means Couchbase does not have the doc at all.

Our documentation suggests that it means the doc is not in memory:

"""
Cache Misses ( get_misses )

Ideally this should be low, and certainly lower than get_hits. Increasing or high values mean that data that your application expects to be stored is not in memory.
"""

Instead of get_misses, maybe this section should reference ep_bg_fetched?

It is worth explaining what get_misses means, as the name is misleading.
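For reference (assuming the default ports and bucket), the two stats can be compared with cbstats:

/opt/couchbase/bin/cbstats localhost:11210 all -b default | grep -E 'get_misses|ep_bg_fetched'

get_misses counts requests for keys that do not exist in the bucket at all, whereas ep_bg_fetched counts reads that had to be fetched from disk because the value was not resident in memory.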




[MB-12048] View engine 2.5 to 3.0 index file upgrade script Created: 22/Aug/14  Updated: 22/Aug/14

Status: Open
Project: Couchbase Server
Component/s: view-engine
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Sarath Lakshman Assignee: Volker Mische
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Gantt: start-finish
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
View engine 2.5 index files are not compatible with the 3.0 format, so 3.0 currently requires an index rebuild.
We need a script that renames index files to the new compatible filenames (with signature) and appends the new header.
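A minimal sketch of what such a script might do (the filename scheme, signature, and header bytes below are hypothetical placeholders, not the real 3.0 on-disk format):

{code}
# Hypothetical upgrade sketch -- naming scheme and header are placeholders.
import os, shutil

def upgrade_index_file(path, signature, new_header):
    # Rename the 2.5 index file to a 3.0-style, signature-based filename...
    new_path = os.path.join(os.path.dirname(path), "main_%s.view.1" % signature)
    shutil.move(path, new_path)
    # ...and append the header expected by the 3.0 view engine.
    with open(new_path, "ab") as f:
        f.write(new_header)
{code}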




[MB-12046] CBTransfer showing missing items when items are present after topology change with graceful failover + full recovery with nodes crashing Created: 21/Aug/14  Updated: 21/Aug/14

Status: Open
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Parag Agarwal Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-1184

Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.144-8212014-1657-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.144-8212014-176-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.145-8212014-1658-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.145-8212014-176-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.146-8212014-1659-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.146-8212014-176-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.147-8212014-171-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.147-8212014-176-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.148-8212014-172-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.148-8212014-177-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.149-8212014-173-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.149-8212014-177-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.150-8212014-174-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12046/10.6.2.150-8212014-177-couch.tar.gz
Is this a Regression?: Unknown

 Description   
Scenario

1. Create a 7-node cluster
2. Create the default bucket and add 100K items
3. Gracefully fail over 1 node
4. During the graceful failover, kill memcached on 3 other nodes; this fails the graceful failover
5. Restart the graceful failover and let it run to completion
6. Fully recover the failed-over node and rebalance
7. During the rebalance, kill memcached on 3 other nodes; this fails the rebalance
8. Restart the rebalance and run it to completion

After step 8, we collect data using cbtransfer and compare it to the data loaded in step 2. We see missing keys.

Note that there are no mutations running from step 3 to step 8. We always read from the couch store after the queues have been drained and replication is complete. Also, before running cbtransfer, we verified both the item counts and the data items themselves.

This seems like a bug in cbtransfer.
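As a sanity check of the tool itself, here is a minimal verification sketch (the csv sink usage, paths, and the "failover<N>" key pattern are assumptions based on this test, not the actual test code):

{code}
# Minimal verification sketch. Dump the bucket with cbtransfer's csv sink, e.g.:
#   /opt/couchbase/bin/cbtransfer http://10.6.2.144:8091 csv:/tmp/dump.csv -b default
# then diff the dumped keys against the keys loaded in step 2.
import csv

def dumped_keys(path):
    with open(path) as f:
        rows = csv.reader(f)
        next(rows, None)                         # skip header row, if present
        return {row[0] for row in rows if row}   # first column: document id

expected = {"failover%d" % i for i in range(100000)}   # assumed key pattern
missing = sorted(expected - dumped_keys("/tmp/dump.csv"))
print("%d missing keys" % len(missing))
{code}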

Missing keys

failover97727
 failover96541
 failover19942
 failover72566
 failover98994
 failover21107
 failover17597
 failover47535
 failover58469
 failover47247
 failover79250
 failover95182
 failover48606
 failover885
 failover98366
 failover72214
 failover24016
 failover74124
 failover51288
 failover41177
 failover47925
 failover19220
 failover6008
 failover40281
 failover94916
 failover20361
 failover29410
 failover29800
 failover61528
 failover90103
 failover73072
 failover17817
 failover46753
 failover27955
 failover91997
 failover25502
 failover99672
 failover32149
 failover19552
 failover34279
 failover26723
 failover16113
 failover79522
 failover96951
 failover11737
 failover15332
 failover70253
 failover78036
 failover20413
 failover45200
 failover13192
 failover14154
 failover31368
 failover88099
 failover44684
 failover49460
 failover25882
 failover62699
 failover12486
 failover81678
 failover23632
 failover15850
 failover27237
 failover505
 failover11045
 failover49312
 failover94496
 failover95760
 failover24186
 failover10941
 failover84769
 failover72976
 failover77295
 failover20993
 failover15440
 failover12516
 failover277
 failover38589
 failover92636
 failover22844
 failover72384
 failover73700
 failover95012
 failover82459
 failover22326
 failover87548
 failover87958
 failover98414
 failover78744
 failover23140
 failover27545
 failover10223
 failover61938
 failover3119
 failover14626
 failover79932
 failover74656
 failover12896
 failover41605
 failover93322
 failover42424
 failover92144
 failover99090
 failover94274
 failover91365
 failover77867
 failover44066
 failover24764
 failover40311
 failover38809
 failover20803
 failover68259
 failover54209
 failover76163
 failover70931
 failover6198
 failover44714
 failover18734
 failover51318
 failover43642
 failover98584
 failover49870
 failover43130
 failover82849
 failover41795
 failover28104
 failover13002
 failover10551
 failover76611
 failover91807
 failover17407
 failover22454
 failover91587
 failover67618
 failover29580
 failover16661
 failover915
 failover90671
 failover495
 failover12906
 failover9449
 failover42356
 failover97055
 failover98804
 failover77477
 failover29990
 failover40463
 failover42834
 failover45390
 failover29362
 failover9859
 failover57028
 failover38419
 failover77305
 failover17987
 failover71035
 failover76781
 failover45962
 failover71747
 failover75820
 failover18046
 failover91417
 failover20583
 failover12264
 failover25492
 failover64439
 failover94886
 failover70521
 failover28676
 failover48796
 failover45572
 failover16083
 failover25912
 failover49282
 failover64829
 failover96233
 failover26051
 failover38999
 failover99100
 failover3089
 failover48174
 failover5229
 failover21097
 failover93450
 failover37058
 failover25270
 failover46021
 failover93840
 failover62709
 failover28094
 failover40873
 failover13770
 failover58879
 failover90093
 failover88109
 failover73690
 failover67788
 failover17375
 failover94506
 failover75342
 failover54399
 failover52139
 failover75430
 failover21675a




[MB-12044] Add an on/off option in the Couchbase UI for pre-loading a random document while editing a view Created: 21/Aug/14  Updated: 21/Aug/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Larry Liu Assignee: Anil Kumar
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency

 Description   
When I see a random document displayed initially at the top of the page used for editing a view, I would think it might be a sampled document FROM THAT VIEW. But it’s not. It’s random and may have nothing to do with that view. Can that “initial load” of a random document be turned off if you don’t want to see it?

To be clear, I DO want that window to be visible after I click Show Results for the view and click on one of the resulting document links. Those documents are returned from the view, so it makes sense there, but it’s the immediate load of an unrelated document that doesn’t make sense.




[MB-12043] cbq crash after trying to delete a key Created: 21/Aug/14  Updated: 21/Aug/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
cbq> delete from my_bucket KEYS ['query-testa7480c4-0'];
PANIC: Expected plan.Operator instead of <nil>..




[MB-12042] Memcached crashes with Segmentation fault during bucket deletion Created: 21/Aug/14  Updated: 22/Aug/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Meenakshi Goel Assignee: Sundar Sridharan
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-1174-rel, Debian 7.0

Attachments: Text File crash_log.txt     Text File test_log.txt    
Triage: Triaged
Is this a Regression?: Yes

 Description   
Jenkins Ref Link:
http://qa.hq.northscale.net/job/debian_x64--90_04--observe_tests-P0/1/consoleFull

Test Failed:
./testrunner -i yourfile.ini get-cbcollect-info=True,get-logs=False,stop-on-failure=False,GROUP=ALL -t observe.observetest.ObserveTests.test_observe_basic_data_load_delete,items=100,sasl_buckets=1,standard_buckets=1,rebalance=out,GROUP=P1

1. Note that the failed test does not deterministically reproduce the crash.
2. The same test and some other tests also failed on CentOS and Ubuntu, but no core dumps were found.
http://qa.hq.northscale.net/job/centos_x64--44_04--observe-P0/125/console

[user:info,2014-08-21T5:17:57.340,ns_1@10.3.5.154:<0.750.0>:ns_orchestrator:handle_info:483]Rebalance exited with reason {unexpected_exit,
                              {'EXIT',<0.32255.122>,
                               {{{badmatch,{error,closed}},
                                 {gen_server,call,
                                  ['ns_memcached-standard_bucket0',
                                   {set_vbucket,610,replica},
                                   180000]}},
                                {gen_server,call,
                                 [{'janitor_agent-standard_bucket0',
                                   'ns_1@10.3.5.154'},
                                  {if_rebalance,<0.14541.122>,
                                   {dcp_takeover,'ns_1@10.3.5.155',634}},
                                  infinity]}}}}

[error_logger:error,2014-08-21T5:17:57.340,ns_1@10.3.5.154:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:
    initial call: ns_single_vbucket_mover:-set_initial_vbucket_state/6-fun-0-/0
    pid: <0.32266.122>
    registered_name: []
    exception error: {bulk_set_vbucket_state_failed,
                      [{'ns_1@10.3.5.154',
                        {'EXIT',
                         {{{badmatch,{error,closed}},
                           {gen_server,call,
                            ['ns_memcached-standard_bucket0',
                             {set_vbucket,610,replica},
                             180000]}},
                          {gen_server,call,
                           [{'janitor_agent-standard_bucket0',
                             'ns_1@10.3.5.154'},
                            {if_rebalance,<0.14541.122>,
                             {update_vbucket_state,610,replica,passive,
                              'ns_1@10.3.5.155'}},
                            infinity]}}}}]}
      in function janitor_agent:bulk_set_vbucket_state/4 (src/janitor_agent.erl, line 411)
    ancestors: [<0.32268.122>,<0.14541.122>,<0.14465.122>,<0.750.0>,
                  mb_master_sup,mb_master,ns_server_sup,ns_server_cluster_sup,
                  <0.60.0>]
    messages: []
    links: [<0.32268.122>]
    dictionary: []
    trap_exit: false
    status: running
    heap_size: 2586
    stack_size: 27
    reductions: 3407
  neighbours:
[ns_server:info,2014-08-21T5:17:57.344,ns_1@10.3.5.154:<0.32684.122>:diag_handler:log_all_tap_and_checkpoint_stats:125]logging tap & checkpoint stats
[error_logger:error,2014-08-21T5:17:57.342,ns_1@10.3.5.154:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:
    initial call: ns_memcached:init/1
    pid: <0.18706.119>
    registered_name: []
    exception exit: {badmatch,{error,closed}}
      in function gen_server:init_it/6 (gen_server.erl, line 328)
    ancestors: ['single_bucket_sup-default',<0.18679.119>]
    messages: []
    links: [<0.18720.119>,<0.18726.119>,<0.18744.119>,<0.279.0>,
                  <0.18696.119>]
    dictionary: []
    trap_exit: true
    status: running
    heap_size: 75113
    stack_size: 27
    reductions: 632988
  neighbours:
    neighbour: [{pid,<0.18744.119>},
                  {registered_name,[]},
                  {initial_call,{erlang,apply,['Argument__1','Argument__2']}},
                  {current_function,{gen,do_call,4}},
                  {ancestors,['ns_memcached-default',
                              'single_bucket_sup-default',<0.18679.119>]},
                  {messages,[]},
                  {links,[<0.18706.119>,#Port<0.174708>]},
                  {dictionary,[]},
                  {trap_exit,false},
                  {status,waiting},
                  {heap_size,6772},
                  {stack_size,23},
                  {reductions,525127}]
    neighbour: [{pid,<0.18726.119>},
                  {registered_name,[]},
                  {initial_call,{erlang,apply,['Argument__1','Argument__2']}},
                  {current_function,{gen,do_call,4}},
                  {ancestors,['ns_memcached-default',
                              'single_bucket_sup-default',<0.18679.119>]},
                  {messages,[]},
                  {links,[<0.18706.119>,#Port<0.174706>]},
                  {dictionary,[]},
                  {trap_exit,false},
                  {status,waiting},
                  {heap_size,28690},
                  {stack_size,23},
                  {reductions,3329}]
    neighbour: [{pid,<0.18720.119>},
                  {registered_name,[]},
                  {initial_call,{erlang,apply,['Argument__1','Argument__2']}},
                  {current_function,{gen,do_call,4}},
                  {ancestors,['ns_memcached-default',
                              'single_bucket_sup-default',<0.18679.119>]},
                  {messages,[]},
                  {links,[<0.18706.119>,#Port<0.174713>]},
                  {dictionary,[]},
                  {trap_exit,false},
                  {status,waiting},
                  {heap_size,46422},
                  {stack_size,23},
                  {reductions,2151481}]

Please refer attached crash_log.txt.
Core is available at 10.3.5.155:/tmp/backup_crash/21_08_2014_05_45
Uploading Logs

 Comments   
Comment by Meenakshi Goel [ 21/Aug/14 ]
https://s3.amazonaws.com/bugdb/jira/MB-12042/f9ad56ee/10.3.5.154-8212014-520-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12042/c7d38635/10.3.5.155-8212014-522-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12042/fd4f484f/10.3.2.146-8212014-524-diag.zip
Comment by Anil Kumar [ 21/Aug/14 ]
Venu please look at it and work with Sundar to verify the test case.
Comment by Sundar Sridharan [ 21/Aug/14 ]
Looks like the crash is in the warmup callback in the midst of bucket creation, due to a missing vbucket entry. Can we have the exact steps for manual reproduction? Thanks.
Comment by Venu Uppalapati [ 21/Aug/14 ]
The test case invokes rebalance of nodes. Investigating whether there is a race between bucket creation and rebalance.
Comment by Meenakshi Goel [ 22/Aug/14 ]
Steps to Reproduce:
1. Create 3 buckets: default, sasl_bucket and standard_bucket
2. Rebalance-in 2 nodes
3. Load 100 items of data
4. Wait for the items to get persisted on disk
5. Run rebalance-out and Observe in parallel
6. Create a view and run a query on it with stale=false

While querying standard_bucket, the test fails with a timeout error, even though only 100 items are present and the timeout is 120 seconds.
In tearDown the steps below occur, during which the crash happens after the rebalance failure (note that, as mentioned in the description, the crash doesn't happen every time):
1. Memcached is killed using killall -9 memcached
2. Rebalance is stopped
3. The bucket is deleted
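Roughly, that teardown amounts to the following (assuming default Administrator credentials and ports; the bucket name is illustrative):

killall -9 memcached
curl -u Administrator:password -X POST http://localhost:8091/controller/stopRebalance
curl -u Administrator:password -X DELETE http://localhost:8091/pools/default/buckets/standard_bucket0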

Attached is the test output. test_log.txt




[MB-12041] Disabling access.log on multiple buckets results in node failing to become available Created: 21/Aug/14  Updated: 21/Aug/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 2.5.1
Fix Version/s: 3.0.1
Security Level: Public

Type: Bug Priority: Major
Reporter: Brent Woodruff Assignee: Abhinav Dangeti
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
On review of a customer ticket today during support discussions, this particular issue was brought up. It is unclear from subsequent discussions in that ticket whether this issue was addressed and fixed.

Steps to reproduce:

* Initialize a Couchbase node with more than one bucket

* Disable the access.log on *both* buckets using the following command for each bucket:

wget -O- --user=Administrator --password=password --post-data='ns_bucket:update_bucket_props("bucket1", [{extra_config_string, "alog_path="}]).' http://localhost:8091/diag/eval

wget -O- --user=Administrator --password=password --post-data='ns_bucket:update_bucket_props("bucket2", [{extra_config_string, "alog_path="}]).' http://localhost:8091/diag/eval

where 'bucket1' and 'bucket2' are the bucket names.

* Restart the node and observe the following errors in the logs:

memcached<0.89.0>: WARNING: Found duplicate entry for "alog_path"
memcached<0.89.0>: Unsupported key: <^A>

* Note that the node remains pending and never becomes available

 Comments   
Comment by Abhinav Dangeti [ 21/Aug/14 ]
I don't see the node failing to become available, but I do see the warning messages you're talking about.
I'll look into it and see what's going on.




[MB-12040] cbrestore: backing up on Linux and restoring on OSX does not restore to the specified bucket Created: 21/Aug/14  Updated: 22/Aug/14  Resolved: 21/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: tools
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Minor
Reporter: Facundo Farias Assignee: Bin Cui
Resolution: Won't Fix Votes: 0
Labels: backup, customer
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: OSX

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
I am using the build Version 2.5.1-1083-rel.

We do have daily backups of our cluster in production. I took one of those and, after a whole morning dealing with it, I realized that I was on OSX, so I had to use these instructions: http://docs.couchbase.com/couchbase-manual-2.0/#backup-and-restore-between-mac-os-x-and-other-platforms.

So, basically I did what the example shows:

shell> cbrestore /linuxbackup/today memcached://mac:11211 -b test -B test

And magically, it works. The thing is, the data ends up in the default bucket instead of the one I pointed to with the command line option, so the documents were imported correctly, but into the wrong bucket.

{code}
  [####################] 100.0% (11067/estimated 11067 msgs)
bucket: test, msgs transferred...
       : total | last | per sec
 byte : 4464406 | 4464406 | 2203418.8
2014-08-21 13:18:58,718: mt warning: cannot restore bucket design on a memached destination
done
{code}


 Comments   
Comment by Bin Cui [ 21/Aug/14 ]
Since the default number of vbuckets is different between Linux and OSX, you have to use -x rehash=1 during restore.
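For example, the original command becomes:

shell> cbrestore /linuxbackup/today memcached://mac:11211 -b test -B test -x rehash=1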
Comment by Facundo Farias [ 22/Aug/14 ]
Okay. I will try, but can you also update the documentation about it!?
Thanks.




[MB-12039] Document the maximum length of a bucket name Created: 21/Aug/14  Updated: 21/Aug/14

Status: Open
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 2.2.0, 2.5.0, 2.5.1, 3.0, 3.0-Beta
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Ian McCloy Assignee: Ruth Harris
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
As per MB-5641, since Couchbase Server 2.2 the bucket name has been limited to 100 characters in the UI to prevent problems.

Can you ensure the following documentation pages include this information:
"The bucket name can only contain characters in range A-Z, a-z, 0-9 as well as underscore, period, dash and percent symbols and can only be a maximum of 100 characters in length."

http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/#couchbase-admin-web-console-data-buckets-createedit
http://docs.couchbase.com/prebuilt/couchbase-manual-3.0/UI/ui-data-buckets.html
http://docs.couchbase.com/prebuilt/couchbase-manual-3.0/Misc/limits.html
http://docs.couchbase.com/couchbase-manual-2.5/cb-rest-api/#creating-and-editing-buckets
http://docs.couchbase.com/prebuilt/couchbase-manual-3.0/REST/rest-bucket-create.html
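For reference, a minimal validation sketch matching the constraint quoted above (not the server's actual validator):

{code}
# Matches the documented constraint: A-Z, a-z, 0-9, underscore, period,
# dash, percent; at most 100 characters.
import re

BUCKET_NAME_RE = re.compile(r'^[A-Za-z0-9_.%-]{1,100}$')

def is_valid_bucket_name(name):
    return bool(BUCKET_NAME_RE.match(name))
{code}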






[MB-12038] [windows] F-Secure flagging binary Created: 21/Aug/14  Updated: 21/Aug/14

Status: Open
Project: Couchbase Server
Component/s: installer, ns_server
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Sriram Melkote Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
A popular virus scanner, F-Secure, is deleting a file installed by us: generate_cert.exe.

We need to analyze why this is happening.

See: https://groups.google.com/forum/#!topic/couchbase/E3QvNolCknQ





[MB-12037] ns_server may lose replicas on stopped rebalance/graceful failover (was: {DCP} : Delta Recovery Impossible after re-try of graceful failover since in first attempt failed) Created: 21/Aug/14  Updated: 22/Aug/14  Resolved: 21/Aug/14

Status: Closed
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Parag Agarwal Assignee: Aleksey Kondratenko
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 10.6.2.144-10.6.2.150
centos 6x
1174

Triage: Untriaged
Link to Log File, atop/blg, CBCollectInfo, Core dump: https://s3.amazonaws.com/bugdb/jira/MB-12037/10.6.2.144-8202014-2226-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12037/10.6.2.145-8202014-2227-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12037/10.6.2.146-8202014-2227-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12037/10.6.2.147-8202014-2227-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12037/10.6.2.148-8202014-2227-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12037/10.6.2.149-8202014-2228-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12037/10.6.2.150-8202014-2228-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-12037/1174.log.tar.gz
Is this a Regression?: Unknown

 Description   
Scenario
1. Create a 7-node cluster
2. Create the default bucket with 100K items
3. Gracefully fail over a node
4. Kill memcached on another node during the graceful failover
5. Gracefully fail over the same node as in step 3
6. Add back the node with delta recovery
7. Hit Rebalance

In step 7, rebalance fails for delta recovery, saying delta recovery is not possible, although we see that the nodes in the cluster are in a healthy state.

We see the following warning: "Fail Over Warning: Rebalance required, some data is not currently replicated!"

It seems that delta recovery will not work in this condition unless we rebalance the cluster. Also, I was able to cancel the delta recovery and do a full recovery instead.

Opening the bug to follow up on the issue. Attaching logs and data files.



 Comments   
Comment by Aleksey Kondratenko [ 21/Aug/14 ]
I was able to reproduce it easily. There's indeed something wrong with restarting graceful failover which impacts delta recovery.
Comment by Aleksey Kondratenko [ 21/Aug/14 ]
And predictably happens with any stop/restart of graceful failover.
Comment by Parag Agarwal [ 21/Aug/14 ]
When the warning is showing ("Rebalance required, some data is not currently replicated!"), is it expected that delta recovery won't succeed, and is this the correct behavior? Asking since we will have to document it as well.

Comment by Aleksey Kondratenko [ 21/Aug/14 ]
The warning has nothing to do with that, and the warning is valid. Midway into a graceful failover you're indeed not balanced.
Comment by Aleksey Kondratenko [ 21/Aug/14 ]
manifest updated here: http://review.couchbase.org/40811

fix merged here: http://review.couchbase.org/40803
Comment by Parag Agarwal [ 22/Aug/14 ]
Tested
Comment by Parag Agarwal [ 22/Aug/14 ]
Test Run:: http://qa.hq.northscale.net/job/centos_x64--02_01--Rebalance-In/6/console




[MB-12036] cbbackup throws error and exits with exception Created: 20/Aug/14  Updated: 21/Aug/14  Resolved: 20/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0, 3.0-Beta
Fix Version/s: 3.0, 3.0-Beta
Security Level: Public

Type: Bug Priority: Test Blocker
Reporter: Ashvinder Singh Assignee: Bin Cui
Resolution: Fixed Votes: 0
Labels: rc2
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: All OSes

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Tested with build: 3.0.0-1174.rel

Setup: 3-node cluster with XDCR to another 3-node cluster.

cbbackup fails when trying to take a full backup of the cluster.
Command executed: /opt/couchbase/bin/cbbackup http://Administrator:password@172.23.106.71:8091 /tmp/backup -m full

Output from cbbackup:

vb_937:abs_high_seqno 0
vb_756:abs_high_seqno 0
vb_982:purge_seqno 0
vb_813:high_seqno 0
vb_921:high_seqno 0
vb_1008:uuid 54486033539771
vb_976:uuid 21079925921752


Exception in thread w2:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
    self.run()
  File "/usr/lib64/python2.6/threading.py", line 484, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/opt/couchbase/lib/python/pump.py", line 286, in run_worker
    source_map, sink_map, self.ctl, curx).run()
  File "/opt/couchbase/lib/python/pump.py", line 344, in run
    rv_batch, batch = self.source.provide_batch()
  File "/opt/couchbase/lib/python/pump_dcp.py", line 113, in provide_batch
    rv, dcp_conn = self.get_dcp_conn()
  File "/opt/couchbase/lib/python/pump_dcp.py", line 395, in get_dcp_conn
    self.setup_dcp_streams()
  File "/opt/couchbase/lib/python/pump_dcp.py", line 475, in setup_dcp_streams
    if int(vbid) not in self.node_vbucket_map:
ValueError: invalid literal for int() with base 10: '751:abs'


The last build in which cbbackup worked was 3.0.0-1157; it was broken as of build 3.0.0-1158.
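The failing value suggests the failure mode (an assumption based on the traceback, not a confirmed diagnosis): the stats output now includes per-vbucket seqno keys containing a colon, and splitting such a key on '_' to recover the vbucket id yields a non-numeric token:

{code}
# Sketch of the apparent failure mode (assumed from the traceback above).
key = "vb_751:abs_high_seqno"    # new-style per-vbucket seqno stat key
vbid = key.split('_')[1]         # -> '751:abs'
int(vbid)                        # ValueError: invalid literal for int() with base 10: '751:abs'

# Stripping the stat-name suffix first recovers the id:
vbid = key.split('_', 1)[1].split(':', 1)[0]   # -> '751'
{code}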

 Comments   
Comment by Bin Cui [ 20/Aug/14 ]
http://review.couchbase.org/#/c/40767/




[MB-12035] Rationalize and 'fix' memcached / ep_engine logging Created: 20/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: techdebt-backlog
Security Level: Public

Type: Bug Priority: Major
Reporter: Dave Rigby Assignee: Chiyoung Seo
Resolution: Unresolved Votes: 0
Labels: memcached
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Is this a Regression?: Yes

 Description   
The log levels used by memcached and ep_engine are extremely inconsistent, and should be fixed.

For example, we have four log levels: Warning, Info, Debug and Detail. The default level appears to be Warning; therefore people have (IMHO incorrectly) logged everything they want to see by default at Warning, even when those messages aren't actually things to warn about. A few examples:

These should be INFO:

    Wed Aug 20 17:42:15.515493 BST 3: (No Engine) Bucket default registered with low priority
    Wed Aug 20 17:42:15.515525 BST 3: (No Engine) Spawning 4 readers, 4 writers, 1 auxIO, 1 nonIO threads
    Wed Aug 20 17:42:15.999738 BST 3: (default) metadata loaded in 483 ms
    Wed Aug 20 17:42:15.999766 BST 3: (default) Enough number of items loaded to enable traffic
    Wed Aug 20 17:42:15.999927 BST 3: (default) warmup completed in 483 ms

These should be INFO at most, maybe even DEBUG:

    Wed Aug 20 17:52:23.692284 BST 3: (default) DCP (Consumer) eq_dcpq:replication:n_0@192.168.0.86->n_1@127.0.0.1:default - (vb 1023) Attempting to add stream with start seqno 0, end seqno 18446744073709551615, vbucket uuid 250149181088775, snap start seqno 0, and snap end seqno 0
    Wed Aug 20 17:52:23.709485 BST 3: (default) DCP (Consumer) eq_dcpq:replication:n_0@192.168.0.86->n_1@127.0.0.1:default - (vb 1023) Attempting to add takeover stream with start seqno 0, end seqno 18446744073709551615, vbucket uuid 140850177473972, snap start seqno 0, and snap end seqno 0

    (there are plenty more examples)...

If we want our default level to be lower (and log more detail by default), then that's probably a fair thing to do, but we *shouldn't* just dump everything under WARNING, as that basically defeats the point of having different log levels.





[MB-12034] GET request to *:8093 causes cbq crash Created: 20/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
curl 'http://10.1.3.176:8093' -v
* About to connect() to 10.1.3.176 port 8093 (#0)
* Trying 10.1.3.176... connected
* Connected to 10.1.3.176 (10.1.3.176) port 8093 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.21.4 (x86_64-unknown-linux-gnu) libcurl/7.21.4 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 10.1.3.176:8093
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< Content-Type: text/plain; charset=utf-8
< Date: Wed, 20 Aug 2014 16:17:28 GMT
< Transfer-Encoding: chunked
<
* Connection #0 to host 10.1.3.176 left intact
* Closing connection #0
Either command or prepared must be provided.


09:13:47.207162 cbq-engine started...
09:13:47.207313 version: 0.7.0
09:13:47.207325 datastore: dir:/tmp/data/
2014/08/20 09:13:53 http: multiple response.WriteHeader calls
2014/08/20 09:13:53 http: multiple response.WriteHeader calls
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x20 pc=0x57034c]

goroutine 5 [running]:
net/http.(*switchWriter).Write(0xc200152da0, 0xc2005a3800, 0x54, 0x800, 0x335bf38a88, ...)
/usr/local/go/src/pkg/net/http/chunked.go:0 +0x5c
bufio.(*Writer).Flush(0xc20015b440, 0xc20017f770, 0x2b855bf38b30)
/usr/local/go/src/pkg/bufio/bufio.go:465 +0xb9
net/http.(*response).Flush(0xc20017f770)
/usr/local/go/src/pkg/net/http/server.go:952 +0x4a
github.com/couchbaselabs/query/server/http.(*httpRequest).Flush(0xc2005b3b00)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/http/http_response.go:216 +0x50
github.com/couchbaselabs/query/server/http.(*httpRequest).writeString(0xc2005b3b00, 0xc200628a00, 0x48, 0xc200628a50)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/http/http_response.go:114 +0x85
github.com/couchbaselabs/query/server/http.(*httpRequest).Fail(0xc2005b3b00, 0xc200153870, 0xc200628a50)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/http/http_response.go:33 +0xa6
github.com/couchbaselabs/query/server.func·003()
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:88 +0xf9
net/http.(*switchWriter).Write(0xc200152da0, 0xc2005a3800, 0xc, 0x800, 0x335bf38d90, ...)
/usr/local/go/src/pkg/net/http/chunked.go:0 +0x5c
bufio.(*Writer).Flush(0xc20015b440, 0xc20017f770, 0x2b855bf38e38)
/usr/local/go/src/pkg/bufio/bufio.go:465 +0xb9
net/http.(*response).Flush(0xc20017f770)
/usr/local/go/src/pkg/net/http/server.go:952 +0x4a
github.com/couchbaselabs/query/server/http.(*httpRequest).Flush(0xc2005b3b00)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/http/http_response.go:216 +0x50
github.com/couchbaselabs/query/server/http.(*httpRequest).writeString(0xc2005b3b00, 0xc200626480, 0xc, 0xc200628960)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/http/http_response.go:114 +0x85
github.com/couchbaselabs/query/server/http.(*httpRequest).Fail(0xc2005b3b00, 0xc200153870, 0xc200628960)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/http/http_response.go:33 +0xa6
github.com/couchbaselabs/query/server.(*Server).serviceRequest(0xc20016c1e0, 0xc20013bbe0, 0xc2005b3b00)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:101 +0x18f
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:80 +0x6c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 1 [IO wait]:
net.runtime_pollWait(0x2b855b536f00, 0x72, 0x0)
/usr/local/go/src/pkg/runtime/znetpoll_linux_amd64.c:118 +0x82
net.(*pollDesc).WaitRead(0xc200180230, 0xb, 0xc200136f90)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:75 +0x31
net.(*netFD).accept(0xc2001801b0, 0x9bc7c8, 0x0, 0xc200136f90, 0xb, ...)
/usr/local/go/src/pkg/net/fd_unix.go:385 +0x2c1
net.(*TCPListener).AcceptTCP(0xc2000002d0, 0x562186, 0x2b855a8c4d00, 0x562186)
/usr/local/go/src/pkg/net/tcpsock_posix.go:229 +0x45
net.(*TCPListener).Accept(0xc2000002d0, 0xc20016c300, 0xc200000040, 0xc2001802d0, 0x0, ...)
/usr/local/go/src/pkg/net/tcpsock_posix.go:239 +0x25
net/http.(*Server).Serve(0xc20016c2b0, 0xc20015be40, 0xc2000002d0, 0x0, 0x0, ...)
/usr/local/go/src/pkg/net/http/server.go:1542 +0x85
net/http.(*Server).ListenAndServe(0xc20016c2b0, 0x0, 0xc200136360)
/usr/local/go/src/pkg/net/http/server.go:1532 +0x9e
github.com/couchbaselabs/query/server/http.(*HttpEndpoint).ListenAndServe(0xc20016c2a0, 0x80b6c0, 0xcfebd8)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/http/http_endpoint.go:36 +0x2a
main.main()
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/main/main.go:61 +0x547

goroutine 2 [syscall]:

goroutine 261 [semacquire]:
sync.runtime_Semacquire(0xc200000328)
/usr/local/go/src/pkg/runtime/zsema_linux_amd64.c:165 +0x2e
sync.(*Cond).Wait(0xc20058f4b0)
/usr/local/go/src/pkg/sync/cond.go:74 +0x95
io.(*pipe).read(0xc20058f480, 0xc2005b6000, 0x1000, 0x1000, 0x0, ...)
/usr/local/go/src/pkg/io/pipe.go:52 +0x1f2
io.(*PipeReader).Read(0xc200000310, 0xc2005b6000, 0x1000, 0x1000, 0x9bc988, ...)
/usr/local/go/src/pkg/io/pipe.go:130 +0x5d
net/http.(*liveSwitchReader).Read(0xc2001802f8, 0xc2005b6000, 0x1000, 0x1000, 0x0, ...)
/usr/local/go/src/pkg/net/http/server.go:205 +0x91
io.(*LimitedReader).Read(0xc20015afa0, 0xc2005b6000, 0x1000, 0x1000, 0x2, ...)
/usr/local/go/src/pkg/io/io.go:394 +0xc0
net/http.(*switchReader).Read(0xc200152cb0, 0xc2005b6000, 0x1000, 0x1000, 0x0, ...)
/usr/local/go/src/pkg/net/http/chunked.go:0 +0x62
bufio.(*Reader).fill(0xc20016c360)
/usr/local/go/src/pkg/bufio/bufio.go:79 +0x10c
bufio.(*Reader).ReadSlice(0xc20016c360, 0x40af0a, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/pkg/bufio/bufio.go:262 +0x202
bufio.(*Reader).ReadLine(0xc20016c360, 0x0, 0x0, 0x0, 0x2b855a8d1c00, ...)
/usr/local/go/src/pkg/bufio/bufio.go:293 +0x61
net/textproto.(*Reader).readLineSlice(0xc2005b51b0, 0xe00558a2b, 0x41f1cf, 0xc20015ca90, 0x899020, ...)
/usr/local/go/src/pkg/net/textproto/reader.go:55 +0x51
net/textproto.(*Reader).ReadLine(0xc2005b51b0, 0xc20015ca90, 0x1000, 0x5, 0x0, ...)
/usr/local/go/src/pkg/net/textproto/reader.go:36 +0x25
net/http.ReadRequest(0xc20016c360, 0xc20015ca90, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/request.go:510 +0x86
net/http.(*conn).readRequest(0xc2001802d0, 0x0, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/server.go:547 +0x1bc
net/http.(*conn).serve(0xc2001802d0)
/usr/local/go/src/pkg/net/http/server.go:1052 +0x398
created by net/http.(*Server).Serve
/usr/local/go/src/pkg/net/http/server.go:1564 +0x266

goroutine 6 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 7 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 8 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 9 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 10 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 11 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 12 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 13 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 14 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 15 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 16 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 17 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 18 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 19 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 20 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 21 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 22 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 23 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 24 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 25 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 26 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 27 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 28 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 29 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 30 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 31 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 32 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 33 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 34 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 35 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 36 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 37 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 38 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 39 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 40 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 41 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 42 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 43 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 44 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 45 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 46 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 47 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 48 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 49 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 50 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 51 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 52 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 53 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 54 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 55 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 56 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 57 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 58 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 59 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 60 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 61 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 62 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 63 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 64 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 65 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 66 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 67 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 68 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 69 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 70 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 71 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 72 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 73 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 74 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 75 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 76 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 77 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 78 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 79 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 80 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 81 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 82 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 83 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 84 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 85 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 86 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 87 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 88 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 89 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 90 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 91 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 92 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 93 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 94 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 95 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 96 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 97 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 98 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 99 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 100 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 101 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 102 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 103 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 104 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 105 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 106 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 107 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 108 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 109 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 110 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 111 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 112 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 113 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 114 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 115 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 116 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 117 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 118 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 119 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 120 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 121 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 122 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 123 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 124 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 125 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 126 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 127 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 128 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 129 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 130 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 131 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 132 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 133 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 134 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 135 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 136 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 137 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 138 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 139 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 140 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 141 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 142 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 143 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 144 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 145 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 146 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 147 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 148 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 149 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 150 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 151 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 152 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 153 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 154 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 155 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 156 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 157 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 158 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 159 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 160 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 161 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 162 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 163 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 164 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 165 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 166 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 167 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 168 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 169 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 170 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 171 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 172 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 173 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 174 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 175 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 176 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 177 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 178 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 179 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 180 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 181 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 182 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 183 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 184 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 185 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 186 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 187 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 188 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 189 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 190 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 191 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 192 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 193 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 194 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 195 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 196 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 197 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 198 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 199 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 200 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 201 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 202 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 203 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 204 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 205 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 206 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 207 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 208 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 209 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 210 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 211 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 212 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 213 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 214 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 215 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 216 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 217 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 218 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 219 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 220 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 221 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 222 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 223 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 224 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 225 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 226 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 227 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 228 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 229 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 230 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 231 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 232 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 233 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 234 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 235 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 236 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

goroutine 237 [chan receive]:
github.com/couchbaselabs/query/server.(*Server).doServe(0xc20016c1e0)
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:79 +0x3c
created by github.com/couchbaselabs/query/server.func·002
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/server.go:73 +0x4a

[goroutines 238-260 omitted: 23 identical 'chan receive' stacks in github.com/couchbaselabs/query/server.(*Server).doServe at server.go:79, created by server.func·002 at server.go:73]
goroutine 262 [IO wait]:
net.runtime_pollWait(0x2b855b536e60, 0x72, 0x0)
/usr/local/go/src/pkg/runtime/znetpoll_linux_amd64.c:118 +0x82
net.(*pollDesc).WaitRead(0xc2001802c0, 0xb, 0xc200136f90)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:75 +0x31
net.(*netFD).Read(0xc200180240, 0xc200603000, 0x8000, 0x8000, 0x0, ...)
/usr/local/go/src/pkg/net/fd_unix.go:195 +0x2b3
net.(*conn).Read(0xc200000040, 0xc200603000, 0x8000, 0x8000, 0x8000, ...)
/usr/local/go/src/pkg/net/net.go:123 +0xc3
io.Copy(0xc2005b5510, 0xc200000318, 0xc2005b5090, 0xc200000040, 0x0, ...)
/usr/local/go/src/pkg/io/io.go:348 +0x1c6
net/http.func·004()
/usr/local/go/src/pkg/net/http/server.go:162 +0x66
created by net/http.(*conn).closeNotify
/usr/local/go/src/pkg/net/http/server.go:168 +0x1c6

goroutine 263 [chan receive]:
github.com/couchbaselabs/query/server/http.func·001()
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/http/http_request.go:93 +0x39
created by github.com/couchbaselabs/query/server/http.newHttpRequest
/root/tuq/gocode/src/github.com/couchbaselabs/query/server/http/http_request.go:95 +0x587




[MB-12033] if query has both UNION and ORDER BY, a nil error appears Created: 20/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP4
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
cbq> select join_day from my_bucket union select join_mo from my_bucket ORDER BY join_mo;
PANIC: runtime error: invalid memory address or nil pointer dereference.
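
A possible workaround to try until this is fixed (an untested sketch; it assumes the DP4 engine accepts a subquery in the FROM clause) is to alias both branches to one column and sort the combined result:

cbq> select u.d from (select join_day as d from my_bucket union select join_mo as d from my_bucket) as u order by u.d;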




[MB-12032] [Debian 7] Storage age about 10-15% higher for a high-priority-only bucket than for a low-priority-only bucket Created: 19/Aug/14  Updated: 20/Aug/14  Resolved: 20/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Venu Uppalapati Assignee: Venu Uppalapati
Resolution: Fixed Votes: 0
Labels: RC2
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Triaged
Is this a Regression?: No

 Description   
Steps to reproduce:
1) Create a low priority bucket and insert 100k items.
2) Get the storage age statistics (see the cbstats sketch below).
3) Delete the low priority bucket and create a high priority bucket; insert 100k items.
4) Get the storage age statistics again.
5) Upon repeating this test, the storage age for the high priority bucket is consistently 10-15% higher than for the low priority bucket.
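
For reference, a minimal sketch of one way to read the storage age statistic in steps 2 and 4 (it assumes the ep-engine stat name ep_storage_age and the default memcached port):

/opt/couchbase/bin/cbstats localhost:11210 all -b <bucket> | grep storage_age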

 Comments   
Comment by Sundar Sridharan [ 19/Aug/14 ]
fix uploaded for review at http://review.couchbase.org/#/c/40741 thanks
Comment by Sundar Sridharan [ 20/Aug/14 ]
Fix has been merged




[MB-12031] MacOS build failure: cbdocloader missing Created: 19/Aug/14  Updated: 19/Aug/14  Due: 20/Aug/14  Resolved: 19/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: clients
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Test Blocker
Reporter: Chris Hillery Assignee: Bin Cui
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
http://builds.hq.northscale.net:8010/builders/mac-x64-300-builder/builds/1140/steps/couchbase-server%20make%20enterprise%20/logs/stdio

DEBUG: sed -i ~ -e s,/opt/couchbase,/Users/buildbot/buildbot_slave/mac-x64-300-builder/build/build/couchdbx-app/Couchbase Server/../..,g bin/tools/cbanalyze-core
Fixing Python lib path in bin/tools/cbdocloader
DEBUG: sed -i ~ -e s,/opt/couchbase,/Users/buildbot/buildbot_slave/mac-x64-300-builder/build/build/couchdbx-app/Couchbase Server/../..,g bin/tools/cbdocloader
sed: bin/tools/cbdocloader: No such file or directory

** BUILD FAILED **


 Comments   
Comment by Bin Cui [ 19/Aug/14 ]
http://review.couchbase.org/#/c/40735/




[MB-12030] Show (existing) new setting for XDCR, i.e., Number of Workers Created: 19/Aug/14  Updated: 20/Aug/14  Resolved: 20/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Anil Kumar Assignee: Aleksey Kondratenko
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Triaged
Is this a Regression?: Unknown

 Description   
Show the existing "Number of Workers" setting on the XDCR settings screen; it is useful for tuning XDCR replication performance.



 Comments   
Comment by Anil Kumar [ 19/Aug/14 ]
This is 'approved' for merging to 3.0 branch and will be included in RC2 build.
Comment by Aleksey Kondratenko [ 20/Aug/14 ]
manifest update here: http://review.couchbase.org/40758
actual change here: http://review.couchbase.org/40736




[MB-12029] Logging required for access.log generation Created: 19/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 2.5.1
Fix Version/s: techdebt-backlog
Security Level: Public

Type: Improvement Priority: Major
Reporter: Patrick Varley Assignee: Chiyoung Seo
Resolution: Unresolved Votes: 0
Labels: customer, supportability, warmup
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency

 Description   
There is no logging around access.log.

It would be good to have the following in memcached.log:

Starting to create access.log in /path/to/log
Swapping old access file with new one
Finished creating access.log with X keys
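
As a quick illustration of the gap (a sketch; the path assumes a default Linux install), grepping the current memcached logs for access-log activity turns up nothing today:

grep -i "access.log" /opt/couchbase/var/lib/couchbase/logs/memcached.log.*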





[MB-12028] janitor may lose entire vbucket if rebalance is stopped shortly after it's moved but before vb map is updated Created: 19/Aug/14  Updated: 19/Aug/14  Resolved: 19/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 2.0, 2.0.1, 2.1.0, 2.2.0, 2.5.0, 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Aleksey Kondratenko Assignee: Aleksey Kondratenko
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
SUBJ. We found this as part of MB-11890.

It should be a somewhat rare case, but MB-11890 indicates that it's not that rare. There is a bug in ns_janitor that incorrectly handles the case where the old master is missing and the fast-forward map chain matches the vbucket states. In this case it's supposed to update the vbucket map to match the fast-forward chain, but it fails to do so and instead keeps the old chain. This causes the real vbucket on the fast-forward master to be dropped and an empty vbucket to be recreated on the old master.


 Comments   
Comment by Aleksey Kondratenko [ 19/Aug/14 ]
http://review.couchbase.org/40732

will update rel-3.0.0.xml manifest shortly
Comment by Aleksey Kondratenko [ 19/Aug/14 ]
manifest updated here: http://review.couchbase.org/40733




[MB-12027] XDCR@next release - Replication Manager: second phase Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release




[MB-12026] [BUG BASH] Removing production views during a reindex renders console unusable Created: 19/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0-Beta
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: tnguyen Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: GZip Archive cblogs2.tar.gz     GZip Archive cblogs3.tar.gz     GZip Archive cblogs.tar.gz     PNG File Screen Shot 2014-08-19 at 4.00.58 PM.png    
Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Yes

 Description   
We've been able to do this in the past with v2.2. During a reindex of Production views, if we delete a view, it causes both the Development Views and Production Views windows to remain at this "Loading" screen. Refreshing, logging out and back in, trying a different node in the cluster, etc. does nothing and you're not able to see the list of designdocs/views.

The attached image shows "Lost connection..." because the only way to get it back to a working state is to restart the server. This is after waiting several minutes for the page to return.

 Comments   
Comment by Aleksey Kondratenko [ 19/Aug/14 ]
Please upload logs of all nodes to help diagnose this.
Comment by Aleksey Kondratenko [ 19/Aug/14 ]
The best way to gather diagnostics is to use the new cluster-wide cbcollectinfo gathering facility. See the Logs section and the "Collect Information" tab.
Comment by Aleksey Kondratenko [ 19/Aug/14 ]
I only see http errors in logs as part of 2 server restarts. I see no obvious signs of issues on node 1 caused by removal of indexes.

And I was unable to reproduce this too.

I'll need you to either:

* post a more verbose description of how you managed to hit this

* or reproduce this again and post cbcollectinfos.
Comment by tnguyen [ 20/Aug/14 ]
The cluster we were using got into a corrupted state (unrelated to this issue), so we wiped it out and started over. The views we were using were huge, so I'm not sure if this could possibly be load related in some way. In any case, we do not have the same setup as we did when we hit this issue. Once we get there, hopefully in the next day or two, I will test again and update.




[MB-12025] XDCR@next release - Integration Testing and Initial Performance Tuning Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 80h
Time Spent: Not Specified
Original Estimate: 80h

Epic Link: XDCR next release




[MB-12024] XDCR@next release - Logging and diagnostics Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12023] XDCR@next release - Hookup XDCR REST server with CLI Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release




[MB-12022] XDCR@next release - Hookup statistics with ns_server stats_collector Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 24h
Time Spent: Not Specified
Original Estimate: 24h

Epic Link: XDCR next release




[MB-12021] XDCR@next release - Erlang XDCR REST interface Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 56h
Time Spent: Not Specified
Original Estimate: 56h

Epic Link: XDCR next release




[MB-12020] XDCR@next release - REST Server Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release

 Description   
Build on top of the admin port:
1. request/response message format defined in protobuf
2. handlers for requests




[MB-12019] XDCR@next release - Replication Manager #1: barebone Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 32h
Time Spent: Not Specified
Original Estimate: 32h

Epic Link: XDCR next release

 Description   
Build on top of the generic FeedManager with XDCR specifics:
1. interface with the Distributed Metadata Service
2. interface with ns_server




[MB-12018] XDCR@next release - Pipeline Factory Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12017] XDCR@next release - Pipeline Runtime Environment: Pipeline Supervisor (Error Handler) Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 24h
Time Spent: Not Specified
Original Estimate: 24h

Epic Link: XDCR next release




[MB-12016] XDCR@next release - Pipeline Runtime: Statistics Manager Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12015] XDCR@next release - Pipeline Runtime: Pipeline Initializer Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 16h
Time Spent: Not Specified
Original Estimate: 16h

Epic Link: XDCR next release




[MB-12014] XDCR@next release - Pipeline Runtime: Checkpoint Manager Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12013] XDCR@next release - Pipeline Runtime : Data Item Tracker Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 16h
Time Spent: Not Specified
Original Estimate: 16h

Epic Link: XDCR next release




[MB-12012] XDCR@next release - Parts : Queue Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 16h
Time Spent: Not Specified
Original Estimate: 16h

Epic Link: XDCR next release




[MB-12011] XDCR@next release - Parts: CAPI nozzle Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12010] XDCR@next release - Parts : XMEM Nozzle Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12009] XDCR@next release - Secondary index refactoring roll out: Testing Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12008] XDCR@next release - Secondary index refactoring roll out to mainline: Merge and Resolve conflicts Created: 19/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12007] geocouch needs a proper license Created: 19/Aug/14  Updated: 20/Aug/14  Resolved: 20/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: view-engine
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Volker Mische Assignee: Volker Mische
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
The geocouch repository doesn't have a proper license file. It's under the Apache License 2.0, but it doesn't say so.

 Comments   
Comment by Volker Mische [ 19/Aug/14 ]
In review http://review.couchbase.org/40727
Comment by Sriram Melkote [ 19/Aug/14 ]
RC exception approved




[MB-12006] Weird Server state after vm wakeup with 3.0 beta Created: 19/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0-Beta
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Michael Nitschinger Assignee: Michael Nitschinger
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: Zip Archive cbcollect.zip     PNG File Screen Shot 2014-08-19 at 11.30.29.png    
Triage: Untriaged
Operating System: Ubuntu 32-bit
Is this a Regression?: Unknown

 Description   
I was experiencing odd behavior on VM wakeup. I had 2 VirtualBox VMs running that I always use, a 2-node cluster. I just woke up 1 VM and expected to see the other down (which was the case). I wanted to fail it over and continue work on a single node, but the failover button is missing and the UI indicates that a rebalance is in progress when it isn't.

I'll attach a screenshot and a cbcollectinfo

 Comments   
Comment by Michael Nitschinger [ 19/Aug/14 ]
Screenshot from the UI
Comment by Michael Nitschinger [ 19/Aug/14 ]
More notes:

- I just restarted the service, but it didn't change, so I suspect it is stuck in a bad state somehow?
- Then I started this 103 machine and it picked it up and all is green, but it is asking for a rebalance.
- The rebalance went through successfully.

Maybe you can pick some info out of the logs, I'm fairly certain though that I didn't shut down the machines during a rebalance. If it was during a rebalance, is this expected to happen?
Comment by Aleksey Kondratenko [ 19/Aug/14 ]
Have you by any chance attempted graceful failover ?
Comment by Aleksey Kondratenko [ 19/Aug/14 ]
And without logs it's going to be hard for me to investigate anything.
Comment by Michael Nitschinger [ 20/Aug/14 ]
You mean before shutting down the VMs? No, I didn't do any failover.

I've attached a cbcollectinfo, what else do you want me to collect?
Comment by Aleksey Kondratenko [ 20/Aug/14 ]
Ah. I misunderstood your description as saying that you did failover but saw rebalance instead. Will take a look at cbcollectinfo which should hopefully be enough to diagnose it.
Comment by Aleksey Kondratenko [ 20/Aug/14 ]
Sorry, but the cbcollectinfo you've provided is useless. It needs to be gathered by root or at least the couchbase user.

And the best way is to use our UI (Logs section, "Collect Logs" tab).
Comment by Michael Nitschinger [ 20/Aug/14 ]
Woops, sorry about that. I had to reset the box, maybe I can reproduce it again.




[MB-12005] vbucket-seqno stats getting timed out during Views DGM test Created: 19/Aug/14  Updated: 22/Aug/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket, view-engine
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Meenakshi Goel Assignee: Nimish Gupta
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-1166-rel

Triage: Triaged
Operating System: Ubuntu 64-bit
Is this a Regression?: Yes

 Description   
Test to Reproduce:
./testrunner -i yourfile.ini -t view.createdeleteview.CreateDeleteViewTests.pending_removal_with_ddoc_ops,ddoc_ops=update,test_with_view=True,num_ddocs=3,num_views_per_ddoc=3,items=200000,nodes_out=1,active_resident_threshold=10,dgm_run=True,eviction_policy=fullEviction,skip_cleanup=true

Steps to Reproduce:
1. Setup a 5-node cluster
2. Rebalance in all nodes
3. Load bucket to achieve dgm 10%
4. Failover 1 node
5. Create Views and perform ddoc update operations
6. Test exits with error during ddoc validation

2014-08-18 04:00:37 | INFO | MainProcess | Cluster_Thread | [rest_client._query] index query url: http://10.3.5.90:8092/default/_design/ddoc_test1/_view/views0?stale=false&connection_timeout=60000&full_set=true
2014-08-18 04:15:37 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] socket error while connecting to http://10.3.5.90:8092/default/_design/ddoc_test1/_view/views0?stale=false&connection_timeout=60000&full_set=true error timed out
2014-08-18 04:15:37 | ERROR | MainProcess | Cluster_Thread | [task.execute] Unexpected Exception Caught
ERROR
[('/usr/lib/python2.7/threading.py', 524, '__bootstrap', 'self.__bootstrap_inner()'), ('/usr/lib/python2.7/threading.py', 551, '__bootstrap_inner', 'self.run()'), ('lib/tasks/taskmanager.py', 31, 'run', 'task.step(self)'), ('lib/tasks/task.py', 56, 'step', 'self.execute(task_manager)'), ('lib/tasks/task.py', 1525, 'execute', 'self.set_exception(e)'), ('lib/tasks/future.py', 264, 'set_exception', 'print traceback.extract_stack()')]
Mon Aug 18 04:15:37 2014
[('/usr/lib/python2.7/threading.py', 524, '__bootstrap', 'self.__bootstrap_inner()'), ('/usr/lib/python2.7/threading.py', 551, '__bootstrap_inner', 'self.run()'), ('testrunner.py', 262, 'run', '**self._Thread__kwargs)'), ('/usr/lib/python2.7/unittest/runner.py', 151, 'run', 'test(result)'), ('/usr/lib/python2.7/unittest/case.py', 391, '__call__', 'return self.run(*args, **kwds)'), ('/usr/lib/python2.7/unittest/case.py', 327, 'run', 'testMethod()'), ('pytests/view/createdeleteview.py', 626, 'pending_removal_with_ddoc_ops', 'self._verify_ddoc_data_all_buckets()'), ('pytests/view/createdeleteview.py', 274, '_verify_ddoc_data_all_buckets', 'result = self.cluster.query_view(self.master, ddoc_name, view.name, query, num_items, bucket)'), ('lib/couchbase/cluster.py', 464, 'query_view', 'return _task.result(timeout)'), ('lib/tasks/future.py', 160, 'result', 'return self.__get_result()'), ('lib/tasks/future.py', 111, '__get_result', 'print traceback.extract_stack()')]
2014-08-18 04:15:37 | WARNING | MainProcess | test_thread | [basetestcase.tearDown] CLEANUP WAS SKIPPED

======================================================================
ERROR: pending_removal_with_ddoc_ops (view.createdeleteview.CreateDeleteViewTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/view/createdeleteview.py", line 626, in pending_removal_with_ddoc_ops
    self._verify_ddoc_data_all_buckets()
  File "pytests/view/createdeleteview.py", line 274, in _verify_ddoc_data_all_buckets
    result = self.cluster.query_view(self.master, ddoc_name, view.name, query, num_items, bucket)
  File "lib/couchbase/cluster.py", line 464, in query_view
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
ServerUnavailableException: unable to reach the host @ 10.3.5.90

Logs:
[couchdb:error,2014-08-18T4:39:41.963,ns_1@10.3.5.90:<0.12717.6>:couch_log:error:44]Set view `default`, replica group `_design/ddoc_test0`, doc loader error
error: {timeout,{gen_server,call,
                                 [<0.12658.6>,
                                  {add_stream,706,0,0,5637,6},
                                  60000]}}
stacktrace: [{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]},
             {couch_dcp_client,enum_docs_since,8,
                 [{file,
                      "/home/buildbot/buildbot_slave/ubuntu-1004-x64-300-builder/build/build/couchdb/src/couch_dcp/src/couch_dcp_client.erl"},
                  {line,246}]},
             {couch_set_view_updater,'-load_changes/8-fun-2-',12,
                 [{file,
                      "/home/buildbot/buildbot_slave/ubuntu-1004-x64-300-builder/build/build/couchdb/src/couch_set_view/src/couch_set_view_updater.erl"},
                  {line,516}]},
             {lists,foldl,3,[{file,"lists.erl"},{line,1248}]},
             {couch_set_view_updater,load_changes,8,
                 [{file,
                      "/home/buildbot/buildbot_slave/ubuntu-1004-x64-300-builder/build/build/couchdb/src/couch_set_view/src/couch_set_view_updater.erl"},
                  {line,589}]},
             {couch_set_view_updater,'-update/8-fun-3-',14,
                 [{file,
                      "/home/buildbot/buildbot_slave/ubuntu-1004-x64-300-builder/build/build/couchdb/src/couch_set_view/src/couch_set_view_updater.erl"},
                  {line,281}]}]

[couchdb:error,2014-08-18T6:46:34.997,ns_1@10.3.5.90:<0.12648.6>:couch_log:error:44]dcp client (<0.12660.6>): vbucket-seqno stats timed out after 2.0 seconds. Waiting...
[couchdb:error,2014-08-18T6:46:39.608,ns_1@10.3.5.90:<0.21856.6>:couch_log:error:44]dcp client (<0.21861.6>): vbucket-seqno stats timed out after 2.0 seconds. Waiting...
[couchdb:error,2014-08-18T6:46:42.611,ns_1@10.3.5.90:<0.21856.6>:couch_log:error:44]dcp client (<0.21861.6>): vbucket-seqno stats timed out after 2.0 seconds. Waiting...

*Observed stacktraces and crashes in logs. Uploading logs.

Live Cluster:
1:10.3.5.90
2:10.3.5.91
3:10.3.5.92
4:10.3.5.93
5:10.3.4.75

 Comments   
Comment by Meenakshi Goel [ 19/Aug/14 ]
https://s3.amazonaws.com/bugdb/jira/MB-12005/f806d72b/10.3.5.90-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12005/8f3bae64/10.3.5.91-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12005/4fc16d8a/10.3.5.92-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12005/16116a7e/10.3.4.75-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-12005/a3e503f6/10.3.5.93-diag.zip
Comment by Sarath Lakshman [ 19/Aug/14 ]
I believe we tried a toy build with a separate connection for stats before in MB-11706, so I don't think I have much more information about this problem.
Comment by Nimish Gupta [ 22/Aug/14 ]
I have a toy build with change for separate connection (http://latestbuilds.hq.couchbase.com/couchbase-server-community_ubunt12-3.0.0-toy-nimish-x86_64_3.0.0-704-toy.deb). Meenakshi, please run the test with this build.
Comment by Meenakshi Goel [ 22/Aug/14 ]
Started test with the toy build.




[MB-12004] XDCR@next release - Secondary index refactor testing Created: 19/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Yu Sui
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 40h
Time Spent: Not Specified
Original Estimate: 40h

Epic Link: XDCR next release




[MB-12003] XDCR@next release - Secondary index refactor #5 : Refactor NewFeed function to a factory class implements FeedFactory Created: 19/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Yu Sui
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 16h
Time Spent: Not Specified
Original Estimate: 16h

Epic Link: XDCR next release




[MB-12002] XDCR@next release - Secondary index refactor #4 : Projector Created: 19/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Yu Sui
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 16h
Time Spent: Not Specified
Original Estimate: 16h

Epic Link: XDCR next release




[MB-12001] Stats Issue - Number of documents is misleading during rebalance with delta recovery Created: 18/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0.1
Security Level: Public

Type: Bug Priority: Major
Reporter: Pavel Paulau Assignee: Chiyoung Seo
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Build 3.0.0-1169

Platform = Physical
OS = CentOS 6.5
CPU = Intel Xeon E5-2630 (24 vCPU)
Memory = 64 GB
Disk = RAID 10 HDD

Attachments: PNG File curr_items.png    
Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: http://ci.sc.couchbase.com/job/ares-dev/45/artifact/
Is this a Regression?: No

 Description   
1 of 4 nodes is being re-added after failover.
500M x 2KB items, 10K mixed ops/sec.

Steps:
1. Failover one of nodes.
2. Add it back.
3. Enable delta recovery.
4. Sleep 20 minutes.
5. Rebalance cluster.

When the rebalance starts, the reported curr_items is significantly greater than the actual number of documents.
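
For reference, a sketch of one way to sample the reported value on a node while the rebalance runs (default ports assumed; compare against the 500M loaded items):

/opt/couchbase/bin/cbstats <node>:11210 all -b default | grep curr_items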

 Comments   
Comment by Anil Kumar [ 19/Aug/14 ]
Stats Issue for new 3.0 feature 'delta node recovery'




[MB-12000] Rebalance progress is confusing during rebalance with delta recovery Created: 18/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0.1
Security Level: Public

Type: Bug Priority: Major
Reporter: Pavel Paulau Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: releasenote
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Build 3.0.0-1169

Platform = Physical
OS = CentOS 6.5
CPU = Intel Xeon E5-2630 (24 vCPU)
Memory = 64 GB
Disk = RAID 10 HDD

Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: http://ci.sc.couchbase.com/job/ares-dev/45/artifact/
Is this a Regression?: No

 Description   
1 of 4 nodes is being re-added after failover.
500M x 2KB items, 10K mixed ops/sec.

Steps:
1. Failover one of nodes.
2. Add it back.
3. Enable delta recovery.
4. Sleep 20 minutes.
5. Rebalance cluster.

Basically 0% progress will be reported because we don't track certain phases like warmup.




[MB-11999] Resident ratio of active items drops from 3% to 0.06% during rebalance with delta recovery Created: 18/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0.1
Security Level: Public

Type: Bug Priority: Major
Reporter: Pavel Paulau Assignee: Chiyoung Seo
Resolution: Unresolved Votes: 0
Labels: performance, releasenote
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Build 3.0.0-1169

Platform = Physical
OS = CentOS 6.5
CPU = Intel Xeon E5-2630 (24 vCPU)
Memory = 64 GB
Disk = RAID 10 HDD

Attachments: PNG File vb_active_resident_items_ratio.png     PNG File vb_replica_resident_items_ratio.png    
Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: http://ci.sc.couchbase.com/job/ares-dev/45/artifact/
Is this a Regression?: No

 Description   
1 of 4 nodes is being re-added after failover.
500M x 2KB items, 10K mixed ops/sec.

Steps:
1. Failover one of nodes.
2. Add it back.
3. Enable delta recovery.
4. Sleep 20 minutes.
5. Rebalance cluster.

Most importantly, it happens due to excessive memory usage.




[MB-11998] Working set is screwed up during rebalance with delta recovery (>95% cache miss rate) Created: 18/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0.1
Security Level: Public

Type: Bug Priority: Major
Reporter: Pavel Paulau Assignee: Chiyoung Seo
Resolution: Unresolved Votes: 0
Labels: performance
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Build 3.0.0-1169

Platform = Physical
OS = CentOS 6.5
CPU = Intel Xeon E5-2630 (24 vCPU)
Memory = 64 GB
Disk = RAID 10 HDD

Attachments: PNG File cache_miss_rate.png    
Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: http://ci.sc.couchbase.com/job/ares-dev/45/artifact/
Is this a Regression?: No

 Description   
1 of 4 nodes is being re-added after failover.
500M x 2KB items, 10K mixed ops/sec.

Steps:
1. Failover one of nodes.
2. Add it back.
3. Enable delta recovery.
4. Sleep 20 minutes.
5. Rebalance cluster.




[MB-11997] Spelling error - beta documentation release notes Created: 18/Aug/14  Updated: 19/Aug/14  Resolved: 19/Aug/14

Status: Closed
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 3.0-Beta
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Don Pinto Assignee: Ruth Harris
Resolution: Won't Fix Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
<uri>
    <iriParsing enabled="true"/>
</uri>

Maybe this should be uriParsing not iriParsing?

Thanks,

 Comments   
Comment by Matt Ingenthron [ 18/Aug/14 ]
iriParsing is correct. I think there's a link in there that will give you more info on International Resource Identifiers.
Comment by Amy Kurtzman [ 19/Aug/14 ]
This element is correct as is.
Comment by Amy Kurtzman [ 19/Aug/14 ]
Nothing to fix here.




[MB-11996] Clicking on Removing Read-Only User does not give a meaningful message in the popup dialog Created: 18/Aug/14  Updated: 20/Aug/14  Resolved: 20/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Raju Suravarjjala Assignee: Pavel Blagodov
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: hostname: 10.3.2.43
Centos 5.8 but this is a generic issue

Attachments: PNG File Screen Shot 2014-08-18 at 4.55.35 PM.png    
Triage: Untriaged
Is this a Regression?: No

 Description   
Go to http://10.3.2.43:8091/index.html
Login as Administrator
Password: password
Go to Settings and Click on on Account management
Create a read only user (raju_read_only)
Try to remove the read-only user. You will see a popup dialog titled "Removing" that asks "Do you want to remove it?"
The popup message does not mention which user will be removed.

Please see the attached screenshot

 Comments   
Comment by Anil Kumar [ 19/Aug/14 ]
Pavel - We can take this minor UI change for RC2.

Message - "Are you sure you want to remove <read-only username>?"
Comment by Pavel Blagodov [ 20/Aug/14 ]
http://review.couchbase.org/40747
Comment by Aleksey Kondratenko [ 20/Aug/14 ]
http://review.couchbase.org/40747




[MB-11995] No way to see the username of the Read only User Created: 18/Aug/14  Updated: 20/Aug/14  Resolved: 20/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Raju Suravarjjala Assignee: Pavel Blagodov
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: hostname: 10.3.2.43
Centos 5.8 but this is a generic issue

Attachments: PNG File Screen Shot 2014-08-18 at 4.35.07 PM.png    
Triage: Untriaged
Is this a Regression?: No

 Description   
Go to http://10.3.2.43:8091/index.html
Login as Administrator
Password: password
Go to Settings and Click on on Account management
Create a read only user (raju_read_only)
Log out and log back in after a while. You will not be able to see the name of the read-only user (clear your cache if the username is displayed).
Please see the screenshot below

 Comments   
Comment by Anil Kumar [ 19/Aug/14 ]
Minor UI fix; we can take it for RC2.

The read-only username which got created needs to be shown:

" <read-only username> is existing user. "
Comment by Pavel Blagodov [ 20/Aug/14 ]
http://review.couchbase.org/40747
Comment by Aleksey Kondratenko [ 20/Aug/14 ]
http://review.couchbase.org/40747




[MB-11994] cbdocloader doesn't load anything Created: 18/Aug/14  Updated: 18/Aug/14  Resolved: 18/Aug/14

Status: Closed
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Pavel Paulau Assignee: Pavel Paulau
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Build 3.0.0-1169

Attachments: Zip Archive 1169.zip    
Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Yes

 Description   
Steps:
1. Install Couchbase Server
2. Follow setup wizard with no major modifications
3. Make sure that both samples buckets are checked.

Eventually observe three empty buckets:
-- beer-sample
-- default
-- gamesim-sample

From UI log:
"Loading sample bucket gamesim-sample failed: {failed_to_load_samples_with_status,
126}"

"Loading sample bucket beer-sample failed: {failed_to_load_samples_with_status,
126}"

The same issue when invoking cbdocloader via "Settings" menu. cbcollectinfo archive is attached.

CLI manipulations:

/opt/couchbase/bin/cbdocloader -h
/opt/couchbase/bin/cbdocloader: line 5: cd: /opt/couchbase/bin/../../lib/python: No such file or directory
/opt/couchbase/bin/cbdocloader: line 15: /root/cbdocloader: No such file or directory
/opt/couchbase/bin/cbdocloader: line 15: exec: /root/cbdocloader: cannot execute: No such file or directory
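
For reference, the documented manual invocation that the wizard effectively wraps (a sketch; host and credentials are placeholders) fails the same way, since the wrapper script's relative python path is broken:

/opt/couchbase/bin/cbdocloader -n 127.0.0.1:8091 -u Administrator -p password -b beer-sample -s 100 /opt/couchbase/samples/beer-sample.zip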


 Comments   
Comment by Bin Cui [ 18/Aug/14 ]
http://review.couchbase.org/#/c/40706/




[MB-11993] XDCR@next release - Secondary index refactor #3 : VbucketRoutine implements Connector interface Created: 18/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Yu Sui
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 8h
Time Spent: Not Specified
Original Estimate: 8h

Epic Link: XDCR next release




[MB-11992] XDCR@next release - Secondary index refactor #2: BucketFeed implements Nozzle Created: 18/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Yu Sui
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 16h
Time Spent: Not Specified
Original Estimate: 16h

Epic Link: XDCR next release




[MB-11991] XDCR@next release - Secondary index refactor #1: KVFeed implements interface Connectable and Nozzle Created: 18/Aug/14  Updated: 20/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Yu Sui
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 8h
Time Spent: Not Specified
Original Estimate: 8h

Epic Link: XDCR next release




[MB-11990] XDCR@next release - Common Interfaces and Generic Classes Created: 18/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: techdebt-backlog
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: sprint1_xdcr
Remaining Estimate: 16h
Time Spent: Not Specified
Original Estimate: 16h

Epic Link: XDCR next release




[MB-11989] XDCR next release Created: 18/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: feature-backlog
Fix Version/s: None
Security Level: Public

Type: Epic Priority: Major
Reporter: Xiaomei Zhang Assignee: Xiaomei Zhang
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Epic Name: XDCR next release
Epic Status: To Do




[MB-11988] Incorrect data file path for mac in 2.5.1 manual Created: 18/Aug/14  Updated: 18/Aug/14  Resolved: 18/Aug/14

Status: Closed
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Major
Reporter: Don Pinto Assignee: Ruth Harris
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
On http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/ page,
 
Under Couchbase data files, we have path for Mac as -
~/Library/Application Support/Couchbase/var/lig/couchbase/data

Note that there is no 'lig' sub-folder. Should be 'lib' as below -
~/Library/Application Support/Couchbase/var/lib/couchbase/data
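
A quick check on a Mac install confirms the corrected path (a sketch; default install location assumed):

ls ~/"Library/Application Support/Couchbase/var/lib/couchbase/data"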


 Comments   
Comment by Ruth Harris [ 18/Aug/14 ]
fixed. and pushed to stage.
Comment by Ruth Harris [ 18/Aug/14 ]
fixed in 2.5




[MB-11987] UI enforces limit on password characters (24), but not tools. Created: 18/Aug/14  Updated: 18/Aug/14

Status: Open
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0-Beta
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Minor
Reporter: Don Pinto Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
If a user sets/resets the password using the CLI tools and that password is longer than 24 characters, they cannot log into the admin console using the UI, since the UI does not allow more than 24 characters.

I know this is a corner case, but it should be fixed and documented (the maximum number of characters allowed in a password is 24).
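
For example, a sketch of how to hit the mismatch from the CLI (flags as documented for couchbase-cli in this release line; the 32-character password is arbitrary):

/opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1:8091 -u Administrator -p password --cluster-init-username=Administrator --cluster-init-password=0123456789abcdef0123456789abcdef

After this succeeds, the UI login form will not accept the 32-character password because of its 24-character limit.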




[MB-11986] Stats for every operations. (prepend append incr decr) Created: 18/Aug/14  Updated: 18/Aug/14  Resolved: 18/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: memcached
Affects Version/s: 2.5.1, 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Major
Reporter: Patrick Varley Assignee: Trond Norbye
Resolution: Duplicate Votes: 0
Labels: stats
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
duplicates MB-7761 Move operations stats out of memcache... Open

 Description   
It would be good to break down stats to the operations used by the client SDKs.




[MB-11985] Couchbase Server 2.x.x fails to start on a server that was upgraded from 2.x to 3.0.0 and then had 3.0.0 uninstalled Created: 18/Aug/14  Updated: 18/Aug/14

Status: Open
Project: Couchbase Server
Component/s: installer
Affects Version/s: 2.5.1, 3.0
Fix Version/s: 3.0.1
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Thuan Nguyen Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: windows server 2008 R2 64-bit

Triage: Untriaged
Operating System: Windows 64-bit
Is this a Regression?: Yes

 Description   
Install Couchbase Server 2.x.x on a Windows Server 2008 R2 64-bit machine.
Upgrade Couchbase Server to 3.0.0-1159.
Uninstall this Couchbase Server 3.0.0.
Install any Couchbase Server 2.x.x on this server; it will not start.

 Comments   
Comment by Thuan Nguyen [ 18/Aug/14 ]
This will block all Windows upgrade jobs from 2.x.x to 3.0.0.




[MB-11984] Intra-cluster replication slows down during intensive ejection Created: 18/Aug/14  Updated: 19/Aug/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0.1
Security Level: Public

Type: Bug Priority: Major
Reporter: Pavel Paulau Assignee: Sundar Sridharan
Resolution: Unresolved Votes: 0
Labels: performance
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Build 3.0.0-1166

Platform = Physical
OS = CentOS 6.5
CPU = Intel Xeon E5-2680 v2 (40 vCPU)
Memory = 256 GB
Disk = RAID 10 SSD

Attachments: PNG File ejections_and_replication_queue.png    
Issue Links:
Relates to
relates to MB-11642 Intra-replication falling far behind ... Reopened
Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: https://s3.amazonaws.com/bugdb/jira/MB-11984/172.23.100.17.zip
https://s3.amazonaws.com/bugdb/jira/MB-11984/172.23.100.18.zip
https://s3.amazonaws.com/bugdb/jira/MB-11984/172.23.100.19.zip
https://s3.amazonaws.com/bugdb/jira/MB-11984/172.23.100.20.zip
https://s3.amazonaws.com/bugdb/jira/MB-11984/172.23.100.21.zip
https://s3.amazonaws.com/bugdb/jira/MB-11984/172.23.100.22.zip
https://s3.amazonaws.com/bugdb/jira/MB-11984/172.23.100.23.zip
https://s3.amazonaws.com/bugdb/jira/MB-11984/172.23.100.24.zip
https://s3.amazonaws.com/bugdb/jira/MB-11984/172.23.100.25.zip
Is this a Regression?: Yes

 Description   
Following up MB-11642.

I noticed that the higher replication queue (10-100K items) is caused by intensive ejection (by intensive I mean 1-2M items ejected per second).

My setup:
9 nodes, 1 bucket, 2 replicas
50K inserts/sec (~1KB docs)
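
For reference, a sketch of one way to sample the ejection rate (it assumes the ep-engine stat name ep_num_value_ejects; take the delta between two readings a second apart):

/opt/couchbase/bin/cbstats <node>:11210 all -b <bucket> | grep ep_num_value_ejects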

 Comments   
Comment by Raju Suravarjjala [ 18/Aug/14 ]
Triage: Moving to 3.0.1 as this is due to millions of documents
Comment by Sundar Sridharan [ 19/Aug/14 ]
Still trying to reproduce; it does not occur with lower set ops/sec.




[MB-11983] Rebalance stuck on a 3.5TB deployment. Created: 18/Aug/14  Updated: 19/Aug/14  Resolved: 19/Aug/14

Status: Closed
Project: Couchbase Server
Component/s: DCP
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Cihan Biyikoglu Assignee: Mike Wiederhold
Resolution: Incomplete Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Version: 3.0.0 enterprise edition (build-913) x64
Ubuntu 12


Triage: Untriaged
Is this a Regression?: Unknown

 Description   
cbcollectinfo
https://s3.amazonaws.com/bugdb/jira/MB-11983/cbc/cbcollectinfo_node32.zip
https://s3.amazonaws.com/bugdb/jira/MB-11983/cbc/cbcollectinfo_node33.zip
https://s3.amazonaws.com/bugdb/jira/MB-11983/cbc/cbcollectinfo_node34.zip
https://s3.amazonaws.com/bugdb/jira/MB-11983/cbc/cbcollectinfo_node35.zip


 Comments   
Comment by Cihan Biyikoglu [ 18/Aug/14 ]
Scenario: rebalancing in a node with full recovery. The smaller db is done, but the large db is stuck at 3.4% for 72 hours.
thanks
-cihan
Comment by Parag Agarwal [ 18/Aug/14 ]
Cihan: why are you using build-913? Is this 973?
Comment by Cihan Biyikoglu [ 18/Aug/14 ]
bit of an old cluster. I can upgrade and try again. should I go 1118?
Comment by Parag Agarwal [ 18/Aug/14 ]
latest build is 1168

are you trying to test on beta?
Comment by Cihan Biyikoglu [ 18/Aug/14 ]
Let's try latest.
-cihan




[MB-11982] Upgrade paths stated in 3.0 beta documentation are inaccurate Created: 18/Aug/14  Updated: 18/Aug/14

Status: Open
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 3.0, 3.0-Beta
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Critical
Reporter: Brent Woodruff Assignee: Ruth Harris
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Current beta documentation states that one must be on Couchbase 2.5 before upgrading to Couchbase 3.0.

From:

http://docs.couchbase.com/prebuilt/couchbase-manual-3.0/Install/upgrading.html

The following are the officially supported upgrade paths for Couchbase Server for both online upgrades or offline upgrades:

    Couchbase 1.8.1 to Couchbase 2.x
    Couchbase 2.0.x to Couchbase 2.1 or higher
    Couchbase 2.1.x to Couchbase 2.2 or higher
    Couchbase 2.2 to Couchbase 2.5 or higher
    Couchbase 2.5 to Couchbase 3.0 or higher


----

The above implies that one must be on Couchbase 2.5 in order to upgrade to Couchbase 3.0. In addition to the upgrade paths listed, one should be able to upgrade from Couchbase 2.x to 3.0. Suggested rewording:

The following are the officially supported upgrade paths for Couchbase Server for both online upgrades or offline upgrades:

    Couchbase 1.8.1 to Couchbase 2.x
    Couchbase 2.x to any later Couchbase 2.x
    Couchbase 2.x to Couchbase 3.0





[MB-11981] [3.0.0-1166-Windows] items are stuck in ep-engine queue Created: 18/Aug/14  Updated: 19/Aug/14  Resolved: 19/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: couchbase-bucket, storage-engine
Affects Version/s: 3.0
Fix Version/s: 3.0.1
Security Level: Public

Type: Bug Priority: Test Blocker
Reporter: Sangharsh Agarwal Assignee: Sundar Sridharan
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Build 3.0.0-1166


Issue Links:
Duplicate
duplicates MB-11948 [Windows]: Simple-test broken - Rebal... Open
Triage: Untriaged
Operating System: Windows 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: [Source]
172.23.107.28 : https://s3.amazonaws.com/bugdb/jira/MB-11981/1b8b74c4/172.23.107.28-8182014-328-diag.zip
172.23.107.28 : https://s3.amazonaws.com/bugdb/jira/MB-11981/88b9ed53/172.23.107.28-diag.txt.gz
172.23.107.28 : https://s3.amazonaws.com/bugdb/jira/MB-11981/f2ec9c16/172.23.107.28-8182014-316-couch.tar.gz
172.23.107.29 : https://s3.amazonaws.com/bugdb/jira/MB-11981/0a9b3f01/172.23.107.29-8182014-342-diag.zip
172.23.107.29 : https://s3.amazonaws.com/bugdb/jira/MB-11981/600d3625/172.23.107.29-diag.txt.gz
172.23.107.29 : https://s3.amazonaws.com/bugdb/jira/MB-11981/ad9338cd/172.23.107.29-8182014-316-couch.tar.gz

[Destination]
172.23.107.30 : https://s3.amazonaws.com/bugdb/jira/MB-11981/526c0e5f/172.23.107.30-diag.txt.gz
172.23.107.30 : https://s3.amazonaws.com/bugdb/jira/MB-11981/95808cfb/172.23.107.30-8182014-317-couch.tar.gz
172.23.107.30 : https://s3.amazonaws.com/bugdb/jira/MB-11981/b2d7477c/172.23.107.30-8182014-358-diag.zip
172.23.107.31 : https://s3.amazonaws.com/bugdb/jira/MB-11981/2942afe4/172.23.107.31-diag.txt.gz
172.23.107.31 : https://s3.amazonaws.com/bugdb/jira/MB-11981/76d55789/172.23.107.31-8182014-318-couch.tar.gz
172.23.107.31 : https://s3.amazonaws.com/bugdb/jira/MB-11981/cc0fbfe9/172.23.107.31-8182014-49-diag.zip
Is this a Regression?: Yes

 Description   
http://qa.sc.couchbase.com/job/win_2012_x64--105_01--uniXDCR_biXDCR-P0/17/consoleFull

[Test Errors]
2014-08-18 03:05:08 | WARNING | MainProcess | Cluster_Thread | [task.check] Not Ready: ep_queue_size 10000 == 0 expected on '172.23.107.29:8091', default bucket
2014-08-18 03:05:11 | WARNING | MainProcess | Cluster_Thread | [task.check] Not Ready: ep_queue_size 10000 == 0 expected on '172.23.107.28:8091', sasl_bucket_1 bucket
2014-08-18 03:05:12 | WARNING | MainProcess | Cluster_Thread | [task.check] Not Ready: ep_queue_size 10000 == 0 expected on '172.23.107.28:8091', default bucket
2014-08-18 03:05:13 | WARNING | MainProcess | Cluster_Thread | [task.check] Not Ready: ep_queue_size 10000 == 0 expected on '172.23.107.29:8091', sasl_bucket_1 bucket
2014-08-18 03:05:13 | WARNING | MainProcess | Cluster_Thread | [task.check] Not Ready: ep_queue_size 10000 == 0 expected on '172.23.107.29:8091', default bucket
2014-08-18 03:05:16 | WARNING | MainProcess | Cluster_Thread | [task.check] Not Ready: ep_queue_size 10000 == 0 expected on '172.23.107.28:8091', sasl_bucket_1 bucket
2014-08-18 03:05:17 | WARNING | MainProcess | Cluster_Thread | [task.check] Not Ready: ep_queue_size 10000 == 0 expected on '172.23.107.28:8091', default bucket
2014-08-18 03:05:18 | WARNING | MainProcess | Cluster_Thread | [task.check] Not Ready: ep_queue_size 10000 == 0 expected on '172.23.107.29:8091', sasl_bucket_1 bucket
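For context, the failing check simply polls the ep_queue_size stat until the disk-write queue drains to zero. A minimal sketch of that kind of drain check, assuming cbstats from the server install and the default data port (the helper functions are mine, not testrunner's):

import subprocess
import time

def ep_queue_size(host, bucket):
    # cbstats prints "key: value" lines; pick out ep_queue_size.
    out = subprocess.check_output(
        ["/opt/couchbase/bin/cbstats", "%s:11210" % host, "all", "-b", bucket])
    for line in out.decode().splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "ep_queue_size":
            return int(value)
    raise KeyError("ep_queue_size not found in stats")

def wait_for_drain(host, bucket, timeout=600):
    # Poll until the disk-write queue is empty or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if ep_queue_size(host, bucket) == 0:
            return True
        time.sleep(5)
    return False  # items stay stuck in the queue, as in this ticket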

 Comments   
Comment by Sangharsh Agarwal [ 18/Aug/14 ]
Strange that the following files are present on 172.23.107.28 under the data folder:

No vbucket files are present for the "default" bucket, and only a few vbucket files for sasl_bucket_1.

[root@cen-1413 data]# ll
total 28
drwxr-xr-x. 2 10500 10513 4096 Aug 18 04:42 default
-rw-r--r--. 1 10500 10513 8271 Aug 18 03:07 _replicator.couch.1
drwxr-xr-x. 2 10500 10513 4096 Aug 18 03:06 sasl_bucket_1
-rw-r--r--. 1 10500 10513 4175 Aug 18 02:46 _users.couch.1
[root@cen-1413 data]# tree
.
├── default
│   └── master.couch.1
├── _replicator.couch.1
├── sasl_bucket_1
│   ├── 1017.couch.1
│   ├── 1018.couch.1
│   ├── 1019.couch.1
│   ├── 1020.couch.1
│   ├── 1021.couch.1
│   ├── 1022.couch.1
│   ├── 1023.couch.1
│   ├── master.couch.1
│   └── stats.json
└── _users.couch.1

2 directories, 12 files
Comment by Chiyoung Seo [ 18/Aug/14 ]
Sundar,

Please work with Sriram to see if this issue is a duplicate of

https://www.couchbase.com/issues/browse/MB-11948
Comment by Cihan Biyikoglu [ 18/Aug/14 ]
Triage: not an RC1 blocker
Comment by Sundar Sridharan [ 18/Aug/14 ]
ep_queue_size sitting at the full item count indicates a failure to persist, which is the same root cause as seen in MB-11948
Comment by Chiyoung Seo [ 18/Aug/14 ]
Sundar,

Then, please work with Sriram to figure out this persistence issue on Windows.

Closing this ticket as a duplicate of MB-11948




[MB-11980] {DCP}:: Bad Replicas leads to Rebalance-out for Max Buckets (failure:: Exit) after a series of Rebalance-in/out operations (with 1, 2, 3, 4 node) Created: 17/Aug/14  Updated: 20/Aug/14  Resolved: 20/Aug/14

Status: Closed
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Parag Agarwal Assignee: Mike Wiederhold
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 10.6.2.144-10.6.2.150

Triage: Untriaged
Link to Log File, atop/blg, CBCollectInfo, Core dump: https://s3.amazonaws.com/bugdb/jira/MB-11980/bad_replicas_max_bucket.tar.gz
Is this a Regression?: Yes

 Description   
1166, 3.0.0

1. Create 1 node cluster
2. Create max buckets allowed (10 buckets)
3. Add 100K items per bucket
4. Rebalance out 1 node with mutations running
5. Rebalance in 1 node with mutations running
6. Rebalance out 2 nodes with mutations running
7. Rebalance in 2 nodes with mutations running
8. Rebalance out 3 nodes with mutations running
9. Rebalance in 3 nodes with mutations running
10. Rebalance out 4 nodes with mutations running
11. Rebalance in 4 nodes with mutations running
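The in/out cycle in steps 4-11 can be driven with couchbase-cli; a minimal sketch, with the parallel mutation load omitted and the credentials and exact flag set assumed against the 3.0 CLI (node addresses are from this ticket's environment):

import subprocess

CLUSTER = "10.6.2.144:8091"
AUTH = ["-u", "Administrator", "-p", "password"]  # assumed credentials
SPARES = ["10.6.2.145:8091", "10.6.2.146:8091",
          "10.6.2.147:8091", "10.6.2.148:8091"]

def rebalance(args):
    subprocess.check_call(
        ["/opt/couchbase/bin/couchbase-cli", "rebalance", "-c", CLUSTER]
        + AUTH + args)

for n in (1, 2, 3, 4):
    nodes = SPARES[:n]
    # Rebalance out n nodes, then rebalance the same nodes back in.
    rebalance(["--server-remove=%s" % s for s in nodes])
    rebalance(["--server-add=%s" % s for s in nodes]
              + ["--server-add-username=Administrator",
                 "--server-add-password=password"])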

After step 11 was complete, we tried a rebalance-out operation for 1 node, and the rebalance exited with a bad-replicas issue.

Bad replicators after rebalance:
Missing = [{'ns_1@10.6.2.146','ns_1@10.6.2.148',11}]
Extras = [] (repeated 1 times)


Test Case::

./testrunner -i ~/run_tests/palm.ini get-cbcollect-info=False,get-logs=False,stop-on-failure=False,get-coredumps=True,force_kill_memached=True,verify_unacked_bytes=True,total_vbuckets=128,std_vbuckets_dist=5,chk_max_items=1000,chk_period=30 -t rebalance.rebalanceinout.RebalanceInOutTests.incremental_rebalance_in_out_with_max_buckets_number,items=10000,default_bucket=False,skip_cleanup=True,vbuckets=128,GROUP=IN_OUT_LONG;P2

This test case will pass, but the issue appears after we run this scenario, so it's a good way of reproducing it. We tested with a simple rebalance-out operation without cleanup. Note that this test case would also fail if we removed the skip_cleanup=True parameter.

Marking the bug as a regression; we weren't expecting a 1-node rebalance-out to fail with bad replicas.

 Comments   
Comment by Raju Suravarjjala [ 18/Aug/14 ]
Triage: Alk, can you please investigate? Looks like a similar area to MB-11979
Comment by Aleksey Kondratenko [ 18/Aug/14 ]
erange thing again:

[ns_server:debug,2014-08-17T13:04:08.588,ns_1@10.6.2.148:dcp_consumer_conn-standard_bucket0-ns_1@10.6.2.146<0.1526.0>:dcp_commands:add_stream:61]Add stream for partition 11, opaque = 0xB, type = add
[ns_server:debug,2014-08-17T13:04:08.588,ns_1@10.6.2.148:dcp_consumer_conn-standard_bucket0-ns_1@10.6.2.146<0.1526.0>:dcp_consumer_conn:handle_call:188]Setup DCP streams:
Current [10,12,111]
Streams to open [11]
Streams to close []

[ns_server:debug,2014-08-17T13:04:08.589,ns_1@10.6.2.148:dcp_consumer_conn-standard_bucket0-ns_1@10.6.2.146<0.1526.0>:dcp_proxy:handle_packet:120]Proxy packet: REQUEST: 0x53 (dcp_stream_req) vbucket = 11 opaque = 0xC8000000
80 53 00 00
30 00 00 0B
00 00 00 30
C8 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 02 A3
FF FF FF FF
FF FF FF FF
00 00 EC 70
1D E5 F3 8B
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00

[ns_server:debug,2014-08-17T13:04:08.595,ns_1@10.6.2.148:<0.1527.0>:dcp_proxy:handle_packet:120]Proxy packet: RESPONSE: 0x53 (dcp_stream_req) vbucket = 0 opaque = 0xC8000000 status = 0x22 (erange)
81 53 00 00
00 00 00 22
00 00 00 0D
C8 00 00 00
00 00 00 00
00 00 00 00
4F 75 74 73
69 64 65 20
72 61 6E 67
65
[ns_server:debug,2014-08-17T13:04:08.595,ns_1@10.6.2.148:dcp_consumer_conn-standard_bucket0-ns_1@10.6.2.146<0.1526.0>:dcp_proxy:handle_packet:120]Proxy packet: RESPONSE: 0x51 (dcp_add_stream) vbucket = 0 opaque = 0xB status = 0x22 (erange)
81 51 00 00
04 00 00 22
00 00 00 04
00 00 00 0B
00 00 00 00
00 00 00 00
00 00 00 C8

[ns_server:debug,2014-08-17T13:04:08.595,ns_1@10.6.2.148:dcp_consumer_conn-standard_bucket0-ns_1@10.6.2.146<0.1526.0>:dcp_consumer_conn:maybe_reply_setup_streams:294]Setup stream request completed with {errors,[{34,11}]}.
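For readers unfamiliar with the hex dumps above: they are memcached binary-protocol packets, and the status field in the response header is where the 0x22 (erange) comes from. A minimal sketch (my own illustration, not ns_server code) that decodes the 24-byte response header and the "Outside range" body from the dump above:

import struct

hex_dump = ("81 53 00 00 00 00 00 22 00 00 00 0D C8 00 00 00"
            " 00 00 00 00 00 00 00 00"
            " 4F 75 74 73 69 64 65 20 72 61 6E 67 65")
packet = bytearray(int(b, 16) for b in hex_dump.split())

# magic, opcode, key length, extras length, data type, status, body length, opaque
magic, opcode, keylen, extlen, datatype, status, bodylen, opaque = \
    struct.unpack(">BBHBBHII", bytes(packet[:16]))
body = packet[24:24 + bodylen].decode()

print("opcode=0x%02X status=0x%02X body=%r" % (opcode, status, body))
# -> opcode=0x53 status=0x22 body='Outside range'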
Comment by Mike Wiederhold [ 18/Aug/14 ]
Parag,

Please re-run this test and give me the live cluster. I need to look at the data files.
Comment by Parag Agarwal [ 18/Aug/14 ]
ok
Comment by Parag Agarwal [ 18/Aug/14 ]
10.6.2.144:8091
Comment by Cihan Biyikoglu [ 19/Aug/14 ]
Triage - awaiting update from Mike.
Comment by Mike Wiederhold [ 19/Aug/14 ]
http://review.couchbase.org/#/c/40707/
Comment by Parag Agarwal [ 20/Aug/14 ]
Fixed in build 1174




[MB-11979] {DCP}: Rebalance Exit with dcp_wait_for_data_move_failed after we stop swap rebalance twice (using different priority of buckets) Created: 17/Aug/14  Updated: 19/Aug/14  Resolved: 19/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: couchbase-bucket, ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Parag Agarwal Assignee: Mike Wiederhold
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 10.6.2.144-10.6.2.150

Triage: Untriaged
Link to Log File, atop/blg, CBCollectInfo, Core dump: https://s3.amazonaws.com/bugdb/jira/MB-11979/rebalance_exit_1066.tar.gz
Is this a Regression?: Yes

 Description   
1166, Centos 6x, Ubuntu 12.04

1. Create 5 Node cluster (10.6.2.144-10.6.2.148)
2. Create 3 standard buckets and 2 sasl buckets:
   2 standard and 1 sasl bucket have high priority; 1 standard and 1 sasl bucket have lower priority
3. Add 100K items to each bucket
4. Swap rebalance out 2 nodes (out:: 10.6.2.145, 10.6.2.146 in:: 10.6.2.149, 10.6.2.150)
5. Stop rebalance at 20% and continue the rebalance
6. Stop rebalance at 40%

After step 6, it was observed that any attempt to rebalance would exit with the following message:

Rebalance exited with reason {unexpected_exit,
                              {'EXIT',<0.22236.1>,
                               {dcp_wait_for_data_move_failed,
                                "standard_bucket1",487,'ns_1@10.6.2.144',
                                ['ns_1@10.6.2.148','ns_1@10.6.2.146'],
                                {error,no_stats_for_this_vbucket}}}}

Note that this issue is not observed when the buckets have the same priority

 Comments   
Comment by Chiyoung Seo [ 17/Aug/14 ]
Please ask the ns-server team for the initial investigation as they can point out the root cause easily.
Comment by Anil Kumar [ 18/Aug/14 ]
Triage - Raising this to Blocker; we need to look at this today.
Comment by Aleksey Kondratenko [ 18/Aug/14 ]
Parag, please note that if you stop removing details from log messages (node and timestamp), things could be _much_ easier for me.
Comment by Aleksey Kondratenko [ 18/Aug/14 ]
erange thing again:

[ns_server:debug,2014-08-16T18:14:03.268,ns_1@10.6.2.148:dcp_consumer_conn-standard_bucket1-ns_1@10.6.2.144<0.682.0>:dcp_commands:add_stream:61]Add stream for partition 487, opaque = 0x1E7, type = add
[ns_server:debug,2014-08-16T18:14:03.268,ns_1@10.6.2.148:dcp_consumer_conn-standard_bucket1-ns_1@10.6.2.144<0.682.0>:dcp_consumer_conn:handle_call:188]Setup DCP streams:
Current [428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512]
Streams to open [487]
Streams to close []

[ns_server:debug,2014-08-16T18:14:03.269,ns_1@10.6.2.148:dcp_consumer_conn-standard_bucket1-ns_1@10.6.2.144<0.682.0>:dcp_proxy:handle_packet:120]Proxy packet: REQUEST: 0x53 (dcp_stream_req) vbucket = 487 opaque = 0x80000000
80 53 00 00
30 00 01 E7
00 00 00 30
80 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 62
FF FF FF FF
FF FF FF FF
00 00 80 78
85 F2 26 27
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00

[ns_server:debug,2014-08-16T18:14:03.270,ns_1@10.6.2.148:<0.683.0>:dcp_proxy:handle_packet:120]Proxy packet: RESPONSE: 0x53 (dcp_stream_req) vbucket = 0 opaque = 0x80000000 status = 0x22 (erange)
81 53 00 00
00 00 00 22
00 00 00 0D
80 00 00 00
00 00 00 00
00 00 00 00
4F 75 74 73
69 64 65 20
72 61 6E 67
65
[ns_server:debug,2014-08-16T18:14:03.271,ns_1@10.6.2.148:dcp_consumer_conn-standard_bucket1-ns_1@10.6.2.144<0.682.0>:dcp_proxy:handle_packet:120]Proxy packet: RESPONSE: 0x51 (dcp_add_stream) vbucket = 0 opaque = 0x1E7 status = 0x22 (erange)
81 51 00 00
04 00 00 22
00 00 00 04
00 00 01 E7
00 00 00 00
00 00 00 00
00 00 00 80

[ns_server:debug,2014-08-16T18:14:03.271,ns_1@10.6.2.148:dcp_consumer_conn-standard_bucket1-ns_1@10.6.2.144<0.682.0>:dcp_consumer_conn:maybe_reply_setup_streams:294]Setup stream request completed with {errors,[{34,487}]}.


and because of that:
[ns_server:error,2014-08-16T18:14:29.597,ns_1@10.6.2.144:<0.18640.0>:dcp_replicator:wait_for_data_move_loop:134]No dcp backfill stats for bucket "standard_bucket1", partition 487, connection "replication:ns_1@10.6.2.144->ns_1@10.6.2.148:standard_bucket1"
Comment by Aleksey Kondratenko [ 18/Aug/14 ]
All 3 erange bugs are likely duplicates, but it's up to you to make the final decision on that.
Comment by Chiyoung Seo [ 18/Aug/14 ]
Mike,

It seems to me that all those rebalance issues were caused by the DCP changes that were merged last week.

I'm closing this ticket as a duplicate of

https://www.couchbase.com/issues/browse/MB-11980




[MB-11978] Add undocumented "detailed memcached stats" to the deprecation list. Created: 16/Aug/14  Updated: 18/Aug/14

Status: Open
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 2.5.1
Fix Version/s: None
Security Level: Public

Type: Task Priority: Major
Reporter: Cihan Biyikoglu Assignee: Ruth Harris
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
Add the following under the deprecated list for 2.5.1 and 3.0 as the docs become available at
http://docs.couchbase.com/couchbase-manual-2.5/#deprecated-items

- The undocumented facility for enabling legacy memcached detailed stats through "stats detail on" message is deprecated.
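For reference, the facility in question is driven over the plain memcached ascii protocol; a minimal sketch of issuing it, assuming a reachable memcached-compatible port (the helper is mine):

import socket

def stats_detail(host, port=11210, subcommand="on"):
    # Sends the legacy "stats detail on|off|dump" command being deprecated.
    s = socket.create_connection((host, port))
    try:
        s.sendall(("stats detail %s\r\n" % subcommand).encode())
        return s.recv(4096).decode()
    finally:
        s.close()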

 Comments   
Comment by Cihan Biyikoglu [ 16/Aug/14 ]
Trond, could you confirm that my description is accurate for what we are taking out? Also, is there any REST, CLI, or other tools impact (even unsupported tools we ship) that we need to add?
thanks
Comment by Trond Norbye [ 18/Aug/14 ]
Perhaps reword to:

- The undocumented facility for enabling legacy memcached detailed stats through "stats detail on" and "stats detail dump" is deprecated.
Comment by Trond Norbye [ 18/Aug/14 ]
In addition the following should be added to the release notes about features _removed_ from our server as of 3.0:

The following command-line options are removed from memcached:
 * -h, -i, -p, -l, -U, -B, -r, -s, -a, -u, -n, -f, -M, -m, -I

The following command-line options will be deprecated as of 3.0:
 * -d, -L, -P, -k
Comment by Cihan Biyikoglu [ 18/Aug/14 ]
Great, let's go with this for now; since the details are not documented, I think we are all good.
Ruth, could we add the following line to the deprecation list at the top level? No CLI or REST impact specifically. Thanks!

- The undocumented facility for enabling legacy memcached detailed stats through "stats detail on" and "stats detail dump" is deprecated.




[MB-11977] cbtransfer on CentOS 5.8 (with python 2.4 by default): error: could not import ctypes module Created: 16/Aug/14  Updated: 18/Aug/14  Resolved: 18/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Test Blocker
Reporter: Andrei Baranouski Assignee: Bin Cui
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-1118

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
http://qa.hq.northscale.net/job/centos_x64--33_01--cbtransfer-P1/15/console

[root@centos-58 ~]# /opt/couchbase/bin/cbtransfer couchstore-files:///opt/couchbase/var/lib/couchbase/data/
error: could not import ctypes module
[root@centos-58 ~]# cat /etc/*rele*
cat: /etc/lsb-release.d: Is a directory
CentOS release 5.8 (Final)
[root@centos-58 ~]# python
Python 2.4.3 (#1, Feb 22 2012, 16:05:45)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import ctypes
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ImportError: No module named ctypes

according to http://python.net/crew/theller/ctypes/, ctypes was only included from python 2.5:
"ctypes is an advanced ffi (Foreign Function Interface) package for Python 2.3 and higher. In Python 2.5 it is already included."


 Comments   
Comment by Bin Cui [ 18/Aug/14 ]
For CentOS 5.x, we will include a ctypes module as part of the distribution. Here we need to set the right permissions to access these modules.

http://review.couchbase.org/#/c/40701/
Comment by Bin Cui [ 18/Aug/14 ]
Pushed to the 3.0.0 branch:
http://review.couchbase.org/#/c/40702/
Comment by Bin Cui [ 18/Aug/14 ]
Root cause: the ctypes module is needed not only by pysnappy but also, more broadly, by couchbase-cli on CentOS 5.x, because the default python there is 2.4 and ctypes is only included with python 2.5 and later.

It was caused by change 56c07f4e3c408f8bb0011342e6f3d560453ff43b in voltron. We need ctypes for the CentOS 5 build whether we build pysnappy or not.
Comment by Chris Hillery [ 18/Aug/14 ]
I don't think this diagnosis is correct. pysnappy is always built and required. It's not accurate to say that "ctypes is bundled only if pysnappy is needed"; ctypes is always built if python 2.4 is the default python on the system. Python 2.4 is the default on our CentOS 5.x builders, so ctypes is always included in the centos5 RPMs.

Bin, let's try and talk about this, because I don't understand how this worked before.
Comment by Chris Hillery [ 18/Aug/14 ]
The problem actually appears to be that the build is somehow identifying python as 2.6 on the buildslave. I'm attempting to debug.
Comment by Chris Hillery [ 18/Aug/14 ]
Voltron's Makefile was using the output of "python -V" to determine the python version. Turns out that "python -V" outputs to stderr rather than stdout, so make didn't see the output and fell back to the default of python2.6.
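The behaviour is easy to demonstrate; a minimal sketch that mirrors the Makefile problem in Python (not voltron's actual code):

import subprocess

proc = subprocess.Popen(["python", "-V"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print("stdout: %r" % out)  # empty on python 2.x: the version goes to stderr
print("stderr: %r" % err)  # e.g. 'Python 2.4.3\n'

# The make-level fix is the usual redirect, i.e. `python -V 2>&1`,
# so the version string becomes visible on stdout.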

Fix for 3.0.0 and master: http://review.couchbase.org/#/c/40711/ http://review.couchbase.org/#/c/40712/

Next Centos 5 build should contain ctypes.




[MB-11976] memory usage differs by 1.5 percent when we recreate the buckets and load the same data Created: 16/Aug/14  Updated: 21/Aug/14  Resolved: 21/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0.1
Security Level: Public

Type: Bug Priority: Major
Reporter: Andrei Baranouski Assignee: Abhinav Dangeti
Resolution: Won't Fix Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-1118-rel

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Yes

 Description   
Hi Abhinav,

I know that you wrote this test, and so I'm going straight to you )

http://qa.hq.northscale.net/job/centos_x64--51_01--mem_sanity-P1/8/consoleFull

./testrunner -i /tmp/centos_x64--51_01--mem_sanity-P1.ini get-cbcollect-info=True,get-logs=False,stop-on-failure=False,GROUP=ALL -t memorysanitytests.MemorySanity.repetitive_create_delete,items=200000,repetition_count=5

This test creates a bucket, adds an initial front-end load, checks memory stats, deletes the bucket, and recreates the same scenario repeatedly for the specified number of times; after the last repetition it checks whether memory usage is the same as at the end of the very first front-end load.

2014-08-15 23:36:48 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance params : password=password&ejectedNodes=&user=Administrator&knownNodes=ns_1%4010.1.2.58%2Cns_1%4010.1.2.56%2Cns_1%4010.1.2.57%2Cns_1%4010.1.2.55
2014-08-15 23:36:58 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://10.1.2.55:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=2069&proxyPort=11211&authType=sasl&name=default&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=None
2014-08-15 23:38:20 | INFO | MainProcess | load_gen_task | [task.has_next] Batch create documents done #: 200000 with exp:0
2014-08-15 23:38:26 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Initial max_data_size of bucket 'default': 8678014976
2014-08-15 23:38:26 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] initial memory consumption of bucket 'default' with load: 350739600

remove bucket default ...
2014-08-15 23:38:42 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://10.1.2.55:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=2069&proxyPort=11211&authType=sasl&name=default&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=None
2014-08-15 23:40:01 | INFO | MainProcess | load_gen_task | [task.has_next] Batch create documents done #: 200000 with exp:0
2014-08-15 23:40:07 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Memory used after attempt 1 = 349445216, Difference from initial snapshot: -1294384

remove bucket default ...
2014-08-15 23:40:22 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://10.1.2.55:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=2069&proxyPort=11211&authType=sasl&name=default&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=None
2014-08-15 23:41:48 | INFO | MainProcess | load_gen_task | [task.has_next] Batch create documents done #: 200000 with exp:0
2014-08-15 23:41:54 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Memory used after attempt 2 = 343476928, Difference from initial snapshot: -7262672

remove bucket default ...
2014-08-15 23:42:09 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://10.1.2.55:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=2069&proxyPort=11211&authType=sasl&name=default&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=None
2014-08-15 23:43:33 | INFO | MainProcess | load_gen_task | [task.has_next] Batch create documents done #: 200000 with exp:0
2014-08-15 23:43:39 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Memory used after attempt 3 = 359316944, Difference from initial snapshot: 8577344

remove bucket default ...
2014-08-15 23:43:55 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://10.1.2.55:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=2069&proxyPort=11211&authType=sasl&name=default&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=None
2014-08-15 23:45:15 | INFO | MainProcess | load_gen_task | [task.has_next] Batch create documents done #: 200000 with exp:0
2014-08-15 23:45:21 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Memory used after attempt 4 = 359671952, Difference from initial snapshot: 8932352

remove bucket default ...
2014-08-15 23:45:37 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://10.1.2.55:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=2069&proxyPort=11211&authType=sasl&name=default&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=None
2014-08-15 23:46:59 | INFO | MainProcess | load_gen_task | [task.has_next] Batch create documents done #: 200000 with exp:0
2014-08-15 23:47:15 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] default :: Initial: 350739600 :: Now: 355466704 :: Difference: 4727104

In short:
in the initial attempt we got 350739600 bytes (334.49 MB) mem_used;
in the 5th attempt it was 355466704 bytes (338.99 MB),
so the difference here is about 4.5 MB.

In our tests, you accept a difference of only 100000 bytes (~0.1 MB),
and it seems this was always enough (3.0.0-767-rel http://qa.hq.northscale.net/job/centos_x64--51_01--mem_sanity-P1/2/) because I had not seen this issue before:
https://github.com/couchbase/testrunner/blob/master/pytests/memorysanitytests.py#L38
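For reference, the comparison boils down to snapshotting mem_used from the bucket-stats REST endpoint after each load and recreate; a minimal sketch, with the endpoint shape as I recall it for 2.x/3.0 and the credentials assumed:

import base64
import json
try:
    from urllib2 import Request, urlopen          # python 2 (testrunner era)
except ImportError:
    from urllib.request import Request, urlopen   # python 3

def mem_used(host, bucket="default", user="Administrator", pw="password"):
    req = Request("http://%s:8091/pools/default/buckets/%s/stats"
                  % (host, bucket))
    creds = base64.b64encode(("%s:%s" % (user, pw)).encode()).decode()
    req.add_header("Authorization", "Basic " + creds)
    samples = json.loads(urlopen(req).read().decode())["op"]["samples"]
    return samples["mem_used"][-1]  # latest sample, in bytes

# The test then asserts abs(now - initial) <= 100000 after each recreate.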


 Comments   
Comment by Andrei Baranouski [ 16/Aug/14 ]
https://s3.amazonaws.com/bugdb/jira/MB-11976/f8e52844/10.1.2.55-8152014-2348-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11976/f8e52844/10.1.2.56-8152014-2349-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11976/f8e52844/10.1.2.57-8152014-2349-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11976/f8e52844/10.1.2.58-8152014-2350-diag.zip
Comment by Abhinav Dangeti [ 21/Aug/14 ]
Hey Andrei,

In 3.0, to support features like rollback provided by UPR/DCP, the checkpoints hold on to previous item instances slightly longer, causing a slight memory-usage increase; once they let those references go, memory usage does drop back down to the initial usage.

I verified the results with this test runner code change:
http://review.couchbase.org/#/c/40797

2014-08-21 10:23:09 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 15 secs. ...
2014-08-21 10:23:24 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Initial max_data_size of bucket 'default': 9162457088
2014-08-21 10:23:24 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] initial memory consumption of bucket 'default' with load: 162832728

2014-08-21 10:24:25 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 15 secs. ...
2014-08-21 10:24:40 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Memory used after attempt 1 = 162832728, Difference from initial snapshot: 0

2014-08-21 10:25:32 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 15 secs. ...
2014-08-21 10:25:47 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Memory used after attempt 2 = 162832984, Difference from initial snapshot: 256

2014-08-21 10:26:38 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 15 secs. ...
2014-08-21 10:26:53 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Memory used after attempt 3 = 162833240, Difference from initial snapshot: 512

2014-08-21 10:27:43 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 15 secs. ...
2014-08-21 10:27:59 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Memory used after attempt 4 = 162833016, Difference from initial snapshot: 288

2014-08-21 10:28:49 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 15 secs. ...
2014-08-21 10:29:05 | INFO | MainProcess | test_thread | [memorysanitytests.repetitive_create_delete] Memory used after attempt 5 = 162833112, Difference from initial snapshot: 384




[MB-11975] {DCP}:: Bad Replicas leads to Rebalance Exit after add-back recovery=delta for gracefully failed-over node Created: 16/Aug/14  Updated: 19/Aug/14  Resolved: 19/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: couchbase-bucket, ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Parag Agarwal Assignee: Chiyoung Seo
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 10.6.2.144-10.6.2.150

Triage: Untriaged
Link to Log File, atop/blg, CBCollectInfo, Core dump: https://s3.amazonaws.com/bugdb/jira/MB-11975/1066.rebalance_exit.log.gz
Is this a Regression?: Unknown

 Description   
1166, Centos 6x

Test Case:: /testrunner -i ubuntu_x64.ini get-cbcollect-info=False,get-logs=False,stop-on-failure=False,get-coredumps=True,force_kill_memached=False,verify_unacked_bytes=True,total_vbuckets=128,std_vbuckets_dist=5 -t failover.failovertests.FailoverTests.test_failover_then_add_back,replicas=1,num_failed_nodes=1,items=100000,withMutationOps=True,doc_ops=update,upr_check=False,recoveryType=delta,graceful=True,GROUP=P0;GRACEFUL

1. Create a 7-node cluster
2. Create a default bucket with 100K items
3. Gracefully fail over 1 node with mutations running in parallel
4. Add back with recovery set to delta
5. Rebalance the cluster with mutations running in parallel
Starting rebalance, KeepNodes = ['ns_1@10.6.2.146','ns_1@10.6.2.144',
'ns_1@10.6.2.145','ns_1@10.6.2.147',
'ns_1@10.6.2.148','ns_1@10.6.2.150',
'ns_1@10.6.2.149'], EjectNodes = [], Failed over and being ejected nodes = [], Delta recovery nodes = ['ns_1@10.6.2.146'], Delta recovery buckets = all

Step 5 leads to a rebalance exit with a bad-replicators issue:

Bad replicators after rebalance:
Missing = [{'ns_1@10.6.2.147','ns_1@10.6.2.146',64}]
Extras = []

 ERROR - {u'node': u'ns_1@172.23.107.153', u'code': 2, u'text': u'Rebalance exited with reason {unexpected_exit,\n {\'EXIT\',<0.16515.3>,\n {dcp_wait_for_data_move_failed,"default",83,\n \'ns_1@172.23.107.153\',\n [\'ns_1@172.23.107.156\'],\n {error,no_stats_for_this_vbucket}}}}\n', u'shortText': u'message', u'serverTime': u'2014-08-15T19:43:54.062Z', u'module': u'ns_orchestrator', u'tstamp': 1408157034062, u'type': u'info'}
[2014-08-15 19:39:05,783] - [rest_client:2011] ERROR - {u'node': u'ns_1@172.23.107.153', u'code': 0, u'text': u'<0.16503.3> exited with {unexpected_exit,\n {\'EXIT\',<0.16515.3>,\n {dcp_wait_for_data_move_failed,"default",83,\n \'ns_1@172.23.107.153\',\n [\'ns_1@172.23.107.156\'],\n {error,no_stats_for_this_vbucket}}}}', u'shortText': u'message', u'serverTime': u'2014-08-15T19:43:54.060Z', u'module': u'ns_vbucket_mover', u'tstamp': 1408157034060, u'type': u'critical'}
[2014-08-15 19:39:05,783] - [rest_client:2011] ERROR - {u'node': u'ns_1@172.23.107.153', u'code': 0, u'text': u'Bucket "default" rebalance does not seem to be swap rebalance', u'shortText': u'message', u'serverTime': u'2014-08-15T19:43:53.999Z', u'module': u'ns_vbucket_mover', u'tstamp': 1408157033999, u'type': u'info'}
[2014-08-15 19:39:05,783] - [rest_client:2011] ERROR - {u'node': u'ns_1@172.23.107.153', u'code': 0, u'text': u'Started rebalancing bucket default', u'shortText': u'message', u'serverTime': u'2014-08-15T19:43:53.398Z', u'module': u'ns_rebalancer', u'tstamp': 1408157033398, u'type': u'info'}
[2014-08-15 19:39:05,783] - [rest_client:2011] ERROR - {u'node': u'ns_1@172.23.107.156', u'code': 0, u'text': u'Bucket "default" loaded on node \'ns_1@172.23.107.156\' in 0 seconds.', u'shortText': u'message', u'serverTime': u'2014-08-15T19:43:52.890Z', u'module': u'ns_memcached', u'tstamp': 1408157032890, u'type': u'info'}
[2014-08-15 19:39:05,784] - [rest_client:2011] ERROR - {u'node': u'ns_1@172.23.107.153', u'code': 4, u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.107.153','ns_1@172.23.107.156',\n 'ns_1@172.23.107.154','ns_1@172.23.107.157',\n 'ns_1@172.23.107.155'], EjectNodes = [], Failed over and being ejected nodes = [], Delta recovery nodes = ['ns_1@172.23.107.156'], Delta recovery buckets = all", u'shortText': u'message', u'serverTime': u'2014-08-15T19:43:52.317Z', u'module': u'ns_orchestrator', u'tstamp': 1408157032317, u'type': u'info'}
[2014-08-15 19:39:05,784] - [rest_client:2011] ERROR - {u'node': u'ns_1@172.23.107.156', u'code': 0, u'text': u'Shutting down bucket "default" on \'ns_1@172.23.107.156\' for deletion', u'shortText': u'message', u'serverTime': u'2014-08-15T19:42:03.021Z', u'module': u'ns_memcached', u'tstamp': 1408156923021, u'type': u'info'}
[2014-08-15 19:39:05,784] - [rest_client:2011] ERROR - {u'node': u'ns_1@172.23.107.153', u'code': 0, u'text': u"Failed over 'ns_1@172.23.107.156': ok", u'shortText': u'message', u'serverTime': u'2014-08-15T19:42:02.728Z', u'module': u'ns_rebalancer', u'tstamp': 1408156922728, u'type': u'info'}
[2014-08-15 19:39:05,784] - [rest_client:2011] ERROR - {u'node': u'ns_1@172.23.107.153', u'code': 0, u'text': u"Starting failing over 'ns_1@172.23.107.156'", u'shortText': u'message', u'serverTime': u'2014-08-15T19:42:02.701Z', u'module': u'ns_rebalancer', u'tstamp': 1408156922701, u'type': u'info'}
[2014-08-15 19:39:05,784] - [rest_client:2011] ERROR - {u'node': u'ns_1@172.23.107.153', u'code': 0, u'text': u'Bucket "default" rebalance does not seem to be swap rebalance', u'shortText': u'message', u'serverTime': u'2014-08-15T19:42:01.975Z', u'module': u'ns_vbucket_mover', u'tstamp': 1408156921975, u'type': u'info'}
ERROR

This issue is recent; it was not occurring when last tested with 1160.
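Steps 3-5 map onto the 3.0 CLI roughly as below; a sketch, assuming couchbase-cli's failover/recovery subcommands as of 3.0 (graceful failover being the default) and made-up credentials:

import subprocess

BASE = ["/opt/couchbase/bin/couchbase-cli"]
CLUSTER = ["-c", "10.6.2.144:8091", "-u", "Administrator", "-p", "password"]
NODE = "10.6.2.146:8091"  # the delta-recovery node from this ticket

def cli(cmd, *extra):
    subprocess.check_call(BASE + [cmd] + CLUSTER + list(extra))

cli("failover", "--server-failover=%s" % NODE)   # step 3: graceful failover
cli("recovery", "--server-recovery=%s" % NODE,   # step 4: add back with
    "--recovery-type=delta")                     #         delta recovery
cli("rebalance")                                 # step 5: rebalance cluster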

 Comments   
Comment by Parag Agarwal [ 16/Aug/14 ]
Did not observe a memcached core dump.
Comment by Chiyoung Seo [ 17/Aug/14 ]
Parag,

As this is a rebalance failure, please ask the ns-server team for the initial investigation.
Comment by Anil Kumar [ 18/Aug/14 ]
Triage - This needs to be looked at for 3.0 RC1
Comment by Aleksey Kondratenko [ 18/Aug/14 ]
erange again:

[ns_server:debug,2014-08-16T1:09:53.429,ns_1@10.6.2.146:dcp_consumer_conn-default-ns_1@10.6.2.147<0.21602.0>:dcp_commands:add_stream:61]Add stream for partition 64, opaque = 0x40, type = add
[ns_server:debug,2014-08-16T1:09:53.430,ns_1@10.6.2.146:dcp_consumer_conn-default-ns_1@10.6.2.147<0.21602.0>:dcp_consumer_conn:handle_call:188]Setup DCP streams:
Current [62,63]
Streams to open [64]
Streams to close []

[ns_server:debug,2014-08-16T1:09:53.432,ns_1@10.6.2.146:dcp_consumer_conn-default-ns_1@10.6.2.147<0.21602.0>:dcp_proxy:handle_packet:120]Proxy packet: REQUEST: 0x53 (dcp_stream_req) vbucket = 64 opaque = 0x1D000000
80 53 00 00
30 00 00 40
00 00 00 30
1D 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 03 1C
FF FF FF FF
FF FF FF FF
00 00 7C EB
80 49 F9 6F
00 00 00 00
00 00 03 1B
00 00 00 00
00 00 03 1B

[ns_server:debug,2014-08-16T1:09:53.434,ns_1@10.6.2.146:<0.21603.0>:dcp_proxy:handle_packet:120]Proxy packet: RESPONSE: 0x53 (dcp_stream_req) vbucket = 0 opaque = 0x1D000000 status = 0x22 (erange)
81 53 00 00
00 00 00 22
00 00 00 0D
1D 00 00 00
00 00 00 00
00 00 00 00
4F 75 74 73
69 64 65 20
72 61 6E 67
65
[ns_server:debug,2014-08-16T1:09:53.434,ns_1@10.6.2.146:dcp_consumer_conn-default-ns_1@10.6.2.147<0.21602.0>:dcp_proxy:handle_packet:120]Proxy packet: RESPONSE: 0x51 (dcp_add_stream) vbucket = 0 opaque = 0x40 status = 0x22 (erange)
81 51 00 00
04 00 00 22
00 00 00 04
00 00 00 40
00 00 00 00
00 00 00 00
00 00 00 1D

[ns_server:debug,2014-08-16T1:09:53.434,ns_1@10.6.2.146:dcp_consumer_conn-default-ns_1@10.6.2.147<0.21602.0>:dcp_consumer_conn:maybe_reply_setup_streams:294]Setup stream request completed with {errors,[{34,64}]}.
Comment by Chiyoung Seo [ 18/Aug/14 ]
Duplicate of https://www.couchbase.com/issues/browse/MB-11979
Comment by Chiyoung Seo [ 18/Aug/14 ]
I was confused; this is not a duplicate of that issue. Reopening this ticket.
Comment by Chiyoung Seo [ 18/Aug/14 ]
Duplicate of

https://www.couchbase.com/issues/browse/MB-11980




[MB-11974] investigate influence of tcmalloc TCMALLOC_TRANSFER_NUM_OBJ setting on memory fragmentation and performance Created: 15/Aug/14  Updated: 18/Aug/14

Status: Open
Project: Couchbase Server
Component/s: performance
Affects Version/s: 3.0
Fix Version/s: 3.0.1
Security Level: Public

Type: Bug Priority: Major
Reporter: Aleksey Kondratenko Assignee: Pavel Paulau
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
In MB-11961 we found that -DTCMALLOC_SMALL_BUT_SLOW is actually causing visible degradation in performance.

But in order to limit the aggressiveness of tcmalloc's trading of memory fragmentation for performance, we can still try playing with TCMALLOC_TRANSFER_NUM_OBJ, specifically because in 2.1 the default number of items to transfer was bumped significantly for small memory chunks, which may negatively affect memory fragmentation.

So we should do at least one run with TCMALLOC_TRANSFER_NUM_OBJ set to 40. We can also try larger values, e.g. 512 and 4096.
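A minimal sketch of how such runs could be launched, with the workload entry point left hypothetical (start_run.sh is a placeholder, not a real script):

import os
import subprocess

for num_obj in ("40", "512", "4096"):
    # tcmalloc reads TCMALLOC_* settings from the environment at startup.
    env = dict(os.environ, TCMALLOC_TRANSFER_NUM_OBJ=num_obj)
    subprocess.check_call(["./start_run.sh"], env=env)  # hypothetical driver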

 Comments   
Comment by Raju Suravarjjala [ 18/Aug/14 ]
Triage: Needs extensive testing, retargeting to 3.0.1




[MB-11973] [System test] Seeing odd messages in couchdb logs, where startSeqNo > endSeqNo, expecting a rollback for partition X. Created: 15/Aug/14  Updated: 21/Aug/14  Resolved: 21/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: couchbase-bucket, view-engine
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Ketaki Gangal Assignee: Mike Wiederhold
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Centos 6.4
6 Nodes, 2 Buckets, 1 ddoc X 2 Views
3.0.0-1163-rel

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
1. Load items: ~700M and 630M on the two buckets, 10% dgm
2. During the initial indexing/loading phase, I see a number of log messages that expect rollbacks on partitions due to startSeqNo > endSeqNo.

Not sure why the above should happen, given that there is no rebalance/failover/add-node etc. activity.

Logs from the cluster https://s3.amazonaws.com/bugdb/11973/bug_logs.tar

 Comments   
Comment by Sriram Melkote [ 15/Aug/14 ]
Moving to critical: even though there's no problem as such, if we did indeed roll back after indexing 65k mutations there would be a big performance penalty
Comment by Mike Wiederhold [ 15/Aug/14 ]
I don't know why this was assigned to me. These log messages need to be looked at on the view engine side first.
Comment by Ketaki Gangal [ 15/Aug/14 ]
These have been looked at by the view team; it looks like there is an issue with the endSeqNo that the view component receives from ep-engine, likely seqno resets.

Will wait for view-team to add more details on this however.


Comment by Sarath Lakshman [ 18/Aug/14 ]
Siri, I am currently looking into this bug
Comment by Sarath Lakshman [ 18/Aug/14 ]
One possible problem on the view engine side is that we cache seqno stats every 300ms. I will check if this is due to the low stats-cache update frequency
Comment by Sarath Lakshman [ 18/Aug/14 ]
Looks like it's a problem due to stats caching. Since it is a lazily updated async cache, it triggers a cache update only when the cache TTL has expired, and it returns the old cached value without waiting to fetch the latest. For replica vbuckets, the only time the cache is updated is by the every-5-seconds trigger. At each 5-second trigger, it reads old seqnos from the cache; only the next cache reader sees the asynchronously updated value. But the next reader comes another 5 seconds later, since there are no queries consuming the seqnos. So the updater is always started with roughly 5-second-old seqnos.
Comment by Sarath Lakshman [ 18/Aug/14 ]
Just confirmed from the code that we don't use cached seqnos for the 5-second trigger. I also realized that we don't use cached seqnos for anything other than stale=update_after queries. Hence this doesn't look like a view engine problem.
Comment by Sarath Lakshman [ 18/Aug/14 ]
Looks like there is a problem in EP-Engine for replica vbuckets. I read the log files and found that EP-Engine silently seems to roll back to a seqno lower than what it had for replica vbuckets. I do not see this problem for any active vbuckets.

Example from view engine logs:

The view engine received items from a snapshot for vbucket 386 up to seqno 518299 and wrote the data to the index.
The next time the updater tried to index the next set of items, we received high seqno 518222 from the vbucket-seqno stats:

[couchdb:info,2014-08-15T8:50:18.745,ns_1@10.6.2.166:<0.26830.22>:couch_log:info:41]set view `saslbucket`, replica (prod) group `_design/ddoc2`: received a snapshot marker (on-disk) for partition 386 from sequence 518052 to 518299
[couchdb:info,2014-08-15T8:50:27.357,ns_1@10.6.2.166:<0.26990.22>:couch_log:info:41]dcp client (<0.1755.0>): Expecting a rollback for partition 386. Found start_seqno > end_seqno (518299 > 518222).


I see this pattern only for replica vbuckets. At least in the logs of the node that I investigated, I do not see any such pattern for any of the active vbuckets.

Is it something to do with replication in EP-Engine?
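The condition behind the "Expecting a rollback" log line is just a seqno comparison; a minimal sketch (the names are mine):

def expects_rollback(start_seqno, end_seqno):
    # The updater resumes from the seqno it has already indexed up to; if
    # that is beyond the vbucket's currently reported high seqno, the
    # stream request implies a rollback.
    return start_seqno > end_seqno

assert expects_rollback(518299, 518222)  # replica vbucket 386 above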
Comment by Sarath Lakshman [ 18/Aug/14 ]
I just double-checked that we grab the latest seqno stats (no caching) before starting the updater and use those seqnos as end seqnos.
Comment by Raju Suravarjjala [ 18/Aug/14 ]
Triage: Not a blocker for 3.0 RC1
Comment by Chiyoung Seo [ 18/Aug/14 ]
Mike,

It seems to me that this issue was caused by the recent change that was made in the ep-engine:

http://review.couchbase.org/#/c/40346/

The above commit adapts the replica vbucket so that it uses its closed checkpoint's end seqno as its high seqno for the UPR stream. Did you communicate the above change to the view team?
Comment by Mike Wiederhold [ 18/Aug/14 ]
I'll look into this. The changes made on the ep-engine side shouldn't have affected anyone since nothing changed externally.
Comment by Mike Wiederhold [ 20/Aug/14 ]
http://review.couchbase.org/#/c/40765/




[MB-11972] Erlang crash during loading of data, with ongoing indexing. Created: 15/Aug/14  Updated: 20/Aug/14  Resolved: 20/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: view-engine
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Ketaki Gangal Assignee: Igor Platitsa
Resolution: Incomplete Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: CentOs 6.4

7 Nodes, 2 Buckets, 1 ddoc X 2 Views.

Attachments: File stackTrace_beam.rtf    
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
- Load data on the cluster; during the load, I see the following error message on one of the nodes.


Port server ns_server on node 'babysitter_of_ns_1@127.0.0.1' exited with status 139. Restarting. Messages: Apache CouchDB has started. Time to relax.
working as port
working as port
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed

Attached stack trace from the core.beam.smp

Core.dump https://s3.amazonaws.com/bugdb/11972/core.beam.smp.26785


 Comments   
Comment by Igor Platitsa [ 15/Aug/14 ]
I believe this is a VM, which means there is no physical RAM as such. We can run memory tests on the hypervisor, but that will require some time. I currently have no access to the hypervisors; as soon as I get it, I will take a closer look.


 Xen: 0000000000000000 - 00000000000a0000 (usable)
 Xen: 00000000000a0000 - 0000000000100000 (reserved)
 Xen: 0000000000100000 - 00000005dc000000 (usable)
Booting paravirtualized kernel on Xen
Xen version: 4.1.5 (preserve-AD)
Xen: using vcpu_info placement
Xen: using vcpuop timer interface
installing Xen timer for CPU 0
installing Xen timer for CPU 1
installing Xen timer for CPU 2
installing Xen timer for CPU 3
installing Xen timer for CPU 4
installing Xen timer for CPU 5
installing Xen timer for CPU 6
installing Xen timer for CPU 7
Initialising Xen virtual ethernet driver.
Comment by Ketaki Gangal [ 15/Aug/14 ]
Hi Igor,

Please let me know before you run any tests. This is an active VM and I would like to make sure it does not impact anything currently running.

Comment by Raju Suravarjjala [ 18/Aug/14 ]
Triage: Not a 3.0 RC1 blocker
Comment by Cihan Biyikoglu [ 19/Aug/14 ]
Triage - Sri to followup with Harsha to check the logs one more time but if this does not reliably reproduce we won't take this for 3.0 RC2.
Comment by Harsha Havanur [ 20/Aug/14 ]
Tried on a CentOS EC2 setup; the issue is not reproducible. This seems to be related to the VM setup or infrastructure. We can close this bug as a non-issue.
Comment by Ketaki Gangal [ 20/Aug/14 ]
Assigning this back to Igor.

Hi Igor,

Per our previous discussion, can you take a look at the memory tests on this VM when you get a chance?

thanks,
ketaki
Comment by Igor Platitsa [ 20/Aug/14 ]
Ketaki,

Since it is a VM, we need to schedule a maintenance window (the VMs will not be available during that time). I will talk to the rest of the IT team to schedule this maintenance. Is that considered a blocker?

Regards,
Igor
Comment by Ketaki Gangal [ 20/Aug/14 ]
No, this is not a blocker; it was seen for the first time. I would re-run and re-open before any further shutdown work on this.

Many VMs on this host have been recently re-provisioned, so I would not consider this to be a critical issue currently.




[MB-11971] [make-simple] Sporadic failures on make-simple-upr/tap on warmup-tests Created: 15/Aug/14  Updated: 19/Aug/14  Resolved: 19/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Ketaki Gangal Assignee: Abhinav Dangeti
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Yes

 Description   
http://factory.couchbase.com/job/make-simple-github-upr/758/console

The warmup tests fail with a KeyError while retrieving stats.

Abhinav, could you take a look?

This particular test has been failing intermittently since Aug 14.

 Comments   
Comment by Abhinav Dangeti [ 15/Aug/14 ]
Ketaki, would it be possible to get the logs from this test?
I'm not able to reproduce.
Comment by Chris Hillery [ 15/Aug/14 ]
You can see the output of the failed tests on the Jenkins jobs:

http://factory.couchbase.com/job/make-simple-github-tap/
http://factory.couchbase.com/job/make-simple-github-upr/

If you want server logs, that's a bit harder since the job re-runs on the same workspace every time. I could probably extend the job to save the logs if you tell me exactly what commands would achieve that...
Comment by Abhinav Dangeti [ 15/Aug/14 ]
Test issue: you're trying to get ep_warmup_time while ep_warmup_thread is still running.

Here's the proof:
2014-08-15 13:04:35 | INFO | MainProcess | test_thread | [memcapable._do_warmup] ep_warmup_thread directly after kill_memcached: running
2014-08-15 13:04:35 | ERROR | MainProcess | test_thread | [memcapable._do_warmup] 'ep_warmup_time' was not found in stats:{'ep_item_eviction_policy': 'value_only', 'ep_backfill_mem_threshold': '95', 'ep_dbname': '/root/couchbase30/ns_server/data/n_0/data/default', 'cmd_set': '0', 'vb_replica_queue_memory': '0', 'vb_replica_queue_pending': '0', 'ep_tap_throttle_cap_pcnt': '10', 'libevent': '1.4.13-stable', 'rejected_conns': '0', 'ep_vbucket_del': '0', 'ep_chk_remover_stime': '5', 'ep_dcp_enable_noop': '1', 'ep_tap_bg_fetched': '0', 'ep_meta_data_disk': '0', 'ep_exp_pager_stime': '3600', 'ep_blob_overhead': '621593', 'ep_oom_errors': '0', 'ep_num_value_ejects': '0', 'ep_tap_throttle_threshold': '90', 'cas_hits': '0', 'bytes_written': '26688', 'ep_vb_total': '64', 'ep_diskqueue_drain': '0', 'listen_disabled_num': '0', 'ep_chk_period': '5', 'ep_tap_ack_window_size': '10', 'ep_tap_backoff_period': '5', 'vb_pending_queue_drain': '0', 'ep_max_item_size': '20971520', 'vb_pending_ht_memory': '0', 'ep_warmup_thread': 'running' ....

As the stats show, ep_warmup_time is not present while ep_warmup_thread is still running, which is expected.

Here's your test runner fix:
http://review.couchbase.org/#/c/40661
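The fix amounts to ordering the two stat reads; a minimal sketch of the safe pattern, where get_stats stands in for however the test fetches ep-engine stats (e.g. cbstats warmup):

import time

def warmup_time(get_stats, timeout=60):
    # Wait for warmup to finish before reading ep_warmup_time, which only
    # appears once ep_warmup_thread is no longer "running".
    deadline = time.time() + timeout
    while time.time() < deadline:
        stats = get_stats()
        if stats.get("ep_warmup_thread") == "complete":
            return int(stats["ep_warmup_time"])
        time.sleep(1)
    raise RuntimeError("warmup did not complete within %ss" % timeout)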




[MB-11970] vb_active_perc_mem_resident reports 0% when there are 0 items Created: 15/Aug/14  Updated: 20/Aug/14  Resolved: 20/Aug/14

Status: Resolved
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 2.2.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Bryce Jasmer Assignee: Mike Wiederhold
Resolution: Fixed Votes: 0
Labels: rc2, stats
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
I can argue both ways (0% or 100%) for what vb_active_perc_mem_resident should report when there are no items in the vbucket, but the safer choice for this condition seems to be reporting 100% of the items resident. Reporting 0% suggests a bad situation in which everything has been flushed to disk; but since there is nothing there at all, there is no error: everything that should be in memory is in memory, there just isn't anything to put in memory.
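In code terms the question is what to return for the 0/0 case; a sketch of the behaviour being requested (illustrative, not the ep-engine source):

def perc_mem_resident(num_resident_items, num_total_items):
    if num_total_items == 0:
        # Nothing has been ejected, so report fully resident rather than
        # the alarming-looking 0%.
        return 100.0
    return 100.0 * num_resident_items / num_total_items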


 Comments   
Comment by Mike Wiederhold [ 20/Aug/14 ]
http://review.couchbase.org/#/c/40756/
http://review.couchbase.org/#/c/40759/
Comment by Anil Kumar [ 20/Aug/14 ]
Minor stats issue "approved" to be included for RC2.
Comment by Aleksey Kondratenko [ 20/Aug/14 ]
Hm. I was not aware of the ep-engine side change. The ns_server change is merged and is now part of the 3.0 manifest. http://review.couchbase.org/40761
Comment by Mike Wiederhold [ 20/Aug/14 ]
I'm backporting the ep-engine one, but this issue is related to the UI, so the ep-engine change is not strictly necessary for resolving it, since that stat is not used by the UI.




Generated at Fri Aug 22 08:32:34 CDT 2014 using JIRA 5.2.4#845-sha1:c9f4cc41abe72fb236945343a1f485c2c844dac9.