[MB-11715] Rebalance after failover with views is slow (>6h), couch_view_group_cleanup crashes Created: 11/Jul/14  Updated: 12/Jul/14

Status: Open
Project: Couchbase Server
Component/s: view-engine
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Pavel Paulau Assignee: Pavel Paulau
Resolution: Unresolved Votes: 0
Labels: performance
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-943

Platform = Physical
OS = CentOS 6.5
CPU = Intel Xeon E5-2630 (24 vCPU)
Memory = 64 GB
Disk = 2 x SSD

Attachments: Text File gdb.log    
Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: http://ci.sc.couchbase.com/job/leto/282/artifact/
Is this a Regression?: Yes

 Description   
Rebalance after failover, 3 -> 4, 1 bucket x 100M x 2KB, 1 view, 10K mixed ops/sec, 400 queries/sec

Steps:
1. "Failover" one node.
2. Add it back.
3. Wait 20 minutes
4. Trigger cluster rebalance, wait for rebalance to finish.

Rebalance is 2x slower than in the beta build; indexing of individual vbuckets takes up to 3.5h:

http://cbmonitor.sc.couchbase.com/reports/movements/?filename=e1175a17dae049d38215d5b25ed46f72

couch_view_group_cleanup crashed 3 times during rebalance: "Segmentation fault".

gdb --ex 't a a bt full' /opt/couchbase/bin/couch_view_group_cleanup /tmp/core.couch_view_grou.113377.leto-s304.1405132791 < /dev/null > gdb.log




[MB-11714] 3x regression in XDCR replication latency (LAN) Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Pavel Paulau Assignee: Pavel Paulau
Resolution: Unresolved Votes: 0
Labels: performance
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Build 3.0.0-919

Platform = Physical
OS = CentOS 6.5
CPU = Intel Xeon E5-2680 v2 (40 vCPU)
Memory = 256 GB
Disk = RAID 10 SSD

Link to Log File, atop/blg, CBCollectInfo, Core dump: Before - Beta build 3.0.0-918:
http://ci.sc.couchbase.com/job/xdcr-5x5/339/artifact/

Regression - build 3.0.0-919:
http://ci.sc.couchbase.com/job/xdcr-5x5/353/artifact/
Is this a Regression?: Yes

 Description   
5 -> 5 UniDir, 2 buckets x 500M x 1KB, 10K SETs/sec, LAN
xdcrOptimisticReplicationThreshold = 0

Before: ~60-70ms
After: 220-270ms

List of changes:
http://builder.hq.couchbase.com/#/compare/couchbase-server-enterprise_centos6_x86_64_3.0.0-918-rel.rpm/couchbase-server-enterprise_centos6_x86_64_3.0.0-919-rel.rpm





[MB-11713] UPR logging needs to be improved for view engine and xdcr Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Mike Wiederhold Assignee: Mike Wiederhold
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
After looking at one of the system tests, I found the following log message counts:

[root@soursop-s11205 ~]# cat /opt/couchbase/var/lib/couchbase/logs/memcached.log.* | wc -l
1061224 // Total log messages

[root@soursop-s11205 ~]# cat /opt/couchbase/var/lib/couchbase/logs/memcached.log.* | grep xdcr | wc -l
1033792 // XDCR related upr log messages

[root@soursop-s11205 ~]# cat /opt/couchbase/var/lib/couchbase/logs/memcached.log.* | grep -v xdcr | grep UPR | wc -l
3730 // Rebalance related UPR messages

In this case 97% of all log messages are for XDCR UPR streams.
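
For reference, a quick check of the arithmetic behind the 97% figure (not part of the ticket), in Python:

total, xdcr = 1061224, 1033792
print("XDCR UPR share: %.1f%%" % (100.0 * xdcr / total))  # prints 97.4%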




[MB-11712] Compaction fails constantly for one vbucket because /tmp is not writable Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Mike Wiederhold Assignee: Sundar Sridharan
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Two buckets in my cluster are having problems compacting vbucket 38. I've traced the issue down to compaction trying to do an on-disk merge sort, but the sort fails because it cannot open a file. Below is the output of lsof. The compaction fails after calling this function:

error_pass(TreeWriterSort(ctx.tree_writer));

memcached 16320 couchbase 61r REG 253,2 17350747 4718850 /data/standardbucket1/38.couch.1
memcached 16320 couchbase 73u REG 253,2 0 4718895 /data/standardbucket1/38.couch.1.compact
memcached 16320 couchbase 78u REG 253,2 0 4718896 /data/standardbucket1/38.couch.1.compactTmp (deleted)
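
A minimal diagnostic sketch (not part of couchstore; it only tests the hypothesis in the title) that checks whether the directory assumed to hold the sort's temporary files, /tmp here, is writable by the user running memcached:

import os, tempfile

def dir_writable(path):
    # Try the same kind of operation the merge sort needs: create, then remove, a temp file.
    try:
        fd, name = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.remove(name)
        return True
    except OSError as e:
        print("cannot create temp file in %s: %s" % (path, e))
        return False

print(dir_writable("/tmp"))  # run as the couchbase user, e.g. via sudo -u couchbase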

 Comments   
Comment by Sundar Sridharan [ 11/Jul/14 ]
http://review.couchbase.org/#/c/39042/ fixes this issue as well.




[MB-11711] Lots of get failures from couchkvstore in the logs Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Mike Wiederhold Assignee: Mike Wiederhold
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
In a two-hour period I saw 4560 instances on one node of the message below, which is printed from the couchkvstore multi-get function. We need to decide whether or not this behavior is expected, and if it is, we need a way to suppress these false positives.

Thu Jul 10 15:34:39.384231 PDT 3: (saslbucket) Warning: failed to fetch data from database, vBucket=963 key=B33BB0AF-62_558753 error=document not found [none]

 Comments   
Comment by Chiyoung Seo [ 11/Jul/14 ]
This could happen because of getMeta bg fetches when the XDCR is enabled. We need to lower the log level if that's the case.
Comment by Venu Uppalapati [ 11/Jul/14 ]
Can you provide details on the workload that was run, i.e. cbworkloadgen (with parameters)?




[MB-11710] Upgrade of 1.8.1 to 3.0 RC Build 918 Fails Created: 11/Jul/14  Updated: 11/Jul/14

Status: Reopened
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: feature-backlog
Security Level: Public

Type: Bug Priority: Minor
Reporter: Ian McCloy Assignee: Anil Kumar
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File 18to3upgradefailure.png    
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
I have a 2-node 1.8.1 cluster (with all available hotfixes applied) that I want to upgrade to 3.0 RC Build 918. I offline-upgraded 1 node, but I cannot add it back to the cluster; I get the error message

"Attention - Prepare join failed. Joining 1.8.1-937-rel-enterprise cluster is not supported"
(see screenshot)

I don't know if upgrading from 1.8.1 to 3.0 will be supported; the code in question is at

http://src.couchbase.org/source/xref/3.0.0/ns_server/src/menelaus_web.erl#863

 Comments   
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
Yes, we have officially made the decision to drop 1.8.x compatibility in 3.0.

In general the policy appears to be that we support the previous major version but not necessarily anything older than that.
Comment by Dave Rigby [ 11/Jul/14 ]
Can I suggest we make the error message more helpful and inform the user of how to solve the problem? For example change to something like:

"Attention - Prepare join failed. Joining 1.8.1-937-rel-enterprise cluster is not supported. **Please upgrade the 1.8 node to 2.X and retry the operation**"
Comment by Dave Rigby [ 11/Jul/14 ]
http://review.couchbase.org/#/c/39314/




[MB-11709] Cannot rebalance between 2.x and 3.0 RC Build 908 Created: 11/Jul/14  Updated: 11/Jul/14  Resolved: 11/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: None
Affects Version/s: 2.2.0, 2.5.1, 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Mark Woosey Assignee: Aleksey Kondratenko
Resolution: Cannot Reproduce Votes: 0
Labels: rebalance, upgrade
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Centos 5.10 w/ Server 2.2 + 3.0, Ubuntu 12 w/ Server 2.5.1 + 3.0

Both 4x1 core/1024MB Memory Vagrant VMs

Triage: Untriaged
Flagged:
Impediment
Link to Log File, atop/blg, CBCollectInfo, Core dump: https://s3.amazonaws.com/customers.couchbase.com/mwuk/22030/collectinfo-2014-07-11T153138-ns_1%40192.168.59.104.zip
https://s3.amazonaws.com/customers.couchbase.com/mwuk/22030/node1.zip
https://s3.amazonaws.com/customers.couchbase.com/mwuk/22030/node2.zip
https://s3.amazonaws.com/customers.couchbase.com/mwuk/22030/node3.zip
Is this a Regression?: Unknown

 Description   
When attempting to rebalance (swap _or_ add), the rebalance fails when adding a 3.0 node to a 2.x cluster (and hence during the upgrade process).

In the included logs, node{1:3} are 2.2 and node{4} is 3.0.


 Comments   
Comment by Chiyoung Seo [ 11/Jul/14 ]
Assigning it to the ns-server team for the initial investigation.
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
Duplicate of an already fixed bug.
Comment by David Haikney [ 11/Jul/14 ]
Can we get a link to the duplicate so that we can track and verify the fix? Cheers, DH
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
I'm too busy with lots of other things. The bug does not occur anymore, and it might even be mentioned somewhere in the release notes.




[MB-11708] memcached_version is wrong Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: build
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Bug Priority: Minor
Reporter: Patrick Varley Assignee: Trond Norbye
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: [vagrant@localhost ~]$ uname -a
Linux localhost.localdomain 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@localhost ~]$ /opt/couchbase/bin/cbstats 192.168.71.103:11210 all -b XDCR | grep memcached
 memcached_version: 2.0.1-macosx-233-g8e4053f
[vagrant@localhost ~]$
Is this a Regression?: Unknown

 Description   
On CentOS 6u5 x86_64 the memcached_version is wrong:

[vagrant@localhost ~]$ uname -a
Linux localhost.localdomain 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@localhost ~]$ /opt/couchbase/bin/cbstats 192.168.71.103:11210 all -b XDCR | grep memcached
 memcached_version: 2.0.1-macosx-233-g8e4053f
[vagrant@localhost ~]$


 Comments   
Comment by Dave Rigby [ 11/Jul/14 ]
I had a quick look at this - it's created from the output of `git describe`, which shows the last tag on the memcached tree. It just so happens this is "2.0.1-macosx". We should probably tag the 3.0 branch before it goes out, which will "fix" this.

Either way the hex string (8e4053f) *is* the SHA of the most recent commit on the 3.0 memcached branch, so it is useful to a degree.
Comment by Phil Labee [ 11/Jul/14 ]
This is a code error and not the responsibility of the build team. No part of the product should compute any sort of version string based on git tags, mostly because they are unreliable. This has been our policy for almost a year.

From:

    http://hub.internal.couchbase.com/confluence/display/CR/Branching+and+Tagging

Although "git describe" accurately describes the state of a repo, it may not accurately describe the product being release. If exposed to the end-user, this may cause confusion, so it is important that the output of "git describe" is NOT used to describe the product. This is in Help :: About dialog boxes in UIs, text files like README and VERSION.txt, and in log files and diagnostic tools that report the version of the installed product.
Comment by Phil Labee [ 11/Jul/14 ]
Assuming this is a bug in memcached.

If it's a bug in some other project (e.g. couchbase-cli) please reassign.




[MB-11707] UI - Incoming XDCR graph lacks appropriate title Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: David Haikney Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
(applies to 3.0 build 918)

* Go to Data Buckets Tab and select a bucket
* Under the summary graphs select Outbound XDCR mutations
* Note main graph changes to "outbound XDCR mutations"
* Select Incoming XDCR ops graph
* Title is "total ops/sec"

Even though this may seem trivial, for customer support screenshares it's important to be able to quickly ascertain the graph we're looking at from the title.

 Comments   
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
Thanks for spotting it. We'll fix it.
Comment by Aliaksey Artamonau [ 11/Jul/14 ]
The problem is that we have the xdc_ops stat in two different blocks with different titles, and our UI uses only the stat name to designate which graph is selected.
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
The problem is even more general. E.g. pick the "sets per sec" stat in the Incoming XDCR operations block and observe that the title is a confusing "sets per sec.". I.e. when the stat name is part of its context (being under Incoming XDCR operations) its meaning is clear, but when it's out of context (just a title) it's unclear. E.g.: http://i.imgur.com/iYri37m.png




[MB-11706] Graceful failover gets to 55% then hangs Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: ns_server, view-engine
Affects Version/s: 3.0-Beta
Fix Version/s: 3.0, 3.0-Beta
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Dave Rigby Assignee: Nimish Gupta
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: CentOS, CB 3.0.0 build 918

Attachments: PNG File Screen Shot 2014-07-11 at 15.41.06.png    
Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
Created 3-node cluster (1GB RAM, 2 CPUs), with 4 buckets. Put one bucket under modest cbworkloadgen workload (10,000 items, 13k op/s).

Selected "Graceful Failover" for one of the nodes. Failover started, got to ~55% then just paused (see screenshot). Waited for maybe 30mins but no progress.

Logs uploaded (via "Collect Information" :)

https://s3.amazonaws.com/customers.couchbase.com/daver_graceful_failover_hung/collectinfo-2014-07-11T144818-ns_1%40192.168.73.101.zip
https://s3.amazonaws.com/customers.couchbase.com/daver_graceful_failover_hung/collectinfo-2014-07-11T144818-ns_1%40192.168.73.102.zip
https://s3.amazonaws.com/customers.couchbase.com/daver_graceful_failover_hung/collectinfo-2014-07-11T144818-ns_1%40192.168.73.103.zip






 Comments   
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
We're waiting for views to become up-to-date.
Comment by Sriram Melkote [ 11/Jul/14 ]
Seems like a pretty bad bug. Nimish, please help.




[MB-11705] UI: Bucket dropdown for target buckets Created: 11/Jul/14  Updated: 11/Jul/14  Resolved: 11/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Minor
Reporter: David Haikney Assignee: Aleksey Kondratenko
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
When Creating an XDCR relationship we see the following:

* A pre-populated list of source buckets
* A pre-populated list of XDCR target references
* A manual text box for the target bucket name.

A more user-friendly system would populate a list of candidate target buckets once the destination cluster had been selected. I expect the target cluster can be quickly polled for available buckets.

 Comments   
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
It was requested somewhere already
Comment by David Haikney [ 11/Jul/14 ]
I thought it might have been and I tried a few search terms to try and find it. If you have the reference to the dup, that would be handy. Cheers, DH.




[MB-11704] cbstats has upr stats should it be renamed to dcp? Created: 11/Jul/14  Updated: 11/Jul/14  Resolved: 11/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Trivial
Reporter: Patrick Varley Assignee: Bin Cui
Resolution: Fixed Votes: 0
Labels: stats
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: No

 Description   
I believe we have renamed upr to dcp

[vagrant@localhost ~]$ /opt/couchbase/bin/cbstats localhost:11210
Usage: cbstats [options]

Options:
  -h, --help show this help message and exit
  -a iterate over all buckets (requires admin u/p)
  -b BUCKETNAME the bucket to get stats from (Default: default)
  -p PASSWORD the password for the bucket if one exists
Usage: cbstats host:port all
  or cbstats host:port allocator
  or cbstats host:port checkpoint [vbid]
  or cbstats host:port config
  or cbstats host:port dcp
  or cbstats host:port dcp-takeover vb name
  or cbstats host:port dcpagg
  or cbstats host:port diskinfo [detail]
  or cbstats host:port dispatcher [logs]
  or cbstats host:port failovers [vbid]
  or cbstats host:port hash [detail]
  or cbstats host:port items (memcached bucket only)
  or cbstats host:port key keyname vbid
  or cbstats host:port kvstore
  or cbstats host:port kvtimings
  or cbstats host:port memory
  or cbstats host:port prev-vbucket
  or cbstats host:port raw argument
  or cbstats host:port reset
  or cbstats host:port slabs (memcached bucket only)
  or cbstats host:port tap
  or cbstats host:port tap-takeover vb name
  or cbstats host:port tapagg
  or cbstats host:port timings
  or cbstats host:port upr
  or cbstats host:port upr-takeover vb name
  or cbstats host:port upragg
  or cbstats host:port uuid
  or cbstats host:port vbucket
  or cbstats host:port vbucket-details [vbid]
  or cbstats host:port vbucket-seqno [vbid]
  or cbstats host:port vkey keyname vbid
  or cbstats host:port warmup


 Comments   
Comment by Patrick Varley [ 11/Jul/14 ]
On closer look it has both upr and dcp.
Comment by Mike Wiederhold [ 11/Jul/14 ]
The plan is to remove the upr stats and only have dcp once all other components stop using the upr stats.




[MB-11703] cbcollect_info does not validate the ticket number Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: bug-backlog
Security Level: Public

Type: Bug Priority: Minor
Reporter: Patrick Varley Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: supportability
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: No

 Description   
cbcollect_info should ensure that the ticket is a number.

/opt/couchbase/bin/cbcollect_info --upload-host s3.amazonaws.com/customers.couchbase.com --customer PatRICK --ticket lkjlkjlkj test-3.0.zip

This will help the other work that is going on in support and keep S3 clean.
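
A minimal sketch of the requested check (not the actual cbcollect_info code; it assumes the ticket stays optional and, when given, must be all digits):

import sys

def validate_ticket(ticket):
    # Empty is allowed because the ticket number is optional.
    if ticket and not ticket.isdigit():
        sys.exit("error: --ticket must be a number, got %r" % ticket)
    return ticket

validate_ticket("")           # OK: optional
validate_ticket("12345")      # OK
validate_ticket("lkjlkjlkj")  # exits with an error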

 Comments   
Comment by Thuan Nguyen [ 11/Jul/14 ]
The ticket number is optional. I don't know if we have any format/rule for our ticket numbers that we could enforce.
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
We actually have a rule for that, which is enforced on the Erlang side but not by the "bastard python uploader".




[MB-11702] cluster-wide collectinfo facility should convert the customer name to lower case Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: feature-backlog
Security Level: Public

Type: Improvement Priority: Major
Reporter: Patrick Varley Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: supportability
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
As part of the other work that is going on in support, and to keep S3 clean, the customer name should be converted to lowercase.

/opt/couchbase/bin/cbcollect_info --upload-host s3.amazonaws.com/customers.couchbase.com --customer PatRICK --ticket lkjlkjlkj test-3.0.zip

URL: https://s3.amazonaws.com/customers.couchbase.com/PatRICK/lkjlkjlkj/test-3.0.zip
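
A minimal sketch of the requested normalisation (not the actual collectinfo code; the helper name is illustrative, and it assumes Python 2 as used by cbcollect_info):

import urllib

def build_upload_url(upload_host, customer, ticket, fname):
    customer = customer.strip().lower()  # "PatRICK" -> "patrick"
    parts = [upload_host, urllib.quote(customer), urllib.quote(ticket), fname]
    return "https://" + "/".join(p.strip("/") for p in parts)

print(build_upload_url("s3.amazonaws.com/customers.couchbase.com",
                       "PatRICK", "lkjlkjlkj", "test-3.0.zip"))
# https://s3.amazonaws.com/customers.couchbase.com/patrick/lkjlkjlkj/test-3.0.zip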




[MB-11701] UI graceful option should be greyed out when there are no replicas. Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Patrick Varley Assignee: Anil Kumar
Resolution: Unresolved Votes: 1
Labels: failover
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 2 buckets
beer-sample bucket 1 replica
XDCR bucket 0 replica

Attachments: PNG File Failover.png    
Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: No

 Description   
When a bucket has no replicas it cannot be gracefully failed over.

In the UI we hide the graceful failover button, which I believe is bad UI design; instead we should grey it out and explain that graceful failover is not available without the required replica vBuckets.




[MB-11700] GUI reports Failing over 2 nodes when doing a single graceful fail over Created: 11/Jul/14  Updated: 11/Jul/14  Resolved: 11/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Ian McCloy Assignee: Aleksey Kondratenko
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File 3FailoverStep1.png     PNG File 3FailoverStep2.png     PNG File 3FailoverStep3.png    
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
I have a 2 node cluster, running 3.0 RC, Build 918

I click failover on the 2nd node, 192.168.71.104, choose a Graceful Fail Over.

The GUI reports "Failing over 2 nodes", not 1 node.

This should say "Failing over node 192.168.71.104".

 Comments   
Comment by Anil Kumar [ 11/Jul/14 ]
MB-11451




[MB-11699] Automated cbcollect - unsuccessful upload reported as success Created: 11/Jul/14  Updated: 11/Jul/14  Resolved: 11/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: David Haikney Assignee: Aliaksey Artamonau
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File ss_2014-07-11_at_11.36.03 AM.png     PNG File unsuccessful_upload.png    
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
(Applies to 3.0 build 918)

* Go to Log->Collect Information
* Select Upload to couchbase
* Enter the name of your favourite website (www.couchbase.com, www.bbc.co.uk)
* Enter valid details for customer and ticket number
* Click collect

Collection window appears and logs are reported as having been successfully uploaded to the site. Trying to curl the same URL manually yields a 301 error.
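
A minimal sketch of the missing check (not the ns_server uploader itself; it assumes Python 2's urllib2): only report success when the PUT actually returns a 2xx status.

import urllib2

def upload_ok(url, data):
    req = urllib2.Request(url, data, {"Content-Type": "application/zip"})
    req.get_method = lambda: "PUT"
    try:
        resp = urllib2.urlopen(req)
        return 200 <= resp.getcode() < 300
    except (urllib2.HTTPError, urllib2.URLError) as e:
        # A 301/403/404 (or an unreachable host) means the logs did NOT land where we claim.
        print("upload failed: %s" % e)
        return False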


 Comments   
Comment by Thuan Nguyen [ 11/Jul/14 ]
Tested on build 3.0.0-935 on Ubuntu 12.04 64-bit.
I could not reproduce this bug. I did collect logs and upload them successfully to S3.
Verified that the uploaded file is downloadable.
https://s3.amazonaws.com/packages.couchbase/collect_info/3_0_0/2014_06/test/test3.0.0-935/collectinfo-2014-07-11T182801-ns_1%40192.168.171.148.zip
Comment by David Haikney [ 11/Jul/14 ]
Looks like the test performed was "can I upload successfully?", whereas this defect is about making sure that an attempt to upload to an invalid host returns a correct error.
Comment by Thuan Nguyen [ 11/Jul/14 ]
It may relate to permission to create the folder on the upload host. We could catch the HTTP/1.1 response code.
When I try to create folder 1234, which requires authentication:
  curl -v --upload-file 1234.txt "https://s3.amazonaws.com/1234/1234.txt"
I get errors like the following:

Content-Length: 281106
> Expect: 100-continue
>
< HTTP/1.1 404 Not Found
< x-amz-request-id: F019A41FD
< x-amz-id-2: aGjjCebse1HDzDL/W/UySxxhhR0Nhp9Wkbi1xrrOSdBt4TUUi
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Fri, 11 Jul 2014 19:12:51 GMT
< Connection: close



But if I upload to a folder that was created with authentication (test) and let curl create the sub-folder on the fly during upload, it works OK:
curl -v --upload-file erl_crash.dump "https://s3.amazonaws.com/test/1234/1234.txt"


Content-Length: 281106
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< x-amz-id-2: 7kBFpca+eDCvA4yWGOBwLFsvb/6wTwjYMmWcHZ5EaaN1LQb8h6v
< x-amz-request-id: B3A014F
< Date: Fri, 11 Jul 2014 19:26:23 GMT
< ETag: "b8aa80a750707be2494fe385bd0"
< Content-Length: 0
< Server: AmazonS3

Comment by Aliaksey Artamonau [ 11/Jul/14 ]
http://review.couchbase.org/39331




[MB-11698] Replication failed after enabling 'Metadata on disk' Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Chris Malarky Assignee: Chris Malarky
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Operating System: Ubuntu 64-bit
Is this a Regression?: Unknown

 Description   
Testing Beta RC build 918 in a 4-node cluster. After I changed 'Cache Metadata' from 'Retain' to 'Don't retain' it seems that replication stopped. When I failed over a node I then lost data, as the replicas didn't exist.

 Comments   
Comment by Chris Malarky [ 11/Jul/14 ]
Logs for all four nodes before failover, and the remaining three nodes after.
Comment by Chiyoung Seo [ 11/Jul/14 ]
Chris,

Can you provide more details on your test? For example, did you change the ejection policy in the middle of load and replication? Did you rebalance after the failover? Note that if you don't rebalance after the failover, replicas have fewer items than active vbuckets.




[MB-11697] Automated cbcollect fails silently after 30 seconds Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: David Haikney Assignee: David Haikney
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File 2014-07-11-164340_1920x1048_scrot.png    
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
(Applies to 3.0 build 918)

* Go to Log->Collect Information
* Select "Upload to Couchbase"
* Set the host as s3.mistype.com
* Enter valid details for customer and ticket number.
* Click collect

The Collect button is disabled, but the "collecting" dialog doesn't appear. There are no error messages, and then after approximately 30 seconds the Collect button is enabled again. If the collection process is unsuccessful, we need to provide feedback to the user as to what has happened.

 Comments   
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
Please note that next time we might bounce the ticket back if it doesn't have any diagnostics attached.
Comment by Aliaksey Artamonau [ 11/Jul/14 ]
Attached a screenshot of an error that I got.
Comment by Aliaksey Artamonau [ 11/Jul/14 ]
Please provide the cbcollect_info output.




[MB-11696] Automated cbcollect host error pop-up long and inconsistent Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: David Haikney Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Applies to 3.0 build 918

Invalid hostname should just give the same nice red error alongside the input box rather than a pop-up.

Go to Log->Collect Information
Select "upload to couchbase"
Enter an invalid hostname

A box pops up stating:

"Failed to check reachability of https://www.klfjhasdlkfjhasldkfjhasdlkfjahsdf.com/david/hello/: {failed_connect,
[{to_address,
{"www.klfjhasdlkfjhasldkfjhasdlkfjahsdf.com",
443}},
{inet,
[inet],
nxdomain}]}

An inline red error message of "failed to connect to host" would be much cleaner.




[MB-11695] http_access.log does not track the user name who access the UI from a browser. Created: 11/Jul/14  Updated: 11/Jul/14  Resolved: 11/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: feature-backlog
Security Level: Public

Type: Bug Priority: Critical
Reporter: Patrick Varley Assignee: Aleksey Kondratenko
Resolution: Won't Fix Votes: 0
Labels: log
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: No

 Description   
If you use the command line we track the user; however, if you access the cluster from a browser we do not:

127.0.0.1 - Administrator [11/Jul/2014:11:33:55 +0000] "GET /nodes/self HTTP/1.1" 200 1479 - -
192.168.71.1 - ui-token [11/Jul/2014:11:34:10 +0000] "GET /pools/default HTTP/1.1" 200 5419 http://192.168.71.101:8091/index.html Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
192.168.71.1 - ui-token [11/Jul/2014:11:34:31 +0000] "GET /pools/default HTTP/1.1" 200 5434 http://192.168.71.101:8091/index.html Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
127.0.0.1 - read [11/Jul/2014:11:35:03 +0000] "GET /pools HTTP/1.1" 200 1033 - -
127.0.0.1 - read [11/Jul/2014:11:35:03 +0000] "GET /nodes/self HTTP/1.1" 200 1464 - -

How to reproduce:

1. Create a read-only user called "read".
2. Access the UI from a browser using both read and Administrator.
3. Use both users to access the cluster from the CLI:
/opt/couchbase/bin/couchbase-cli server-info -c localhost:8091 -u Administrator -p password
/opt/couchbase/bin/couchbase-cli server-info -c localhost:8091 -u read -p reader
4. Check the logs using:
sudo /opt/couchbase/bin/cbbrowse_logs http_access.log | less


 Comments   
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
By design. This is _not_ audit log.




[MB-11694] Default host for automated cbcollect Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: David Haikney Assignee: Pavel Blagodov
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
When selecting "upload to couchbase" under the collect logs feature, the host should be defaulted (but still editable) in our Enterprise edition. Otherwise the only place this is documented is the "working with support" wiki page so they would need to remember that URL in order to lookup the right URL they need to supply. We can trivially save them this inconvenience and default it to

s3.amazonaws.com/cb-customers OR cb-customers.s3.amazonaws.com



 Comments   
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
The first option works. Not sure about the second.




[MB-11693] Help message for cbbrowse_logs is wrong Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Trivial
Reporter: Patrick Varley Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: log
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: No

 Description   
[vagrant@localhost ~]$ sudo /opt/couchbase/bin/cbbrowse_logs -h
option log requires 1 arguments
Usage: <progname> -report_dir <dir> [-log <name>]


<progname> should be cbbrowse_logs.




[MB-11692] On server group tab missing white arrow to the menu bar. Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 2.5.0, 2.5.1, 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Trivial
Reporter: Patrick Varley Assignee: Pavel Blagodov
Resolution: Unresolved Votes: 0
Labels: UI
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File Server groups UI.png     PNG File Server Nodes UI.png    
Triage: Untriaged
Is this a Regression?: No

 Description   
See the screenshot attached.

I assume this is because there is no Server group in the menu bar.




[MB-11691] Support Link on Enterprise Edition should point to Support page Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: David Haikney Assignee: Pavel Blagodov
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
The Support Forum link for our Enterprise Edition should link to the actual support forum, not the community page. http://support.couchbase.com

 Comments   
Comment by Anil Kumar [ 11/Jul/14 ]
David - Originally it was meant for getting FAQs from the forums. But your suggestion to add a link to support makes sense.
Comment by Anil Kumar [ 11/Jul/14 ]
Pavel - Let's make the below changes:

On EE

1. Change the existing text from "Support Forum" to just "Support".
2. Change the existing link from "http://www.couchbase.com/communities/" to "http://support.couchbase.com".

On CE

Let's keep the existing UI.




[MB-11690] UI: Add Server - Unfriendly error message when CB not running Created: 11/Jul/14  Updated: 11/Jul/14  Resolved: 11/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: None
Security Level: Public

Type: Improvement Priority: Minor
Reporter: David Haikney Assignee: Aleksey Kondratenko
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File add_server.png    

 Description   
Trying to add a server to a cluster where Couchbase is not running yields "Attention - Failed to reach erlang port mapper at node "192.168.63.102". Error: ehostunreach"

Whilst this is obviously technically correct it's not terribly meaningful to the average customer. A more user-friendly error message would be helpful. e.g. "Failed to contact couchbase service, please check it is running and the ports are accessible"

 Comments   
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
Disagree about unfriendly.

Plus your dumbed-down version omits two important details: that it failed at the talking-to-the-port-mapper stage, and how exactly it failed.

Note that we already have a request to convert ehostunreach and co. to human-readable text a la strerror.
Comment by Anil Kumar [ 11/Jul/14 ]
MB-9171




[MB-11689] [cache metadata]: No indication of what percentage of metadata is in RAM Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket, UI
Affects Version/s: 3.0-Beta
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Dave Rigby Assignee: Chiyoung Seo
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File Screen Shot 2014-07-11 at 11.32.16.png    
Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
For the new Cache Metadata feature we don't appear to give the user any indication of how much metadata has been flushed out to disk.

See attached screenshot - while we do show the absolute amount of RAM used for metadata, there doesn't seem to be any indication of how much of the total is still in RAM.

Note: I had a brief look at the available stats (https://github.com/membase/ep-engine/blob/master/docs/stats.org) and couldn't see a stat for the total metadata size (including what has been flushed to disk); so this may also need ep-engine work if there isn't an underlying stat for it.




[MB-11688] [UI] Bucket summary page should auto-refresh ops and item counts Created: 11/Jul/14  Updated: 11/Jul/14

Status: Reopened
Project: Couchbase Server
Component/s: UI
Affects Version/s: 2.5.1, 3.0-Beta
Fix Version/s: feature-backlog
Security Level: Public

Type: Improvement Priority: Major
Reporter: Dave Rigby Assignee: Anil Kumar
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Client: OS X Mavericks 10.9.3, Safari 7.0.4

Attachments: PNG File auto-refresh 1.png     PNG File auto-refresh 2.png     PNG File auto-refresh 3.png    
Issue Links:
Relates to
relates to MB-11253 UI does not auto-refresh list of buck... Open

 Description   
When running cbworkloadgen against a bucket ("high"), the ops and item counts in the bucket summary don't auto-refresh.

See attached screenshots:

1) cbworkloadgen started, op/s occurring (see dots in terminal) but UI shows 0
2) Refresh web browser, op/s is updated to show correct (instantaneous) value.
3) cbworkloadgen stopped. op/s still shows as a non-zero value.

 Comments   
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
yes by design
Comment by Dave Rigby [ 11/Jul/14 ]
That's inconsistent with the front page, which does show live data. I do think this is potentially confusing to a user. Re-opening as an improvement request.
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
Front page doesn't have potentially hundreds of things to update.
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
In any case, given this is an improvement worth thinking about for the UI redesign, I'm getting this off my queue.
Comment by Anil Kumar [ 11/Jul/14 ]
There are a bunch of UI-not-refreshing improvement tickets (e.g. MB-6897, MB-11253); this will be handled with the UI redesign.
Comment by Dave Rigby [ 11/Jul/14 ]
Neither of the listed bugs - MB-6897 (which has been closed) nor MB-11253 - mentions the bucket summary page, therefore this isn't a duplicate. Reopening, and linking as related to MB-11253 to aid in tracking.




[MB-11687] [cache metadata] No indication on main UI of bucket's metadata setting / impossible for RO user to see Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0-Beta
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Dave Rigby Assignee: Anil Kumar
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File Screen Shot 2014-07-11 at 11.04.39.png    
Triage: Untriaged
Is this a Regression?: No

 Description   
Once a bucket has been created, there is no indication of its "Cache metadata" setting on the "Data Buckets" page (see attached screenshot).

We show replica counts, Access control, compaction etc., so it would be very helpful to users to see what the metadata caching setting of their buckets is in the list.

Note that while the user *could* click "Edit" and see what the setting is, this only works for the Admin user - the read-only user cannot see the metadata setting *at all*.





[MB-11686] [bucket priority] No indication on main UI of bucket's priority / impossible for RO user to see. Created: 11/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0-Beta
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Dave Rigby Assignee: Anil Kumar
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File Screen Shot 2014-07-11 at 11.04.39.png    
Triage: Triaged
Is this a Regression?: No

 Description   
Once a bucket has been created, there is no indication of its priority on the "Data Buckets" page (see attached screenshot).

We show replica counts, Access control, compaction etc., so it would be very helpful to users to see what the priority of their buckets is in the list.

Note that while the user *could* click "Edit" and see what the setting is, this only works for the Admin user - the read-only user cannot see what the priorities are.


 Comments   
Comment by Venu Uppalapati [ 11/Jul/14 ]
There is a new ep_bucket_priority stat in the default ep-engine statistics providing this information.




[MB-11685] windows uninstall failed to remove files Created: 10/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: installer
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Thuan Nguyen Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: windows 2008 R2 64bit

Attachments: PNG File ss_2014-07-10_at_5.48.41 PM.png    
Triage: Untriaged
Operating System: Windows 64-bit
Is this a Regression?: Unknown

 Description   
Install Couchbase Server 3.0.0-936 on Windows Server 2008 R2 64-bit.
Uninstall Couchbase Server 3.0.0-936: the uninstall process did finish, but it did not delete the Couchbase Server files
under c:/Program Files/Couchbase/Server.

Install Couchbase Server 3.0.0-949 on Windows Server 2008 R2 64-bit.
Uninstall Couchbase Server 3.0.0-949. Got the same issue: none of the files are deleted.
IP of the Windows server: 10.1.2.92
The VM is available for debugging.


 Comments   
Comment by Chris Hillery [ 11/Jul/14 ]
Bin, please take a look - I think you're the only one with an InstallShield license.




[MB-11684] Clarify FLUSH on Couchbase bucket vs Memcached bucket Created: 10/Jul/14  Updated: 10/Jul/14

Status: Open
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Improvement Priority: Major
Reporter: Anil Kumar Assignee: Ruth Harris
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
Clarify FLUSH on Couchbase bucket vs Memcached bucket -

"Enable flush allows you to flush data from a bucket. If the bucket is a Couchbase bucket, then all items are immediately removed and the item count immediately becomes zero. If the bucket is a memcached bucket, then all items are immediately marked as expired and, while the item count will not immediately become zero, the expired items cannot be retrieved and will be removed as normal operations on the memcached bucket continue."




[MB-11683] Document the incremental rebalancing operation in 3.0 Created: 10/Jul/14  Updated: 10/Jul/14

Status: Open
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Improvement Priority: Major
Reporter: Anil Kumar Assignee: Ruth Harris
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/#rebalancing - update this section for 3.0 to include the incremental rebalance operation.




[MB-11682] return function in "Writing custom reduce functions" is confusing Created: 10/Jul/14  Updated: 10/Jul/14

Status: Open
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 2.5.1
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Minor
Reporter: Volker Mische Assignee: Ruth Harris
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
The Admin docs for Couchbase Server in the "Writing custom reduce functions" chapter [1] are a bit confusing. They talk about "calling the return() function". Normally in JavaScript (and in almost all of the examples in our docs) we use the return statement. This should be fixed to be consistent, else it leads to confusion (we just had one confused user on IRC).

[1]: http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/index.html#re-writing-the-built-in-reduce-functions




[MB-11681] ns_server.stats.log does not scale with nodes or buckets. Created: 10/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 2.5.1
Fix Version/s: feature-backlog
Security Level: Public

Type: Improvement Priority: Major
Reporter: Patrick Varley Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 1
Labels: supportability
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Large number of nodes.


 Description   
ns_server.stats.log contains the statuses of every node in the cluster from ns_doctor:

[ns_doctor:debug,2014-07-10T0:58:24.510,ns_1@cb-01.lan:ns_doctor<0.3026.0>:ns_doctor:handle_info:167]Current node statuses:

As the number of nodes increases, the ns_doctor output gets bigger, and as a result we lose the historical data that mortimer uses from stats_collector:

[stats:debug,2014-07-10T5:25:03.967,ns_1@cb-01.lan:<0.5614.0>:stats_collector:log_stats:136](at {{2014,7,10},{5,25,3}} (1404984303962)) Stats for bucket "Default":

This means on a large cluster we only get a few hours of data that mortimer can use. We also have this problem when there are a lot of buckets.

It might be worth putting the doctor information into a different file, ns_server.doctor.log, and maybe having a stats file per bucket: ns_server.stats-<BUCKET NAME>.log.

It would be good to get input from other people in the field team.

 Comments   
Comment by Aleksey Kondratenko [ 10/Jul/14 ]
Yes it does, just like all of our logging.

And I cannot see how separate log files can help, given that we're not supposed to eat an unlimited amount of space for logs.
Comment by Patrick Varley [ 10/Jul/14 ]
Well, it is not unlimited; we still have the 200MB limit per file. In this case we would increase the disk space usage of logging by 200MB, plus an extra 200MB per bucket. Disk space is cheap and the logs compress well.

Anyway, it was a suggestion; I do not care how you fix it, but for large clusters I would like to see at least 5 to 7 days' worth of stats and not the 8 hours we have now!
Comment by Aleksey Kondratenko [ 10/Jul/14 ]
I see your point now. If we scale our log requirements by bucket count, it indeed might be reasonable.
Comment by Aleksey Kondratenko [ 10/Jul/14 ]
We'll think about this, but we cannot promise anything in the short to mid term. Meanwhile you can always edit the log rotation settings and increase the log preservation period sufficiently for whatever bucket count you have.




[MB-11680] Graceful failover to continue not allowed after we stop it Created: 09/Jul/14  Updated: 09/Jul/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Parag Agarwal Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: No

 Description   
Any OS, version 918 and above

1. Create 3 node cluster
2. Add 1 default bucket with replica=1
3. Add 500K items
4. Wait till all queues drain and replication is complete
5. Graceful failover one node
6. When Step 5 is executing stop graceful failover using UI button
7. Start Graceful failover again

Step 7 never happens, since after we stop a graceful failover we are only offered a rebalance. By this I mean that continuing the graceful failover is not possible - only a rebalance. We have to pick the node again and graceful-failover it as in Step 5.

https://s3.amazonaws.com/bugdb/jira/MB-11680/log_failover_issue.tar.gz, logs are for version 943




[MB-11679] Docs: Swap rebalance doesn't require an even number of nodes Created: 09/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 2.0, 2.5.1, 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Perry Krug Assignee: Ruth Harris
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
This link: http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/index.html#swap-rebalance (and the corresponding ones in previous versions)

states that a swap rebalance requires an "even number of nodes", which isn't correct. Perhaps it's supposed to be an "equal" number of nodes, but then the section seems a bit redundant, so maybe a little reworking is in order.




[MB-11678] SELECT COUNT(*) FROM BUCKET makes a full scan without WHERE condition Created: 09/Jul/14  Updated: 10/Jul/14  Due: 28/Jul/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP3
Fix Version/s: cbq-DP4
Security Level: Public

Type: Improvement Priority: Major
Reporter: fredericb Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 0
Labels: performance
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
Using N1QL, executing
SELECT COUNT(*) FROM Bucket
produces a full scan of the bucket, even though there is no WHERE condition and the result should therefore be quick.
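
For context (not part of the ticket): the cluster already tracks a per-bucket item count, which is what a WHERE-less COUNT(*) could be answered from instead of a scan. A sketch that reads it over the REST API, assuming the /pools/default/buckets endpoint and Python 2's urllib2 (credentials may be needed on a secured cluster):

import json, urllib2

def bucket_item_count(host, bucket):
    url = "http://%s:8091/pools/default/buckets/%s" % (host, bucket)
    return json.load(urllib2.urlopen(url))["basicStats"]["itemCount"]

print(bucket_item_count("localhost", "default"))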




[MB-11677] Indexes on array elements Created: 09/Jul/14  Updated: 09/Jul/14  Due: 28/Jul/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: cbq-DP3
Fix Version/s: cbq-alpha
Security Level: Public

Type: Improvement Priority: Major
Reporter: Gerald Sangudi Assignee: Gerald Sangudi
Resolution: Unresolved Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
Investigate the ability to index array elements, and to use those indexes for collection-based queries.

See http://www.couchbase.com/communities/q-and-a/how-write-where-condition-arrays-using-index





[MB-11676] [System Test] Indexing stuck during rebalance-In in light DGM setup Created: 09/Jul/14  Updated: 09/Jul/14  Resolved: 09/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: couchbase-bucket, view-engine
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Test Blocker
Reporter: Ketaki Gangal Assignee: Mike Wiederhold
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-931-rel

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
Setup

1. 7 Node cluster, 2 Buckets, each with 1 ddoc, 2 Views
2. Load 140M, 170M items on both the buckets - to achieve dgm - 70%
3. Add 1 Node and rebalance

- After 13+ hours, indexing and consequently rebalancing appear to be stuck.

- Logs contain numerous server timeouts, as seen previously:
[couchdb:error,2014-07-09T9:18:10.856,ns_1@10.6.2.168:<0.675.65>:couch_log:error:42]upr client (default, mapreduce_view: default _design/ddoc1 (prod/main)): Obtaining mutation from server timed out after 60.0 seconds [RequestId 68560, PartId 877, StartSeq 0, EndSeq 79964]. Waiting...

Attaching logs


 Comments   
Comment by Sarath Lakshman [ 09/Jul/14 ]

The view engine requested a stream with startseq=0 and endseq=79964 with the UPR "latest" flag, so ep-engine set the stream endseqno=highseqno=230028.

We received a snapshot marker as follows:

[couchdb:info,2014-07-08T21:08:10.049,ns_1@10.6.2.168:<0.928.65>:couch_log:info:39]set view `default`, main (prod) group `_design/ddoc1`: received a snapshot marker (in-memory) for partition 877 from sequence 0 to 230028

The UPR stream got stuck after that. We did not get a stream_end:

[couchdb:error,2014-07-09T9:26:10.864,ns_1@10.6.2.168:<0.675.65>:couch_log:error:42]upr client (default, mapreduce_view: default _design/ddoc1 (prod/main)): Obtaining mutation from server timed out after 60.0 seconds [RequestId 68560, PartId 877, StartSeq 0, EndSeq 79964]. Waiting...


[root@centos-64-x64 ~]# /opt/couchbase/bin/cbstats 0:11210 vbucket-seqno 877
 vb_877:high_seqno: 230028
 vb_877:purge_seqno: 0
 vb_877:uuid: 85209949351450


[root@centos-64-x64 ~]# /opt/couchbase/bin/cbstats 0:11210 upr | grep mapreduce | grep 877
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_backfilled: 0
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_end_seqno: 230028
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_flags: 6
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_items_ready: false
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_last_sent_seqno: 0
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_memory: 0
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_opaque: 3490382080
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_snap_end_seqno: 0
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_snap_start_seqno: 0
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_start_seqno: 0
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_state: in-memory
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):stream_877_vb_uuid: 0


[root@centos-64-x64 ~]# /opt/couchbase/bin/cbstats 0:11210 upr | grep mapreduce | grep unack
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):unacked_bytes: 44


[root@centos-64-x64 ~]# /opt/couchbase/bin/cbstats 0:11210 upr | grep mapreduce | grep remai
 eq_uprq:mapreduce_view: default _design/ddoc1 (prod/main):items_remaining: 230028


There are items remaining to be sent from ep-engine, but they are not ready yet as per the above stats.
Comment by Sriram Melkote [ 09/Jul/14 ]
Chiyoung, can you please help?
Comment by Ketaki Gangal [ 09/Jul/14 ]
Logs https://s3.amazonaws.com/bugdb/11676/11676.tar

Promoting this to a test blocker, since it blocks running of system tests.
Comment by Mike Wiederhold [ 09/Jul/14 ]
http://review.couchbase.org/39242




[MB-11675] 40-50% performance degradation on append-heavy workload compared to 2.5.1 Created: 09/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Task Priority: Blocker
Reporter: Dave Rigby Assignee: David Liao
Resolution: Unresolved Votes: 0
Labels: performance
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: OS X Mavericks 10.9.3
CB server 3.0.0-918 (http://packages.northscale.com/latestbuilds/3.0.0/couchbase-server-enterprise_x86_64_3.0.0-918-rel.zip)
Haswell MacBook Pro (16GB RAM)

Attachments: PNG File CB 2.5.1 revAB_sim.png     PNG File CB 3.0.0-918 revAB_sim.png     Zip Archive MB-11675.trace.zip     Zip Archive perf_report_result.zip     Zip Archive revAB_sim_v2.zip     Zip Archive revAB_sim.zip    
Issue Links:
Relates to
relates to MB-11623 test for performance regressions with... Open

 Description   
When running an append-heavy workload (modelling a social network address book, see below) the performance of CB has dropped from ~100K op/s down to ~50K op/s compared to 2.5.1-1083 on OS X.

Edit: I see a similar (but slightly smaller - around 40% degradation on Linux (Ubuntu 14.04) - see comment below for details.

== Workload ==

revAB_sim - generates a model social network, then builds a representation of it in Couchbase. Keys are phone numbers; values are lists of the phone books which contain that phone number. (See attachment; a minimal sketch of the workload shape is shown below the run steps.)

Configured for 8 client threads, 100,000 people (documents).

To run:

* pip install networkx
* Check revAB_sim.py for correct host, port, etc
* time ./revAB_sim.py
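
For illustration only (the attached revAB_sim.py is authoritative), a minimal sketch of the workload shape, assuming the Python SDK 1.x API (couchbase.Couchbase.connect) against a local "default" bucket; the key prefix, graph size and networkx generator below are placeholders:

import networkx as nx
from couchbase import Couchbase, FMT_UTF8

cb = Couchbase.connect(host='localhost', bucket='default')

# Model a small social network; each node stands for a person / phone number.
graph = nx.barabasi_albert_graph(n=1000, m=5)

# Seed one document per person, stored as a raw string so it can be appended to.
for person in graph.nodes():
    cb.set('phone::%d' % person, '', format=FMT_UTF8)

# Append-heavy phase: for every edge, each endpoint is added to the other's
# reverse phone book, so almost all operations are appends.
for u, v in graph.edges():
    cb.append('phone::%d' % u, '%d;' % v)
    cb.append('phone::%d' % v, '%d;' % u)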

== Cluster ==

1 node, default bucket set to 1024MB quota.

== Runtimes for workload to complete ==


## CB-2.5.1-1083:

~107K op/s. Timings for workload (3 samples):

real 2m28.536s
real 2m28.820s
real 2m31.586s


## CB-3.0.0-918

~54K op/s. Timings for workload:

real 5m23.728s
real 5m22.129s
real 5m24.947s


 Comments   
Comment by Pavel Paulau [ 09/Jul/14 ]
I'm just curious, what does consume all CPU resources?
Comment by Dave Rigby [ 09/Jul/14 ]
I haven't had a chance to profile it yet; certainly in both instances (fast / slow) the CPU is saturated at 100% between the client workload and the server.
Comment by Pavel Paulau [ 09/Jul/14 ]
Is memcached top consumer? or beam.smp? or client?
Comment by Dave Rigby [ 09/Jul/14 ]
memcached highest (as expected). From the 3.0.0 package (which I still have installed):

PID COMMAND %CPU TIME #TH #WQ #PORT #MREG MEM RPRVT PURG CMPRS VPRVT VSIZE PGRP PPID STATE UID FAULTS COW MSGSENT MSGRECV SYSBSD SYSMACH CSW
34046 memcached 476.9 01:34.84 17/7 0 36 419 278M+ 277M+ 0B 0B 348M 2742M 34046 33801 running 501 73397+ 160 67 26 13304643+ 879+ 4070244+
34326 Python 93.4 00:18.57 9/1 0 25 418 293M+ 293M+ 0B 0B 386M 2755M 34326 1366 running 501 77745+ 399 70 28 15441263+ 629 5754198+
0 kernel_task 71.8 00:14.29 95/9 0 2 949 1174M+ 30M 0B 0B 295M 15G 0 0 running 0 42409 0 57335763+ 52435352+ 0 0 278127194+
...
32800 beam.smp 8.5 00:05.61 30/4 0 49 330 155M- 152M- 0B 0B 345M- 2748M- 32800 32793 running 501 255057+ 468 149 30 6824071+ 1862753+ 1623911+


Python is the workload generator.

I shall try to collect an Instruments profile of 3.0 and 2.5.1 to compare...
Comment by Dave Rigby [ 09/Jul/14 ]
Instruments profile of two runs:

Run 1: 3.0.0 (slow)
Run 2: 2.5.1 (fast)

I can look into the differences tomorrow if no-one else gets there first.


Comment by Dave Rigby [ 10/Jul/14 ]
Running on Linux (Ubuntu 14.04) on a 24-core Xeon, I see a similar effect, but the magnitude is not as bad - a 40% performance drop.

100,000 documents with 4 worker threads, same bucket size (1024MB). (Note: the worker thread count was dropped to 4 as I couldn't get the Python SDK to reliably connect with 8 threads at the same time).

## CB-3.0.0 (source build):

    83k op/s
    real 3m26.785s

## CB-2.5.1 (source build):

    133K op/s
    real 2m4.276s


Edit: Attached updated zip file as: revAB_sim_v2.zip
Comment by Dave Rigby [ 10/Jul/14 ]
Attaching the output of `perf report` for both 2.5.1 and 3.0.0 - perf_report_result.zip

There's nothing obvious jumping out at me; it looks like quite a bit has changed between the two in ep_engine.
Comment by Dave Rigby [ 11/Jul/14 ]
I'm tempted to bump this to "blocker" considering it also affects Linux - any thoughts?
Comment by Pavel Paulau [ 11/Jul/14 ]
It's a product/release blocker, no doubt.

(though raising priority at this point will not move ticket to the top of the backlog due to other issues)
Comment by Dave Rigby [ 11/Jul/14 ]
@Pavel done :)
Comment by Abhinav Dangeti [ 11/Jul/14 ]
I think I should bring to people's notice that, in 3.0, JSON detection has been moved to before items are set in memory. This could very well be the cause of this regression (previously we did the same JSON check, but only just before persistence).
This was part of the datatype-related change, now required by UPR.
A HELLO protocol command was newly introduced in 3.0, which clients can invoke, thereby letting the server know that they will set the datatype themselves, in which case this JSON check wouldn't take place.
If a client doesn't invoke the HELLO command, then we do JSON detection to set the datatype correctly.

However, HELLO was recently disabled because we weren't ready to handle compressed documents in the view engine. This means we now do a mandatory JSON check for every store operation, before the document is even set in memory.
Comment by Cihan Biyikoglu [ 11/Jul/14 ]
Thanks Abhinav. Can we quickly try whether this resolves the issue and, if it is proven, revert this change?




[MB-11674] REST Api ignores unsupported request to update replicaIndex field instead of replying with error Created: 08/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: RESTful-APIs
Affects Version/s: 2.5.1
Fix Version/s: bug-backlog
Security Level: Public

Type: Bug Priority: Major
Reporter: Jeff Dillon Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Dependency
Duplicate
Triage: Untriaged
Is this a Regression?: No

 Description   
Attempting to set replicaIndex via the REST API has no effect, even though it should work according to the documentation:

http://docs.couchbase.com/couchbase-manual-2.5/cb-rest-api/#modifying-bucket-parameters

To reproduce:

* Make the following curl call:

curl -v -X POST -u Administrator:password -d replicaIndex=1 http://host:8091/pools/default/buckets/beer-sample

200/OK

* Note the setting does not take effect by making the following call:

curl http://host:8091/pools/default/buckets/beer-sample

{"name":"beer-sample","bucketType":"membase","authType":"sasl","saslPassword":"","proxyPort":0,"replicaIndex":false,"uri":"/pools/default/buckets/beer-sample...

ASK: Determine why replicaIndex cannot be set

 Comments   
Comment by Jeff Dillon [ 09/Jul/14 ]
Also, might there be a workaround? The "index replicas" checkbox appears to be disabled in the Admin UI.
Comment by Aleksey Kondratenko [ 09/Jul/14 ]
We don't allow changing index replicas at runtime because we currently cannot track and apply this change at runtime for existing indexes.

Some /diag/eval magic + re-creation of indexes is likely possible.

But please note that currently the replica indexes feature is somewhat of a joke, specifically because we don't control its up-to-dateness during rebalance in the same way as we do for main indexes, so it can very easily be arbitrarily stale and unhelpful.




[MB-11673] Add method of monitoring connection health between XDCR source and destination clusters Created: 08/Jul/14  Updated: 08/Jul/14

Status: Open
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: 2.0, 2.5.1
Fix Version/s: feature-backlog
Security Level: Public

Type: Improvement Priority: Major
Reporter: Mark Woosey Assignee: Cihan Biyikoglu
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Gantt: start-finish

 Description   
There does not appear to be an elegant method by which an application server can check whether an XDCR destination cluster is down.
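
As a possible interim workaround (not a substitute for the requested feature), the source cluster's /pools/default/tasks REST endpoint lists the running XDCR replications together with a status and any recent errors, and an application server could poll it. A rough Python 2 sketch; the host name and credentials are placeholders:

import base64
import json
import urllib2

HOST = 'http://source-node:8091'
AUTH = base64.b64encode('Administrator:password')

def xdcr_task_status():
    # /pools/default/tasks returns the currently running tasks; XDCR replications
    # show up with type "xdcr" plus a status and a list of recent errors.
    req = urllib2.Request(HOST + '/pools/default/tasks')
    req.add_header('Authorization', 'Basic ' + AUTH)
    tasks = json.load(urllib2.urlopen(req))
    return [(t.get('id'), t.get('status'), t.get('errors', []))
            for t in tasks if t.get('type') == 'xdcr']

for task_id, status, errors in xdcr_task_status():
    print('%s: %s %s' % (task_id, status, errors))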




[MB-11672] Missing items in index after rebalance (Intermittent failure) Created: 08/Jul/14  Updated: 11/Jul/14

Status: Open
Project: Couchbase Server
Component/s: view-engine
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Sarath Lakshman Assignee: Meenakshi Goel
Resolution: Unresolved Votes: 0
Labels: releasenote
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
duplicates MB-11641 {UPR}:: Reading from views timing out... Closed
Relates to
relates to MB-11371 Corruption in PartitionVersions struc... Resolved
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Test:
NODES=4 TEST=rebalance.rebalanceinout.RebalanceInOutTests.measure_time_index_during_rebalance,items=200000,data_perc_add=30,nodes_init=3,nodes_in=1,skip_cleanup=True,nodes_out=1,num_ddocs=2,num_views=2,max_verify=50000,value_size=1024,GROUP=IN_OUT make any-test

Roughly one run in three or four, it is found that the view query results contain fewer items than the expected number.

Logs:
https://s3.amazonaws.com/bugdb/jira/MB-11371/f9ad56ee/172.23.107.24-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11371/07e24114/172.23.107.25-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11371/a9c9a36d/172.23.107.26-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11371/2517f70b/172.23.107.27-diag.zip

 Comments   
Comment by Parag Agarwal [ 08/Jul/14 ]
What is the output of the test run?
Comment by Parag Agarwal [ 08/Jul/14 ]
Sarath: Did you hit this issue while verifying https://www.couchbase.com/issues/browse/MB-11641 ?
Comment by Parag Agarwal [ 08/Jul/14 ]
Saw this in 935
Comment by Sriram Melkote [ 10/Jul/14 ]
Sarath mentioned that on today's codebase we're not hitting it - it's not clear if it's just reduced in frequency or was fixed by recent changes. Will update again.
Comment by Sarath Lakshman [ 10/Jul/14 ]
I tried many times to reproduce this on the latest code. Meenakshi, can you see if it can be reproduced with the latest build?
Comment by Sarath Lakshman [ 11/Jul/14 ]
Currently the test removes one node and adds one node. Meenakshi, is it possible to reproduce this problem with only rebalance_in or only rebalance_out?
It looks like this issue is not related to swap rebalance, since MB-11641 mentions a test with rebalance_in != rebalance_out where the same issue happened.
Comment by Meenakshi Goel [ 11/Jul/14 ]
Trying to reproduce the problem with rebalance_in but not hitting the issue.
Will try a few more times, and with only rebalance_out too, and will update accordingly.




[MB-11671] couchbase.log needs alternatives to net-tools - ifconfig netstat output Created: 08/Jul/14  Updated: 11/Jul/14  Resolved: 11/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 2.5.1
Fix Version/s: 3.0
Security Level: Public

Type: Improvement Priority: Major
Reporter: Ian McCloy Assignee: Aleksey Kondratenko
Resolution: Fixed Votes: 0
Labels: supportability
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
Major Linux distros (Fedora 18, RHEL 7 and CentOS 7) stopped installing the legacy net-tools (ifconfig, netstat, route, hostname) by default.

ifconfig shows the rx/tx counters of each network device; we don't capture this elsewhere, so it would be helpful to capture 'cat /proc/net/dev' and 'ip -s link' output.

netstat shows all open network connections; as it is no longer shipped by default, an alternative we could capture is the output of the 'ss' command.
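
For illustration, a rough Python sketch (not the change that was later pushed to Gerrit) of how the per-interface rx/tx byte counters can be read from /proc/net/dev when ifconfig is not installed:

def read_net_dev(path='/proc/net/dev'):
    counters = {}
    with open(path) as f:
        for line in f.readlines()[2:]:       # the first two lines are headers
            iface, data = line.split(':', 1)
            fields = data.split()
            counters[iface.strip()] = {
                'rx_bytes': int(fields[0]),  # receive bytes
                'tx_bytes': int(fields[8]),  # transmit bytes
            }
    return counters

print(read_net_dev())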

 Comments   
Comment by Aleksey Kondratenko [ 08/Jul/14 ]
Thanks for raising it. I was not aware. BTW best way to address stuff like that is to contribute a patch.
Comment by Ian McCloy [ 09/Jul/14 ]
Fair point, I'll push a patch to gerrit.
Comment by Ian McCloy [ 10/Jul/14 ]
Code added to http://review.couchbase.org/#/c/39267/ Thanks !




[MB-11670] Rebuild whole project when header file changes Created: 08/Jul/14  Updated: 09/Jul/14

Status: Open
Project: Couchbase Server
Component/s: view-engine
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Minor
Reporter: Volker Mische Assignee: Chris Hillery
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
When you change a header file in the view-engine (couchdb project), the whole project should be rebuilt.

Currently, if you change a header file and don't clean up the project, you could end up with run-time errors like a badmatch on the #writer_acc record.

PS: I opened this as an MB bug and not as a CBD because this is valuable information about badmatch errors that should be public.

 Comments   
Comment by Chris Hillery [ 09/Jul/14 ]
This really has nothing to do with the build team, and as such it's perfectly appropriate for it to be MB.

I'm assigning it back to Volker for some more information. Can you give me a specific set of actions that demonstrates the rebuild not happening? Is it to do with Erlang code, or C++?
Comment by Volker Mische [ 09/Jul/14 ]
Build Couchbase with a make.

Now edit a couchdb Erlang header file. For example edit couchdb/src/couch_set_view/include/couch_set_view.hrl and comment this block out (with leading `%`):

-record(set_view_params, {
    max_partitions = 0 :: non_neg_integer(),
    active_partitions = [] :: [partition_id()],
    passive_partitions = [] :: [partition_id()],
    use_replica_index = false :: boolean()
}).

When you do a "make" again, ns_server will complain about something missing, but couchdb won't as it doesn't rebuild at all.

Chris, I hope this information is good enough, if you need more, let me know.




[MB-11668] Document the limits on document size for indexing Created: 08/Jul/14  Updated: 10/Jul/14

Status: Open
Project: Couchbase Server
Component/s: documentation, view-engine
Affects Version/s: 2.5.0, 2.5.1, 3.0-Beta
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Dave Rigby Assignee: Ruth Harris
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
As introduced by MB-9467, there are two limits on the size of documents during indexing:

1) indexer_max_doc_size - documents larger than this value are skipped by the
indexer. A message is logged (with document ID, its size, bucket name, view name, etc)
when such a document is encountered. A value of 0 means no limit (as it used to be
before). The current default value is 1048576 bytes (1 MB). This is already a very large
value; such large documents take a long time to process, slowing down rebalance, etc.

2) max_kv_size_per_doc - the maximum total size (in bytes) of the KV pairs that can be emitted
for a single document for a single view. When this limit is exceeded, a message is logged (with
document ID, its size, bucket name, view name, etc). A value of 0 means no limit (as it
used to be before). The current default value is 1048576 bytes (1 MB), which is already
too large a value and makes everything far from efficient.

There is no mention of these anywhere in the documentation at present, and so they can be confusing to users who find that certain (large) documents are inexplicably not indexed.

I note there is an outstanding 3.0 bug (MB-9713) to add REST endpoints for these - currently you have to use a magic diag/eval to change them - but we should at least mention their existence and default values even if the REST API isn't ready yet.

- -
We probably should also document this:
3) function_timeout - the maximum time mapreduce functions can take for any one document.
If a function takes longer than this, the invocation is aborted. The default limit is 10 seconds.
Setting it to 0 will disable it (not recommended).
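
For illustration only (this is not an official API), an application can approximate the first limit client-side to spot documents that the indexer will silently skip; a minimal Python sketch using the default value quoted above:

import json

INDEXER_MAX_DOC_SIZE = 1048576  # default indexer_max_doc_size, in bytes

def likely_to_be_indexed(doc):
    # Approximation: compare the JSON-encoded size of the document against
    # indexer_max_doc_size; larger documents are skipped by the indexer and only logged.
    return len(json.dumps(doc).encode('utf-8')) <= INDEXER_MAX_DOC_SIZE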

 Comments   
Comment by Dave Rigby [ 08/Jul/14 ]
See for example http://stackoverflow.com/questions/24609638/couchbase-view-index-update-hanging-after-adding-file-over-1-4mb where someone is hitting exactly this.




[MB-11667] bignum string value is not correct in document page of couchbase console Created: 08/Jul/14  Updated: 08/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 2.5.0
Fix Version/s: bug-backlog
Security Level: Public

Type: Bug Priority: Major
Reporter: linbo Assignee: Aleksey Kondratenko
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Operating System: Ubuntu 64-bit
Is this a Regression?: Unknown

 Description   
var couchbase = require('couchbase');

// Connect to the "test" bucket on the local node.
var cb_conn = new couchbase.Connection({
    host: "localhost",
    bucket: "test",
    password: "123456"
});

// Store the big number as a string value...
cb_conn.set("test", "2122488373469796992", function(err, result){
    console.log(result);
});

// ...and read it back.
cb_conn.get("test", function(err, result){
    if(err){
        console.log(err);
        console.log(err.code, err.code == couchbase.errors.keyNotFound);
    }else{
        console.log(result.value);
    }
});


$ node test_cb.js
{ cas: { '0': 2148270080, '1': 3811324087 } }
2122488373469796992



set value as "2122488373469796992", and get it by nodejs client, everything works fine.

Open console console: in data buckets -> test bucket -> Documents, search item "test", UI show value is "2122488373469796900"

"2122488373469796992" != "2122488373469796900"



 Comments   
Comment by Aleksey Kondratenko [ 08/Jul/14 ]
We're aware of this behavior. I think we might already have a duplicate of this ticket.

This happens because our UI does a text -> json -> text conversion, which causes issues with long numbers due to the limited precision of JS numbers.
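
A quick Python illustration of that limit, using the value from this report; an IEEE-754 double (the only numeric type JavaScript has) can only represent integers exactly up to 2**53:

original = 2122488373469796992

print(original > 2**53)       # True: beyond the range where doubles are exact
as_double = float(original)   # the nearest representable double, i.e. a JS Number
print(int(as_double))         # 2122488373469796864, not the value that was stored

# JavaScript renders that double using the shortest decimal string that round-trips,
# which is 2122488373469796900 -- exactly what the Documents page shows.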




[MB-11666] [system test] UI is not up after node rejected by failover Created: 07/Jul/14  Updated: 08/Jul/14  Resolved: 08/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Thuan Nguyen Assignee: Aliaksey Artamonau
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: centos 6.4 64-bit

Attachments: Zip Archive 172.23.107.195-772014-188-diag.zip     Zip Archive 172.23.107.196-772014-189-diag.zip     Zip Archive 172.23.107.197-772014-1812-diag.zip     Zip Archive 172.23.107.199-772014-1815-diag.zip     Zip Archive 172.23.107.200-772014-1818-diag.zip     Zip Archive 172.23.107.201-772014-1824-diag.zip     Zip Archive 172.23.107.202-772014-1821-diag.zip     Zip Archive 172.23.107.203-772014-1825-diag.zip    
Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: Link to manifest file of this build http://builds.hq.northscale.net/latestbuilds/couchbase-server-enterprise_x86_64_3.0.0-928-rel.rpm.manifest.xml
Is this a Regression?: Yes

 Description   
Install Couchbase Server 3.0.0-928 on 8 CentOS 6.4 64-bit nodes.
Create a 7-node cluster:
172.23.107.195
172.23.107.196
172.23.107.197
172.23.107.199
172.23.107.200
172.23.107.202
172.23.107.201
Create 2 buckets, default and sasl-2, with a bucket quota of 2.5 GB each.
Load 25M items into each bucket to bring the active resident ratio below 80%.
Run the access phase for 4 hours with updates, deletes, expirations and sets.

Rebalance in node 203 to the cluster. Passed.
Rebalance out node 200. Passed.
Swap rebalance: add node 200 back and remove node 195. Passed.
Failover node 201 and rebalance. Passed.
The UI of node 201 was not up after it was rejected from the cluster.
Logging into node 201, I saw one Erlang process showing
ns_babysitter_bootstrap:remote_stop


[root@carambola-s20523 ~]# ps aux | grep beam.smp
498 6145 0.0 0.2 1311892 22956 ? Sl 12:28 0:06 /opt/couchbase/lib/erlang/erts-5.10.4/bin/beam.smp -A 16 -- -root /opt/couchbase/lib/erlang -progname erl -- -home /opt/couchbase -- -smp enable -kernel inet_dist_listen_min 21100 inet_dist_listen_max 21299 error_logger false -sasl sasl_error_logger false -hidden -name babysitter_of_ns_1@127.0.0.1 -setcookie nocookie -noshell -noinput -noshell -noinput -run ns_babysitter_bootstrap -- -couch_ini /opt/couchbase/etc/couchdb/default.ini /opt/couchbase/etc/couchdb/default.d/capi.ini /opt/couchbase/etc/couchdb/default.d/geocouch.ini /opt/couchbase/etc/couchdb/local.ini -ns_babysitter cookiefile "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie" -ns_server config_path "/opt/couchbase/etc/couchbase/static_config" -ns_server pidfile "/opt/couchbase/var/lib/couchbase/couchbase-server.pid" -ns_server cookiefile "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server" -ns_server enable_mlockall false
498 6224 28.1 0.9 1555760 94692 ? Ssl 12:28 94:39 /opt/couchbase/lib/erlang/erts-5.10.4/bin/beam.smp -A 16 -sbt u -P 327680 -K true -swt low -MMmcs 30 -e102400 -- -root /opt/couchbase/lib/erlang -progname erl -- -home /opt/couchbase -- -smp enable -setcookie nocookie -kernel inet_dist_listen_min 21100 inet_dist_listen_max 21299 error_logger false -sasl sasl_error_logger false -nouser -run child_erlang child_start ns_bootstrap -- -smp enable -couch_ini /opt/couchbase/etc/couchdb/default.ini /opt/couchbase/etc/couchdb/default.d/capi.ini /opt/couchbase/etc/couchdb/default.d/geocouch.ini /opt/couchbase/etc/couchdb/local.ini
root 19383 0.0 0.1 581216 15144 ? Sl 17:37 0:00 /opt/couchbase/lib/erlang/erts-5.10.4/bin/beam.smp -- -root /opt/couchbase/lib/erlang -progname erl -- -home /root -- -name executioner@executioner -noshell -hidden -setcookie hsdranhtkjomoymo -eval ns_babysitter_bootstrap:remote_stop('babysitter_of_ns_1@127.0.0.1')
root 20485 0.0 0.0 103236 868 pts/0 S+ 18:04 0:00 grep beam.smp


 Comments   
Comment by Aliaksey Artamonau [ 08/Jul/14 ]
Looks like a duplicate of MB-11608.




[MB-11665] {UPR}: During a 2 node rebalance-in scenario :: Java SDK (1.4.2) usage sees a drop in OPS (get/set) by 50% and high error rate Created: 07/Jul/14  Updated: 08/Jul/14

Status: Open
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Parag Agarwal Assignee: Parag Agarwal
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 172.23.107.174-177

Triage: Untriaged
Operating System: Centos 64-bit
Flagged:
Release Note
Is this a Regression?: Yes

 Description   
We have compared runs of 2.5.1-0194 vs 3.0.0-918; the Java SDK used was 1.4.2.

Common Scenario

1. Create a 2 node cluster
2. Create 1 default bucket
3. Add 15K items while doing gets and sets
4. Add 2 nodes and then rebalance
5. Run gets and sets again in parallel with the rebalance

Issue observed during step 5: ops drop by 50% and the error rate is high most of the time, compared to 2.5.1

The comparative report is shared here

General Comparison Summary

https://docs.google.com/document/d/1PjQBdJvLFaK85OrrYzxOaZ54fklTXibj6yKVrrU-AOs/edit

3.0.0-918:: http://sdk-testresults.couchbase.com.s3.amazonaws.com/SDK-SDK/CB-3.0.0-918/Rb2In-HYBRID/07-03-14/068545/22bcef05a4f12ef3f9e7f69edcfc6aa4-MC.html

2.5.1-1094: http://sdk-testresults.couchbase.com.s3.amazonaws.com/SDK-SDK/CB-2.5.1-1094/Rb2In-HYBRID/06-24-14/083822/2f416c3207cf6c435ae631ae37da4861-MC.html
Attaching logs

We are trying to run more tests with different versions of the SDK, such as 1.4.3 and 2.0.

https://s3.amazonaws.com/bugdb/jira/MB-11665/logs_3_0_0_918_SDK_142.tar.gz


 Comments   
Comment by Parag Agarwal [ 07/Jul/14 ]
Pavel: Please add your comments for such a scenario with libcouchbase
Comment by Pavel Paulau [ 07/Jul/14 ]
Not exactly the same scenario but I'm not seeing major drops/errors in my tests (using lcb based workload generator).
Comment by Parag Agarwal [ 08/Jul/14 ]
So Deepti posted results and we are not seeing issues with 1.4.3 for the same run. What is the difference between SDK 1.4.2 vs 1.4.3?
Comment by Aleksey Kondratenko [ 08/Jul/14 ]
Given that the problem seems to be SDK-version specific and there's no evidence yet that it's something ns_server may cause, I'm bouncing this ticket back.
Comment by Matt Ingenthron [ 08/Jul/14 ]
Check the release notes for 1.4.3. We had an issue where there would be authentication problems, including timeouts and problems with retries. This was introduced in changes in 1.4.0 and fixed in 1.4.3. There's no direct evidence, but that sounds like a likely cause.
Comment by Matt Ingenthron [ 08/Jul/14 ]
Parag: not sure why you assigned this to me. I don't think there's any action for me. Reassigning back. I was just giving you additional information.
Comment by Wei-Li Liu [ 08/Jul/14 ]
Re-ran the test with the 1.4.2 SDK against the 3.0.0 server with just 4 GB RAM per node (compared to my initial test with 16 GB RAM per node).
The test result is much better: not seeing the errors, and the operations rate never drops significantly.
http://sdk-testresults.couchbase.com.s3.amazonaws.com/SDK-SDK/CB-3.0.0-918/Rb2In-HYBRID/07-08-14/074980/d5e2508529f1ad565ee38c9b8ab0c75b-MC.html
 
Comment by Parag Agarwal [ 08/Jul/14 ]
Sorry, Matt! Should we close this and document it in the release notes?
Comment by Matt Ingenthron [ 08/Jul/14 ]
Given that we believe it's an issue in a different project (JCBC), fixed and release noted there, I think we can just close this. The only other possible action, up to you and your management, is trying to verify this is the actual cause a bit more thoroughly.




[MB-11664] Read only user has rebalance options after node failed over. Created: 07/Jul/14  Updated: 07/Jul/14

Status: Open
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0-Beta
Security Level: Public

Type: Bug Priority: Critical
Reporter: Patrick Varley Assignee: Pavel Blagodov
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Centos6u5
4 node 1 node failed over.

Attachments: PNG File Screen Shot 2014-07-07 at 23.48.13.png     PNG File Screen Shot 2014-07-08 at 00.01.44.png    
Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
See the screenshots

Recreate:

Failover a node
Log in as the readonly user
Go to the server node tab

You will see the green bar asking if you want delta recovery or full recovery.
I believe the read only user should not see that green bar.

When you click on it, you do get "Request failed. Check logs."




[MB-11663] 2.5.0 & 3.0.0 are incompatible for rebalance: Server error during processing: ["web request failed", {path,"/pools/default/tasks"}, {type,exit}, {what, {{badmatch,false}, Created: 07/Jul/14  Updated: 10/Jul/14  Resolved: 08/Jul/14

Status: Closed
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Andrei Baranouski Assignee: Aleksey Kondratenko
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
steps:

1) a 2.5.0-1059 cluster with 3 nodes: 10.3.4.144, 10.3.4.145, 10.3.4.146; 1 bucket in the cluster

2) rebalance in 3 nodes running 3.0.0-918: 10.3.4.147, 10.3.4.148, 10.3.4.149

I see an error message that the rebalance failed, but the cluster is balanced!

Event Module Code Server Node Time
Bucket "default" rebalance does not seem to be swap rebalance (repeated 1 times) ns_vbucket_mover000 ns_1@10.3.4.147 15:53:59 - Mon Jul 7, 2014
Started rebalancing bucket default (repeated 1 times) ns_rebalancer000 ns_1@10.3.4.147 15:53:59 - Mon Jul 7, 2014
Starting rebalance, KeepNodes = ['ns_1@10.3.4.144','ns_1@10.3.4.145',
'ns_1@10.3.4.146','ns_1@10.3.4.147'], EjectNodes = ['ns_1@10.3.4.149',
'ns_1@10.3.4.148'], Failed over and being ejected nodes = []; no delta recovery nodes
(repeated 1 times) ns_orchestrator004 ns_1@10.3.4.147 15:53:59 - Mon Jul 7, 2014
Rebalance exited with reason {badmatch,false}
(repeated 1 times) ns_orchestrator002 ns_1@10.3.4.147 15:53:59 - Mon Jul 7, 2014
<0.8365.4> exited with {badmatch,false} ns_vbucket_mover000 ns_1@10.3.4.147 15:53:35 - Mon Jul 7, 2014
Server error during processing: ["web request failed",
{path,"/pools/default/tasks"},
{type,exit},
{what,
{{badmatch,false},
{gen_server,call,
[{global,ns_rebalance_observer},
get_detailed_progress,10000]}}},
{trace,
[{gen_server,call,3},
{ns_rebalance_observer,
get_detailed_progress,0},
{ns_doctor,get_detailed_progress,0},
{ns_doctor,do_build_tasks_list,4},
{menelaus_web,handle_tasks,2},
{request_throttler,do_request,3},
{menelaus_web,loop,3},
{mochiweb_http,headers,5}]}] menelaus_web019 ns_1@10.3.4.144 15:53:30 - Mon Jul 7, 2014
Rebalance exited with reason {badmatch,false}
ns_orchestrator002 ns_1@10.3.4.147 15:53:30 - Mon Jul 7, 2014
<0.8189.4> exited with {badmatch,false} ns_vbucket_mover000 ns_1@10.3.4.147 15:53:30 - Mon Jul 7, 2014
Bucket "default" rebalance does not seem to be swap rebalance ns_vbucket_mover000 ns_1@10.3.4.147 15:53:30 - Mon Jul 7, 2014
Started rebalancing bucket default ns_rebalancer000 ns_1@10.3.4.147 15:53:29 - Mon Jul 7, 2014
Starting rebalance, KeepNodes = ['ns_1@10.3.4.144','ns_1@10.3.4.145',
'ns_1@10.3.4.146','ns_1@10.3.4.147'], EjectNodes = ['ns_1@10.3.4.149',
'ns_1@10.3.4.148'], Failed over and being ejected nodes = []; no delta recovery nodes
ns_orchestrator004 ns_1@10.3.4.147 15:53:29 - Mon Jul 7, 2014
Current master is older and I'll try to takeover (repeated 1 times) mb_master000 ns_1@10.3.4.147 15:51:59 - Mon Jul 7, 2014
Server error during processing: ["web request failed",
{path,"/pools/default/tasks"},
{type,exit},
{what,
{{badarg,
[{erlang,'++',
[{'EXIT',
{{function_clause,
[{janitor_agent,handle_call,
[{get_mass_upr_docs_estimate,[]},
{<13487.6956.4>,
#Ref<13487.0.6.116181>},
{state,"default",<13487.6944.4>,
#Ref<16766.0.16.54566>,[],
[missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
missing,missing,missing,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,replica,
replica,replica,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active,active,
active,active,active],
[undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,passive,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined],
0}]},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]},
{gen_server,call,
[{'janitor_agent-default',
'ns_1@10.3.4.146'},
{get_mass_upr_docs_estimate,[]},
infinity]}}},
[{{'ns_1@10.3.4.147',35},{0,0,tap}},
{{'ns_1@10.3.4.147',36},{0,0,tap}},
{{'ns_1@10.3.4.147',37},{0,0,tap}},
{{'ns_1@10.3.4.147',38},{0,0,tap}},
{{'ns_1@10.3.4.147',39},{0,0,tap}},
{{'ns_1@10.3.4.147',40},{0,0,tap}},
{{'ns_1@10.3.4.147',41},{0,0,tap}},
{{'ns_1@10.3.4.147',42},{0,0,tap}},
{{'ns_1@10.3.4.147',43},{0,0,tap}},
{{'ns_1@10.3.4.147',44},{0,0,tap}},
{{'ns_1@10.3.4.147',45},{0,0,tap}},
{{'ns_1@10.3.4.147',46},{0,0,tap}},
{{'ns_1@10.3.4.147',47},{0,0,tap}},
{{'ns_1@10.3.4.147',48},{0,0,tap}},
{{'ns_1@10.3.4.147',49},{0,0,tap}},
{{'ns_1@10.3.4.147',50},{0,0,tap}},
{{'ns_1@10.3.4.147',51},{0,0,tap}},
{{'ns_1@10.3.4.147',52},{0,0,tap}},
{{'ns_1@10.3.4.147',53},{0,0,tap}},
{{'ns_1@10.3.4.147',54},{0,0,tap}},
{{'ns_1@10.3.4.147',55},{0,0,tap}},
{{'ns_1@10.3.4.147',56},{0,0,tap}},
{{'ns_1@10.3.4.147',57},{0,0,tap}},
{{'ns_1@10.3.4.147',58},{0,0,tap}},
{{'ns_1@10.3.4.147',59},{0,0,tap}},
{{'ns_1@10.3.4.147',60},{0,0,tap}},
{{'ns_1@10.3.4.147',61},{0,0,tap}},
{{'ns_1@10.3.4.147',62},{0,0,tap}},
{{'ns_1@10.3.4.147',63},{0,0,tap}},
{{'ns_1@10.3.4.147',64},{0,0,tap}},
{{'ns_1@10.3.4.147',65},{0,0,tap}},
{{'ns_1@10.3.4.147',66},{0,0,tap}},
{{'ns_1@10.3.4.147',67},{0,0,tap}},
{{'ns_1@10.3.4.147',68},{0,0,tap}},
{{'ns_1@10.3.4.147',205},{0,0,tap}},
{{'ns_1@10.3.4.147',206},{0,0,tap}},
{{'ns_1@10.3.4.147',207},{0,0,tap}},
{{'ns_1@10.3.4.147',208},{0,0,tap}},
{{'ns_1@10.3.4.147',209},{0,0,tap}},
{{'ns_1@10.3.4.147',210},{0,0,tap}},
{{'ns_1@10.3.4.147',211},{0,0,tap}},
{{'ns_1@10.3.4.147',212},{0,0,tap}},
{{'ns_1@10.3.4.147',213},{0,0,tap}},
{{'ns_1@10.3.4.147',214},{0,0,tap}},
{{'ns_1@10.3.4.147',215},{0,0,tap}},
{{'ns_1@10.3.4.147',216},{0,0,tap}},
{{'ns_1@10.3.4.147',217},{0,0,tap}},
{{'ns_1@10.3.4.147',218},{0,0,tap}},
{{'ns_1@10.3.4.147',219},{0,0,tap}},
{{'ns_1@10.3.4.147',220},{0,0,tap}},
{{'ns_1@10.3.4.147',221},{0,0,tap}},
{{'ns_1@10.3.4.147',222},{0,0,tap}},
{{'ns_1@10.3.4.147',223},{0,0,tap}},
{{'ns_1@10.3.4.147',224},{0,0,tap}},
{{'ns_1@10.3.4.147',225},{0,0,tap}},
{{'ns_1@10.3.4.147',226},{0,0,tap}},
{{'ns_1@10.3.4.147',227},{0,0,tap}},
{{'ns_1@10.3.4.147',228},{0,0,tap}},
{{'ns_1@10.3.4.147',229},{0,0,tap}},
{{'ns_1@10.3.4.147',230},{0,0,tap}},
{{'ns_1@10.3.4.147',231},{0,0,tap}},
{{'ns_1@10.3.4.147',232},{0,0,tap}},
{{'ns_1@10.3.4.147',233},{0,0,tap}},
{{'ns_1@10.3.4.147',234},{0,0,tap}},
{{'ns_1@10.3.4.147',235},{0,0,tap}},
{{'ns_1@10.3.4.147',236},{0,0,tap}},
{{'ns_1@10.3.4.147',237},{0,0,tap}},
{{'ns_1@10.3.4.147',238},{0,0,tap}},
{{'ns_1@10.3.4.147',239},{0,0,tap}},
{{'ns_1@10.3.4.147',377},{0,0,tap}},
{{'ns_1@10.3.4.147',378},{0,0,tap}},
{{'ns_1@10.3.4.147',379},{0,0,tap}},
{{'ns_1@10.3.4.147',380},{0,0,tap}},
{{'ns_1@10.3.4.147',381},{0,0,tap}},
{{'ns_1@10.3.4.147',382},{0,0,tap}},
{{'ns_1@10.3.4.147',383},{0,0,tap}},
{{'ns_1@10.3.4.147',384},{0,0,tap}},
{{'ns_1@10.3.4.147',385},{0,0,tap}},
{{'ns_1@10.3.4.147',386},{0,0,tap}},
{{'ns_1@10.3.4.147',387},{0,0,tap}},
{{'ns_1@10.3.4.147',388},{0,0,tap}},
{{'ns_1@10.3.4.147',389},{0,0,tap}},
{{'ns_1@10.3.4.147',390},{0,0,tap}},
{{'ns_1@10.3.4.147',391},{0,0,tap}},
{{'ns_1@10.3.4.147',392},{0,0,tap}},
{{'ns_1@10.3.4.147',393},{0,0,tap}},
{{'ns_1@10.3.4.147',394},{0,0,tap}},
{{'ns_1@10.3.4.147',395},{0,0,tap}},
{{'ns_1@10.3.4.147',396},{0,0,tap}},
{{'ns_1@10.3.4.147',397},{0,0,tap}},
{{'ns_1@10.3.4.147',398},{0,0,tap}},
{{'ns_1@10.3.4.147',399},{0,0,tap}},
{{'ns_1@10.3.4.147',400},{0,0,tap}},
{{'ns_1@10.3.4.147',401},{0,0,tap}},
{{'ns_1@10.3.4.147',402},{0,0,tap}},
{{'ns_1@10.3.4.147',403},{0,0,tap}},
{{'ns_1@10.3.4.147',404},{0,0,tap}},
{{'ns_1@10.3.4.147',405},{0,0,tap}},
{{'ns_1@10.3.4.147',406},{0,0,tap}},
{{'ns_1@10.3.4.147',407},{0,0,tap}},
{{'ns_1@10.3.4.147',408},{0,0,tap}},
{{'ns_1@10.3.4.147',409},{0,0,tap}},
{{'ns_1@10.3.4.147',410},{0,0,tap}},
{{'ns_1@10.3.4.147',479},{0,0,tap}},
{{'ns_1@10.3.4.147',480},{0,0,tap}},
{{'ns_1@10.3.4.147',481},{0,0,tap}},
{{'ns_1@10.3.4.147',482},{0,0,tap}},
{{'ns_1@10.3.4.147',483},{0,0,tap}},
{{'ns_1@10.3.4.147',484},{0,0,tap}},
{{'ns_1@10.3.4.147',485},{0,0,tap}},
{{'ns_1@10.3.4.147',486},{0,0,tap}},
{{'ns_1@10.3.4.147',487},{0,0,tap}},
{{'ns_1@10.3.4.147',488},{0,0,tap}},
{{'ns_1@10.3.4.147',489},{0,0,tap}},
{{'ns_1@10.3.4.147',490},{0,0,tap}},
{{'ns_1@10.3.4.147',491},{0,0,tap}},
{{'ns_1@10.3.4.147',492},{0,0,tap}},
{{'ns_1@10.3.4.147',493},{0,0,tap}},
{{'ns_1@10.3.4.147',494},{0,0,tap}},
{{'ns_1@10.3.4.147',495},{0,0,tap}},
{{'ns_1@10.3.4.147',496},{0,0,tap}},
{{'ns_1@10.3.4.147',497},{0,0,tap}},
{{'ns_1@10.3.4.147',498},{0,0,tap}},
{{'ns_1@10.3.4.147',499},{0,0,tap}},
{{'ns_1@10.3.4.147',500},{0,0,tap}},
{{'ns_1@10.3.4.147',501},{0,0,tap}},
{{'ns_1@10.3.4.147',502},{0,0,tap}},
{{'ns_1@10.3.4.147',503},{0,0,tap}},
{{'ns_1@10.3.4.147',504},{0,0,tap}},
{{'ns_1@10.3.4.147',505},{0,0,tap}},
{{'ns_1@10.3.4.147',506},{0,0,tap}},
{{'ns_1@10.3.4.147',507},{0,0,tap}},
{{'ns_1@10.3.4.147',508},{0,0,tap}},
{{'ns_1@10.3.4.147',509},{0,0,tap}},
{{'ns_1@10.3.4.147',510},{0,0,tap}},
{{'ns_1@10.3.4.147',511},{0,0,tap}},
{{'ns_1@10.3.4.147',512},{0,0,tap}},
{{'ns_1@10.3.4.147',615},{0,0,tap}},
{{'ns_1@10.3.4.147',616},{0,0,tap}},
{{'ns_1@10.3.4.147',617},{0,0,tap}},
{{'ns_1@10.3.4.147',618},{0,0,tap}},
{{'ns_1@10.3.4.147',619},{0,0,tap}},
{{'ns_1@10.3.4.147',620},{0,0,tap}},
{{'ns_1@10.3.4.147',621},{0,0,tap}},
{{'ns_1@10.3.4.147',622},{0,0,tap}},
{{'ns_1@10.3.4.147',623},{0,0,tap}},
{{'ns_1@10.3.4.147',624},{0,0,tap}},
{{'ns_1@10.3.4.147',625},{0,0,tap}},
{{'ns_1@10.3.4.147',626},{0,0,tap}},
{{'ns_1@10.3.4.147',627},{0,0,tap}},
{{'ns_1@10.3.4.147',628},{0,0,tap}},
{{'ns_1@10.3.4.147',629},{0,0,tap}},
{{'ns_1@10.3.4.147',630},{0,0,tap}},
{{'ns_1@10.3.4.147',631},{0,0,tap}},
{{'ns_1@10.3.4.147',632},{0,0,tap}},
{{'ns_1@10.3.4.147',633},{0,0,tap}},
{{'ns_1@10.3.4.147',634},{0,0,tap}},
{{'ns_1@10.3.4.147',635},{0,0,tap}},
{{'ns_1@10.3.4.147',636},{0,0,tap}},
{{'ns_1@10.3.4.147',637},{0,0,tap}},
{{'ns_1@10.3.4.147',638},{0,0,tap}},
{{'ns_1@10.3.4.147',639},{0,0,tap}},
{{'ns_1@10.3.4.147',640},{0,0,tap}},
{{'ns_1@10.3.4.147',641},{0,0,tap}},
{{'ns_1@10.3.4.147',642},{0,0,tap}},
{{'ns_1@10.3.4.147',643},{0,0,tap}},
{{'ns_1@10.3.4.147',644},{0,0,tap}},
{{'ns_1@10.3.4.147',645},{0,0,tap}},
{{'ns_1@10.3.4.147',646},{0,0,tap}},
{{'ns_1@10.3.4.147',647},{0,0,tap}},
{{'ns_1@10.3.4.147',648},{0,0,tap}},
{{'ns_1@10.3.4.147',717},{0,0,tap}},
{{'ns_1@10.3.4.147',718},{0,0,tap}},
{{'ns_1@10.3.4.147',719},{0,0,tap}},
{{'ns_1@10.3.4.147',720},{0,0,tap}},
{{'ns_1@10.3.4.147',721},{0,0,tap}},
{{'ns_1@10.3.4.147',722},{0,0,tap}},
{{'ns_1@10.3.4.147',723},{0,0,tap}},
{{'ns_1@10.3.4.147',724},{0,0,tap}},
{{'ns_1@10.3.4.147',725},{0,0,tap}},
{{'ns_1@10.3.4.147',726},{0,0,tap}},
{{'ns_1@10.3.4.147',727},{0,0,tap}},
{{'ns_1@10.3.4.147',728},{0,0,tap}},
{{'ns_1@10.3.4.147',729},{0,0,tap}},
{{'ns_1@10.3.4.147',730},{0,0,tap}},
{{'ns_1@10.3.4.147',731},{0,0,tap}},
{{'ns_1@10.3.4.147',732},{0,0,tap}},
{{'ns_1@10.3.4.147',733},{0,0,tap}},
{{'ns_1@10.3.4.147',734},{0,0,tap}},
{{'ns_1@10.3.4.147',735},{0,0,tap}},
{{'ns_1@10.3.4.147',736},{0,0,tap}},
{{'ns_1@10.3.4.147',737},{0,0,tap}},
{{'ns_1@10.3.4.147',738},{0,0,tap}},
{{'ns_1@10.3.4.147',739},{0,0,tap}},
{{'ns_1@10.3.4.147',740},{0,0,tap}},
{{'ns_1@10.3.4.147',741},{0,0,tap}},
{{'ns_1@10.3.4.147',742},{0,0,tap}},
{{'ns_1@10.3.4.147',743},{0,0,tap}},
{{'ns_1@10.3.4.147',744},{0,0,tap}},
{{'ns_1@10.3.4.147',745},{0,0,tap}},
{{'ns_1@10.3.4.147',746},{0,0,tap}},
{{'ns_1@10.3.4.147',747},{0,0,tap}},
{{'ns_1@10.3.4.147',748},{0,0,tap}},
{{'ns_1@10.3.4.147',749},{0,0,tap}},
{{'ns_1@10.3.4.147',750},{0,0,tap}},
{{'ns_1@10.3.4.147',751},{0,0,tap}},
{{'ns_1@10.3.4.147',752},{0,0,tap}},
{{'ns_1@10.3.4.147',753},{0,0,tap}},
{{'ns_1@10.3.4.147',754},{0,0,tap}},
{{'ns_1@10.3.4.147',755},{0,0,tap}},
{{'ns_1@10.3.4.147',756},{0,0,tap}},
{{'ns_1@10.3.4.147',757},{0,0,tap}},
{{'ns_1@10.3.4.147',758},{0,0,tap}},
{{'ns_1@10.3.4.147',759},{0,0,tap}},
{{'ns_1@10.3.4.147',760},{0,0,tap}},
{{'ns_1@10.3.4.147',761},{0,0,tap}},
{{'ns_1@10.3.4.147',762},{0,0,tap}},
{{'ns_1@10.3.4.147',763},{0,0,tap}},
{{'ns_1@10.3.4.147',764},{0,0,tap}},
{{'ns_1@10.3.4.147',765},{0,0,tap}},
{{'ns_1@10.3.4.147',766},{0,0,tap}},
{{'ns_1@10.3.4.147',767},{0,0,tap}},
{{'ns_1@10.3.4.147',768},{0,0,tap}},
{{'ns_1@10.3.4.147',769},{0,0,tap}},
{{'ns_1@10.3.4.147',770},{0,0,tap}},
{{'ns_1@10.3.4.147',771},{0,0,tap}},
{{'ns_1@10.3.4.147',772},{0,0,tap}},
{{'ns_1@10.3.4.147',773},{0,0,tap}},
{{'ns_1@10.3.4.147',774},{0,0,tap}},
{{'ns_1@10.3.4.147',775},{0,0,tap}},
{{'ns_1@10.3.4.147',776},{0,0,tap}},
{{'ns_1@10.3.4.147',777},{0,0,tap}},
{{'ns_1@10.3.4.147',778},{0,0,tap}},
{{'ns_1@10.3.4.147',779},{0,0,tap}},
{{'ns_1@10.3.4.147',780},{0,0,tap}},
{{'ns_1@10.3.4.147',781},{0,0,tap}},
{{'ns_1@10.3.4.147',782},{0,0,tap}},
{{'ns_1@10.3.4.147',783},{0,0,tap}},
{{'ns_1@10.3.4.147',784},{0,0,tap}},
{{'ns_1@10.3.4.147',785},{0,0,tap}},
{{'ns_1@10.3.4.147',786},{0,0,tap}},
{{'ns_1@10.3.4.147',787},{0,0,tap}},
{{'ns_1@10.3.4.147',788},{0,0,tap}},
{{'ns_1@10.3.4.147',789},{0,0,tap}},
{{'ns_1@10.3.4.147',790},{0,0,tap}},
{{'ns_1@10.3.4.147',791},{0,0,tap}},
{{'ns_1@10.3.4.147',792},{0,0,tap}},
{{'ns_1@10.3.4.147',793},{0,0,tap}},
{{'ns_1@10.3.4.147',794},{0,0,tap}},
{{'ns_1@10.3.4.147',795},{0,0,tap}},
{{'ns_1@10.3.4.147',796},{0,0,tap}},
{{'ns_1@10.3.4.147',797},{0,0,tap}},
{{'ns_1@10.3.4.147',798},{0,0,tap}},
{{'ns_1@10.3.4.147',799},{0,0,tap}},
{{'ns_1@10.3.4.147',800},{0,0,tap}},
{{'ns_1@10.3.4.147',801},{0,0,tap}},
{{'ns_1@10.3.4.147',802},{0,0,tap}},
{{'ns_1@10.3.4.147',803},{0,0,tap}},
{{'ns_1@10.3.4.147',804},{0,0,tap}},
{{'ns_1@10.3.4.147',805},{0,0,tap}},
{{'ns_1@10.3.4.147',806},{0,0,tap}},
{{'ns_1@10.3.4.147',807},{0,0,tap}},
{{'ns_1@10.3.4.147',808},{0,0,tap}},
{{'ns_1@10.3.4.147',809},{0,0,tap}},
{{'ns_1@10.3.4.147',810},{0,0,tap}},
{{'ns_1@10.3.4.147',811},{0,0,tap}},
{{'ns_1@10.3.4.147',812},{0,0,tap}},
{{'ns_1@10.3.4.147',813},{0,0,tap}},
{{'ns_1@10.3.4.147',814},{0,0,tap}},
{{'ns_1@10.3.4.147',815},{0,0,tap}},
{{'ns_1@10.3.4.147',816},{0,0,tap}},
{{'ns_1@10.3.4.147',817},{0,0,tap}},
{{'ns_1@10.3.4.147',818},{0,0,tap}},
{{'ns_1@10.3.4.147',819},{0,0,tap}},
{{'ns_1@10.3.4.147',820},{0,0,tap}},
{{'ns_1@10.3.4.147',821},{0,0,tap}},
{{'ns_1@10.3.4.147',822},{0,0,tap}},
{{'ns_1@10.3.4.147',823},{0,0,tap}},
{{'ns_1@10.3.4.147',824},{0,0,tap}},
{{'ns_1@10.3.4.147',825},{0,0,tap}},
{{'ns_1@10.3.4.147',826},{0,0,tap}},
{{'ns_1@10.3.4.147',827},{0,0,tap}},
{{'ns_1@10.3.4.147',828},{0,0,tap}},
{{'ns_1@10.3.4.147',829},{0,0,tap}},
{{'ns_1@10.3.4.147',830},{0,0,tap}},
{{'ns_1@10.3.4.147',831},{0,0,tap}},
{{'ns_1@10.3.4.147',832},{0,0,tap}},
{{'ns_1@10.3.4.147',833},{0,0,tap}},
{{'ns_1@10.3.4.147',834},{0,0,tap}},
{{'ns_1@10.3.4.147',835},{0,0,tap}},
{{'ns_1@10.3.4.147',836},{0,0,tap}},
{{'ns_1@10.3.4.147',837},{0,0,tap}},
{{'ns_1@10.3.4.147',838},{0,0,tap}},
{{'ns_1@10.3.4.147',839},{0,0,tap}},
{{'ns_1@10.3.4.147',840},{0,0,tap}},
{{'ns_1@10.3.4.147',841},{0,0,tap}},
{{'ns_1@10.3.4.147',842},{0,0,tap}},
{{'ns_1@10.3.4.147',843},{0,0,tap}},
{{'ns_1@10.3.4.147',844},{0,0,tap}},
{{'ns_1@10.3.4.147',845},{0,0,tap}},
{{'ns_1@10.3.4.147',846},{0,0,tap}},
{{'ns_1@10.3.4.147',847},{0,0,tap}},
{{'ns_1@10.3.4.147',848},{0,0,tap}},
{{'ns_1@10.3.4.147',849},{0,0,tap}},
{{'ns_1@10.3.4.147',850},{0,0,tap}},
{{'ns_1@10.3.4.147',851},{0,0,tap}},
{{'ns_1@10.3.4.147',852},{0,0,tap}},
{{'ns_1@10.3.4.147',853},{0,0,tap}},
{{'ns_1@10.3.4.147',956},{0,0,tap}},
{{'ns_1@10.3.4.147',957},{0,0,tap}},
{{'ns_1@10.3.4.147',958},{0,0,tap}},
{{'ns_1@10.3.4.147',959},{0,0,tap}},
{{'ns_1@10.3.4.147',960},{0,0,tap}},
{{'ns_1@10.3.4.147',961},{0,0,tap}},
{{'ns_1@10.3.4.147',962},{0,0,tap}},
{{'ns_1@10.3.4.147',963},{0,0,tap}},
{{'ns_1@10.3.4.147',964},{0,0,tap}},
{{'ns_1@10.3.4.147',965},{0,0,tap}},
{{'ns_1@10.3.4.147',966},{0,0,tap}},
{{'ns_1@10.3.4.147',967},{0,0,tap}},
{{'ns_1@10.3.4.147',968},{0,0,tap}},
{{'ns_1@10.3.4.147',969},{0,0,tap}},
{{'ns_1@10.3.4.147',970},{0,0,tap}},
{{'ns_1@10.3.4.147',971},{0,0,tap}},
{{'ns_1@10.3.4.147',972},{0,0,tap}},
{{'ns_1@10.3.4.147',973},{0,0,tap}},
{{'ns_1@10.3.4.147',974},{0,0,tap}},
{{'ns_1@10.3.4.147',975},{0,0,tap}},
{{'ns_1@10.3.4.147',976},{0,0,tap}},
{{'ns_1@10.3.4.147',977},{0,0,tap}},
{{'ns_1@10.3.4.147',978},{0,0,tap}},
{{'ns_1@10.3.4.147',979},{0,0,tap}},
{{'ns_1@10.3.4.147',980},{0,0,tap}},
{{'ns_1@10.3.4.147',981},{0,0,tap}},
{{'ns_1@10.3.4.147',982},{0,0,tap}},
{{'ns_1@10.3.4.147',983},{0,0,tap}},
{{'ns_1@10.3.4.147',984},{0,0,tap}},
{{'ns_1@10.3.4.147',985},{0,0,tap}},
{{'ns_1@10.3.4.147',986},{0,0,tap}},
{{'ns_1@10.3.4.147',987},{0,0,tap}},
{{'ns_1@10.3.4.147',988},{0,0,tap}},
{{'ns_1@10.3.4.147',989},{0,0,tap}},
{{'ns_1@10.3.4.148',69},{0,0,tap}},
{{'ns_1@10.3.4.148',70},{0,0,tap}},
{{'ns_1@10.3.4.148',71},{0,0,tap}},
{{'ns_1@10.3.4.148',72},{0,0,tap}},
{{'ns_1@10.3.4.148',73},{0,0,tap}},
{{'ns_1@10.3.4.148',74},{0,0,tap}},
{{'ns_1@10.3.4.148',75},{0,0,tap}},
{{'ns_1@10.3.4.148',76},{0,0,tap}},
{{'ns_1@10.3.4.148',77},{0,0,tap}},
{{'ns_1@10.3.4.148',78},{0,0,tap}},
{{'ns_1@10.3.4.148',79},{0,0,tap}},
{{'ns_1@10.3.4.148',80},{0,0,tap}},
{{'ns_1@10.3.4.148',81},{0,0,tap}},
{{'ns_1@10.3.4.148',82},{0,0,tap}},
{{'ns_1@10.3.4.148',83},{0,0,tap}},
{{'ns_1@10.3.4.148',84},{0,0,tap}},
{{'ns_1@10.3.4.148',85},{0,0,tap}},
{{'ns_1@10.3.4.148',86},{0,0,tap}},
{{'ns_1@10.3.4.148',87},{0,0,tap}},
{{'ns_1@10.3.4.148',88},{0,0,tap}},
{{'ns_1@10.3.4.148',89},{0,0,tap}},
{{'ns_1@10.3.4.148',90},{0,0,tap}},
{{'ns_1@10.3.4.148',91},{0,0,tap}},
{{'ns_1@10.3.4.148',92},{0,0,tap}},
{{'ns_1@10.3.4.148',93},{0,0,tap}},
{{'ns_1@10.3.4.148',94},{0,0,tap}},
{{'ns_1@10.3.4.148',95},{0,0,tap}},
{{'ns_1@10.3.4.148',96},{0,0,tap}},
{{'ns_1@10.3.4.148',97},{0,0,tap}},
{{'ns_1@10.3.4.148',98},{0,0,tap}},
{{'ns_1@10.3.4.148',99},{0,0,tap}},
{{'ns_1@10.3.4.148',100},{0,0,tap}},
{{'ns_1@10.3.4.148',101},{0,0,tap}},
{{'ns_1@10.3.4.148',102},{0,0,tap}},
{{'ns_1@10.3.4.148',137},{0,0,tap}},
{{'ns_1@10.3.4.148',138},{0,0,tap}},
{{'ns_1@10.3.4.148',139},{0,0,tap}},
{{'ns_1@10.3.4.148',140},{0,0,tap}},
{{'ns_1@10.3.4.148',141},{0,0,tap}},
{{'ns_1@10.3.4.148',142},{0,0,tap}},
{{'ns_1@10.3.4.148',143},{0,0,tap}},
{{'ns_1@10.3.4.148',144},{0,0,tap}},
{{'ns_1@10.3.4.148',145},{0,0,tap}},
{{'ns_1@10.3.4.148',146},{0,0,tap}},
{{'ns_1@10.3.4.148',147},{0,0,tap}},
{{'ns_1@10.3.4.148',148},{0,0,tap}},
{{'ns_1@10.3.4.148',149},{0,0,tap}},
{{'ns_1@10.3.4.148',150},{0,0,tap}},
{{'ns_1@10.3.4.148',151},{0,0,tap}},
{{'ns_1@10.3.4.148',152},{0,0,tap}},
{{'ns_1@10.3.4.148',153},{0,0,tap}},
{{'ns_1@10.3.4.148',154},{0,0,tap}},
{{'ns_1@10.3.4.148',155},{0,0,tap}},
{{'ns_1@10.3.4.148',156},{0,0,tap}},
{{'ns_1@10.3.4.148',157},{0,0,tap}},
{{'ns_1@10.3.4.148',158},{0,0,tap}},
{{'ns_1@10.3.4.148',159},{0,0,tap}},
{{'ns_1@10.3.4.148',160},{0,0,tap}},
{{'ns_1@10.3.4.148',161},{0,0,tap}},
{{'ns_1@10.3.4.148',162},{0,0,tap}},
{{'ns_1@10.3.4.148',163},{0,0,tap}},
{{'ns_1@10.3.4.148',164},{0,0,tap}},
{{'ns_1@10.3.4.148',165},{0,0,tap}},
{{'ns_1@10.3.4.148',166},{0,0,tap}},
{{'ns_1@10.3.4.148',167},{0,0,tap}},
{{'ns_1@10.3.4.148',168},{0,0,tap}},
{{'ns_1@10.3.4.148',169},{0,0,tap}},
{{'ns_1@10.3.4.148',170},{0,0,tap}},
{{'ns_1@10.3.4.148',240},{0,0,tap}},
{{'ns_1@10.3.4.148',241},{0,0,tap}},
{{'ns_1@10.3.4.148',242},{0,0,tap}},
{{'ns_1@10.3.4.148',243},{0,0,tap}},
{{'ns_1@10.3.4.148',244},{0,0,tap}},
{{'ns_1@10.3.4.148',245},{0,0,tap}},
{{'ns_1@10.3.4.148',246},{0,0,tap}},
{{'ns_1@10.3.4.148',247},{0,0,tap}},
{{'ns_1@10.3.4.148',248},{0,0,tap}},
{{'ns_1@10.3.4.148',249},{0,0,tap}},
{{'ns_1@10.3.4.148',250},{0,0,tap}},
{{'ns_1@10.3.4.148',251},{0,0,tap}},
{{'ns_1@10.3.4.148',252},{0,0,tap}},
{{'ns_1@10.3.4.148',253},{0,0,tap}},
{{'ns_1@10.3.4.148',254},{0,0,tap}},
{{'ns_1@10.3.4.148',255},{0,0,tap}},
{{'ns_1@10.3.4.148',256},{0,0,tap}},
{{'ns_1@10.3.4.148',257},{0,0,tap}},
{{'ns_1@10.3.4.148',258},{0,0,tap}},
{{'ns_1@10.3.4.148',259},{0,0,tap}},
{{'ns_1@10.3.4.148',260},{0,0,tap}},
{{'ns_1@10.3.4.148',261},{0,0,tap}},
{{'ns_1@10.3.4.148',262},{0,0,tap}},
{{'ns_1@10.3.4.148',263},{0,0,tap}},
{{'ns_1@10.3.4.148',264},{0,0,tap}},
{{'ns_1@10.3.4.148',265},{0,0,tap}},
{{'ns_1@10.3.4.148',266},{0,0,tap}},
{{'ns_1@10.3.4.148',267},{0,0,tap}},
{{'ns_1@10.3.4.148',268},{0,0,tap}},
{{'ns_1@10.3.4.148',269},{0,0,tap}},
{{'ns_1@10.3.4.148',270},{0,0,tap}},
{{'ns_1@10.3.4.148',271},{0,0,tap}},
{{'ns_1@10.3.4.148',272},{0,0,tap}},
{{'ns_1@10.3.4.148',273},{0,0,tap}},
{{'ns_1@10.3.4.148',411},{0,0,tap}},
{{'ns_1@10.3.4.148',412},{0,0,tap}},
{{'ns_1@10.3.4.148',413},{0,0,tap}},
{{'ns_1@10.3.4.148',414},{0,0,tap}},
{{'ns_1@10.3.4.148',415},{0,0,tap}},
{{'ns_1@10.3.4.148',416},{0,0,tap}},
{{'ns_1@10.3.4.148',417},{0,0,tap}},
{{'ns_1@10.3.4.148',418},{0,0,tap}},
{{'ns_1@10.3.4.148',419},{0,0,tap}},
{{'ns_1@10.3.4.148',420},{0,0,tap}},
{{'ns_1@10.3.4.148',421},{0,0,tap}},
{{'ns_1@10.3.4.148',422},{0,0,tap}},
{{'ns_1@10.3.4.148',423},{0,0,tap}},
{{'ns_1@10.3.4.148',424},{0,0,tap}},
{{'ns_1@10.3.4.148',425},{0,0,tap}},
{{'ns_1@10.3.4.148',426},{0,0,tap}},
{{'ns_1@10.3.4.148',427},{0,0,tap}},
{{'ns_1@10.3.4.148',428},{0,0,tap}},
{{'ns_1@10.3.4.148',429},{0,0,tap}},
{{'ns_1@10.3.4.148',430},{0,0,tap}},
{{'ns_1@10.3.4.148',431},{0,0,tap}},
{{'ns_1@10.3.4.148',432},{0,0,tap}},
{{'ns_1@10.3.4.148',433},{0,0,tap}},
{{'ns_1@10.3.4.148',434},{0,0,tap}},
{{'ns_1@10.3.4.148',435},{0,0,tap}},
{{'ns_1@10.3.4.148',436},{0,0,tap}},
{{'ns_1@10.3.4.148',437},{0,0,tap}},
{{'ns_1@10.3.4.148',438},{0,0,tap}},
{{'ns_1@10.3.4.148',439},{0,0,tap}},
{{'ns_1@10.3.4.148',440},{0,0,tap}},
{{'ns_1@10.3.4.148',441},{0,0,tap}},
{{'ns_1@10.3.4.148',442},{0,0,tap}},
{{'ns_1@10.3.4.148',443},{0,0,tap}},
{{'ns_1@10.3.4.148',444},{0,0,tap}},
{{'ns_1@10.3.4.148',649},{0,0,tap}},
{{'ns_1@10.3.4.148',650},{0,0,tap}},
{{'ns_1@10.3.4.148',651},{0,0,tap}},
{{'ns_1@10.3.4.148',652},{0,0,tap}},
{{'ns_1@10.3.4.148',653},{0,0,tap}},
{{'ns_1@10.3.4.148',654},{0,0,tap}},
{{'ns_1@10.3.4.148',655},{0,0,tap}},
{{'ns_1@10.3.4.148',656},{0,0,tap}},
{{'ns_1@10.3.4.148',657},{0,0,tap}},
{{'ns_1@10.3.4.148',658},{0,0,tap}},
{{'ns_1@10.3.4.148',659},{0,0,tap}},
{{'ns_1@10.3.4.148',660},{0,0,tap}},
{{'ns_1@10.3.4.148',661},{0,0,tap}},
{{'ns_1@10.3.4.148',662},{0,0,tap}},
{{'ns_1@10.3.4.148',663},{0,0,tap}},
{{'ns_1@10.3.4.148',664},{0,0,tap}},
{{'ns_1@10.3.4.148',665},{0,0,tap}},
{{'ns_1@10.3.4.148',666},{0,0,tap}},
{{'ns_1@10.3.4.148',667},{0,0,tap}},
{{'ns_1@10.3.4.148',668},{0,0,tap}},
{{'ns_1@10.3.4.148',669},{0,0,tap}},
{{'ns_1@10.3.4.148',670},{0,0,tap}},
{{'ns_1@10.3.4.148',671},{0,0,tap}},
{{'ns_1@10.3.4.148',672},{0,0,tap}},
{{'ns_1@10.3.4.148',673},{0,0,tap}},
{{'ns_1@10.3.4.148',674},{0,0,tap}},
{{'ns_1@10.3.4.148',675},{0,0,tap}},
{{'ns_1@10.3.4.148',676},{0,0,tap}},
{{'ns_1@10.3.4.148',677},{0,0,tap}},
{{'ns_1@10.3.4.148',678},{0,0,tap}},
{{'ns_1@10.3.4.148',679},{0,0,tap}},
{{'ns_1@10.3.4.148',680},{0,0,tap}},
{{'ns_1@10.3.4.148',681},{0,0,tap}},
{{'ns_1@10.3.4.148',682},{0,0,tap}},
{{'ns_1@10.3.4.148',786},{0,0,tap}},
{{'ns_1@10.3.4.148',787},{0,0,tap}},
{{'ns_1@10.3.4.148',788},{0,0,tap}},
{{'ns_1@10.3.4.148',789},{0,0,tap}},
{{'ns_1@10.3.4.148',790},{0,0,tap}},
{{'ns_1@10.3.4.148',791},{0,0,tap}},
{{'ns_1@10.3.4.148',792},{0,0,tap}},
{{'ns_1@10.3.4.148',793},{0,0,tap}},
{{'ns_1@10.3.4.148',794},{0,0,tap}},
{{'ns_1@10.3.4.148',795},{0,0,tap}},
{{'ns_1@10.3.4.148',796},{0,0,tap}},
{{'ns_1@10.3.4.148',797},{0,0,tap}},
{{'ns_1@10.3.4.148',798},{0,0,tap}},
{{'ns_1@10.3.4.148',799},{0,0,tap}},
{{'ns_1@10.3.4.148',800},{0,0,tap}},
{{'ns_1@10.3.4.148',801},{0,0,tap}},
{{'ns_1@10.3.4.148',802},{0,0,tap}},
{{'ns_1@10.3.4.148',803},{0,0,tap}},
{{'ns_1@10.3.4.148',804},{0,0,tap}},
{{'ns_1@10.3.4.148',805},{0,0,tap}},
{{'ns_1@10.3.4.148',806},{0,0,tap}},
{{'ns_1@10.3.4.148',807},{0,0,tap}},
{{'ns_1@10.3.4.148',808},{0,0,tap}},
{{'ns_1@10.3.4.148',809},{0,0,tap}},
{{'ns_1@10.3.4.148',810},{0,0,tap}},
{{'ns_1@10.3.4.148',811},{0,0,tap}},
{{'ns_1@10.3.4.148',812},{0,0,tap}},
{{'ns_1@10.3.4.148',813},{0,0,tap}},
{{'ns_1@10.3.4.148',814},{0,0,tap}},
{{'ns_1@10.3.4.148',815},{0,0,tap}},
{{'ns_1@10.3.4.148',816},{0,0,tap}},
{{'ns_1@10.3.4.148',817},{0,0,tap}},
{{'ns_1@10.3.4.148',818},{0,0,tap}},
{{'ns_1@10.3.4.148',819},{0,0,tap}},
{{'ns_1@10.3.4.148',888},{0,0,tap}},
{{'ns_1@10.3.4.148',889},{0,0,tap}},
{{'ns_1@10.3.4.148',890},{0,0,tap}},
{{'ns_1@10.3.4.148',891},{0,0,tap}},
{{'ns_1@10.3.4.148',892},{0,0,tap}},
{{'ns_1@10.3.4.148',893},{0,0,tap}},
{{'ns_1@10.3.4.148',894},{0,0,tap}},
{{'ns_1@10.3.4.148',895},{0,0,tap}},
{{'ns_1@10.3.4.148',896},{0,0,tap}},
{{'ns_1@10.3.4.148',897},{0,0,tap}},
{{'ns_1@10.3.4.148',898},{0,0,tap}},
{{'ns_1@10.3.4.148',899},{0,0,tap}},
{{'ns_1@10.3.4.148',900},{0,0,tap}},
{{'ns_1@10.3.4.148',901},{0,0,tap}},
{{'ns_1@10.3.4.148',902},{0,0,tap}},
{{'ns_1@10.3.4.148',903},{0,0,tap}},
{{'ns_1@10.3.4.148',904},{0,0,tap}},
{{'ns_1@10.3.4.148',905},{0,0,tap}},
{{'ns_1@10.3.4.148',906},{0,0,tap}},
{{'ns_1@10.3.4.148',907},{0,0,tap}},
{{'ns_1@10.3.4.148',908},{0,0,tap}},
{{'ns_1@10.3.4.148',909},{0,0,tap}},
{{'ns_1@10.3.4.148',910},{0,0,tap}},
{{'ns_1@10.3.4.148',911},{0,0,tap}},
{{'ns_1@10.3.4.148',912},{0,0,tap}},
{{'ns_1@10.3.4.148',913},{0,0,tap}},
{{'ns_1@10.3.4.148',914},{0,0,tap}},
{{'ns_1@10.3.4.148',915},{0,0,tap}},
{{'ns_1@10.3.4.148',916},{0,0,tap}},
{{'ns_1@10.3.4.148',917},{0,0,tap}},
{{'ns_1@10.3.4.148',918},{0,0,tap}},
{{'ns_1@10.3.4.148',919},{0,0,tap}},
{{'ns_1@10.3.4.148',920},{0,0,tap}},
{{'ns_1@10.3.4.148',921},{0,0,tap}},
{{'ns_1@10.3.4.148',922},{0,0,tap}},
{{'ns_1@10.3.4.148',923},{0,0,tap}},
{{'ns_1@10.3.4.148',924},{0,0,tap}},
{{'ns_1@10.3.4.148',925},{0,0,tap}},
{{'ns_1@10.3.4.148',926},{0,0,tap}},
{{'ns_1@10.3.4.148',927},{0,0,tap}},
{{'ns_1@10.3.4.148',928},{0,0,tap}},
{{'ns_1@10.3.4.148',929},{0,0,tap}},
{{'ns_1@10.3.4.148',930},{0,0,tap}},
{{'ns_1@10.3.4.148',931},{0,0,tap}},
{{'ns_1@10.3.4.148',932},{0,0,tap}},
{{'ns_1@10.3.4.148',933},{0,0,tap}},
{{'ns_1@10.3.4.148',934},{0,0,tap}},
{{'ns_1@10.3.4.148',935},{0,0,tap}},
{{'ns_1@10.3.4.148',936},{0,0,tap}},
{{'ns_1@10.3.4.148',937},{0,0,tap}},
{{'ns_1@10.3.4.148',938},{0,0,tap}},
{{'ns_1@10.3.4.148',939},{0,0,tap}},
{{'ns_1@10.3.4.148',940},{0,0,tap}},
{{'ns_1@10.3.4.148',941},{0,0,tap}},
{{'ns_1@10.3.4.148',942},{0,0,tap}},
{{'ns_1@10.3.4.148',943},{0,0,tap}},
{{'ns_1@10.3.4.148',944},{0,0,tap}},
{{'ns_1@10.3.4.148',945},{0,0,tap}},
{{'ns_1@10.3.4.148',946},{0,0,tap}},
{{'ns_1@10.3.4.148',947},{0,0,tap}},
{{'ns_1@10.3.4.148',948},{0,0,tap}},
{{'ns_1@10.3.4.148',949},{0,0,tap}},
{{'ns_1@10.3.4.148',950},{0,0,tap}},
{{'ns_1@10.3.4.148',951},{0,0,tap}},
{{'ns_1@10.3.4.148',952},{0,0,tap}},
{{'ns_1@10.3.4.148',953},{0,0,tap}},
{{'ns_1@10.3.4.148',954},{0,0,tap}},
{{'ns_1@10.3.4.148',955},{0,0,tap}},
{{'ns_1@10.3.4.148',956},{0,0,tap}},
{{'ns_1@10.3.4.148',957},{0,0,tap}},
{{'ns_1@10.3.4.148',958},{0,0,tap}},
{{'ns_1@10.3.4.148',959},{0,0,tap}},
{{'ns_1@10.3.4.148',960},{0,0,tap}},
{{'ns_1@10.3.4.148',961},{0,0,tap}},
{{'ns_1@10.3.4.148',962},{0,0,tap}},
{{'ns_1@10.3.4.148',963},{0,0,tap}},
{{'ns_1@10.3.4.148',964},{0,0,tap}},
{{'ns_1@10.3.4.148',965},{0,0,tap}},
{{'ns_1@10.3.4.148',966},{0,0,tap}},
{{'ns_1@10.3.4.148',967},{0,0,tap}},
{{'ns_1@10.3.4.148',968},{0,0,tap}},
{{'ns_1@10.3.4.148',969},{0,0,tap}},
{{'ns_1@10.3.4.148',970},{0,0,tap}},
{{'ns_1@10.3.4.148',971},{0,0,tap}},
{{'ns_1@10.3.4.148',972},{0,0,tap}},
{{'ns_1@10.3.4.148',973},{0,0,tap}},
{{'ns_1@10.3.4.148',974},{0,0,tap}},
{{'ns_1@10.3.4.148',975},{0,0,tap}},
{{'ns_1@10.3.4.148',976},{0,0,tap}},
{{'ns_1@10.3.4.148',977},{0,0,tap}},
{{'ns_1@10.3.4.148',978},{0,0,tap}},
{{'ns_1@10.3.4.148',979},{0,0,tap}},
{{'ns_1@10.3.4.148',980},{0,0,tap}},
{{'ns_1@10.3.4.148',981},{0,0,tap}},
{{'ns_1@10.3.4.148',982},{0,0,tap}},
{{'ns_1@10.3.4.148',983},{0,0,tap}},
{{'ns_1@10.3.4.148',984},{0,0,tap}},
{{'ns_1@10.3.4.148',985},{0,0,tap}},
{{'ns_1@10.3.4.148',986},{0,0,tap}},
{{'ns_1@10.3.4.148',987},{0,0,tap}},
{{'ns_1@10.3.4.148',988},{0,0,tap}},
{{'ns_1@10.3.4.148',989},{0,0,tap}},
{{'ns_1@10.3.4.148',990},{0,0,tap}},
{{'ns_1@10.3.4.148',991},{0,0,tap}},
{{'ns_1@10.3.4.148',992},{0,0,tap}},
{{'ns_1@10.3.4.148',993},{0,0,tap}},
{{'ns_1@10.3.4.148',994},{0,0,tap}},
{{'ns_1@10.3.4.148',995},{0,0,tap}},
{{'ns_1@10.3.4.148',996},{0,0,tap}},
{{'ns_1@10.3.4.148',997},{0,0,tap}},
{{'ns_1@10.3.4.148',998},{0,0,tap}},
{{'ns_1@10.3.4.148',999},{0,0,tap}},
{{'ns_1@10.3.4.148',1000},{0,0,tap}},
{{'ns_1@10.3.4.148',1001},{0,0,tap}},
{{'ns_1@10.3.4.148',1002},{0,0,tap}},
{{'ns_1@10.3.4.148',1003},{0,0,tap}},
{{'ns_1@10.3.4.148',1004},{0,0,tap}},
{{'ns_1@10.3.4.148',1005},{0,0,tap}},
{{'ns_1@10.3.4.148',1006},{0,0,tap}},
{{'ns_1@10.3.4.148',1007},{0,0,tap}},
{{'ns_1@10.3.4.148',1008},{0,0,tap}},
{{'ns_1@10.3.4.148',1009},{0,0,tap}},
{{'ns_1@10.3.4.148',1010},{0,0,tap}},
{{'ns_1@10.3.4.148',1011},{0,0,tap}},
{{'ns_1@10.3.4.148',1012},{0,0,tap}},
{{'ns_1@10.3.4.148',1013},{0,0,tap}},
{{'ns_1@10.3.4.148',1014},{0,0,tap}},
{{'ns_1@10.3.4.148',1015},{0,0,tap}},
{{'ns_1@10.3.4.148',1016},{0,0,tap}},
{{'ns_1@10.3.4.148',1017},{0,0,tap}},
{{'ns_1@10.3.4.148',1018},{0,0,tap}},
{{'ns_1@10.3.4.148',1019},{0,0,tap}},
{{'ns_1@10.3.4.148',1020},{0,0,tap}},
{{'ns_1@10.3.4.148',1021},{0,0,tap}},
{{'ns_1@10.3.4.148',1022},{0,0,tap}},
{{'ns_1@10.3.4.148',1023},{0,0,tap}},
{{'ns_1@10.3.4.149',103},{0,0,tap}},
{{'ns_1@10.3.4.149',104},{0,0,tap}},
{{'ns_1@10.3.4.149',105},{0,0,tap}},
{{'ns_1@10.3.4.149',106},{0,0,tap}},
{{'ns_1@10.3.4.149',107},{0,0,tap}},
{{'ns_1@10.3.4.149',108},{0,0,tap}},
{{'ns_1@10.3.4.149',109},{0,0,tap}},
{{'ns_1@10.3.4.149',110},{0,0,tap}},
{{'ns_1@10.3.4.149',111},{0,0,tap}},
{{'ns_1@10.3.4.149',112},{0,0,tap}},
{{'ns_1@10.3.4.149',113},{0,0,tap}},
{{'ns_1@10.3.4.149',114},{0,0,tap}},
{{'ns_1@10.3.4.149',115},{0,0,tap}},
{{'ns_1@10.3.4.149',116},{0,0,tap}},
{{'ns_1@10.3.4.149',117},{0,0,tap}},
{{'ns_1@10.3.4.149',118},{0,0,tap}},
{{'ns_1@10.3.4.149',119},{0,0,tap}},
{{'ns_1@10.3.4.149',120},{0,0,tap}},
{{'ns_1@10.3.4.149',121},{0,0,tap}},
{{'ns_1@10.3.4.149',122},{0,0,tap}},
{{'ns_1@10.3.4.149',123},{0,0,tap}},
{{'ns_1@10.3.4.149',124},{0,0,tap}},
{{'ns_1@10.3.4.149',125},{0,0,tap}},
{{'ns_1@10.3.4.149',126},{0,0,tap}},
{{'ns_1@10.3.4.149',127},{0,0,tap}},
{{'ns_1@10.3.4.149',128},{0,0,tap}},
{{'ns_1@10.3.4.149',129},{0,0,tap}},
{{'ns_1@10.3.4.149',130},{0,0,tap}},
{{'ns_1@10.3.4.149',131},{0,0,tap}},
{{'ns_1@10.3.4.149',132},{0,0,tap}},
{{'ns_1@10.3.4.149',133},{0,0,tap}},
{{'ns_1@10.3.4.149',134},{0,0,tap}},
{{'ns_1@10.3.4.149',135},{0,0,tap}},
{{'ns_1@10.3.4.149',136},{0,0,tap}},
{{'ns_1@10.3.4.149',274},{0,0,tap}},
{{'ns_1@10.3.4.149',275},{0,0,tap}},
{{'ns_1@10.3.4.149',276},{0,0,tap}},
{{'ns_1@10.3.4.149',277},{0,0,tap}},
{{'ns_1@10.3.4.149',278},{0,0,tap}},
{{'ns_1@10.3.4.149',279},{0,0,tap}},
{{'ns_1@10.3.4.149',280},{0,0,tap}},
{{'ns_1@10.3.4.149',281},{0,0,tap}},
{{'ns_1@10.3.4.149',282},{0,0,tap}},
{{'ns_1@10.3.4.149',283},{0,0,tap}},
{{'ns_1@10.3.4.149',284},{0,0,tap}},
{{'ns_1@10.3.4.149',285},{0,0,tap}},
{{'ns_1@10.3.4.149',286},{0,0,tap}},
{{'ns_1@10.3.4.149',287},{0,0,tap}},
{{'ns_1@10.3.4.149',288},{0,0,tap}},
{{'ns_1@10.3.4.149',289},{0,0,tap}},
{{'ns_1@10.3.4.149',290},{0,0,tap}},
{{'ns_1@10.3.4.149',291},{0,0,tap}},
{{'ns_1@10.3.4.149',292},{0,0,tap}},
{{'ns_1@10.3.4.149',293},{0,0,tap}},
{{'ns_1@10.3.4.149',294},{0,0,tap}},
{{'ns_1@10.3.4.149',295},{0,0,tap}},
{{'ns_1@10.3.4.149',296},{0,0,tap}},
{{'ns_1@10.3.4.149',297},{0,0,tap}},
{{'ns_1@10.3.4.149',298},{0,0,tap}},
{{'ns_1@10.3.4.149',299},{0,0,tap}},
{{'ns_1@10.3.4.149',300},{0,0,tap}},
{{'ns_1@10.3.4.149',301},{0,0,tap}},
{{'ns_1@10.3.4.149',302},{0,0,tap}},
{{'ns_1@10.3.4.149',303},{0,0,tap}},
{{'ns_1@10.3.4.149',304},{0,0,tap}},
{{'ns_1@10.3.4.149',305},{0,0,tap}},
{{'ns_1@10.3.4.149',306},{0,0,tap}},
{{'ns_1@10.3.4.149',307},{0,0,tap}},
{{'ns_1@10.3.4.149',308},{0,0,tap}},
{{'ns_1@10.3.4.149',309},{0,0,tap}},
{{'ns_1@10.3.4.149',310},{0,0,tap}},
{{'ns_1@10.3.4.149',311},{0,0,tap}},
{{'ns_1@10.3.4.149',312},{0,0,tap}},
{{'ns_1@10.3.4.149',313},{0,0,tap}},
{{'ns_1@10.3.4.149',314},{0,0,tap}},
{{'ns_1@10.3.4.149',315},{0,0,tap}},
{{'ns_1@10.3.4.149',316},{0,0,tap}},
{{'ns_1@10.3.4.149',317},{0,0,tap}},
{{'ns_1@10.3.4.149',318},{0,0,tap}},
{{'ns_1@10.3.4.149',319},{0,0,tap}},
{{'ns_1@10.3.4.149',320},{0,0,tap}},
{{'ns_1@10.3.4.149',321},{0,0,tap}},
{{'ns_1@10.3.4.149',322},{0,0,tap}},
{{'ns_1@10.3.4.149',323},{0,0,tap}},
{{'ns_1@10.3.4.149',324},{0,0,tap}},
{{'ns_1@10.3.4.149',325},{0,0,tap}},
{{'ns_1@10.3.4.149',326},{0,0,tap}},
{{'ns_1@10.3.4.149',327},{0,0,tap}},
{{'ns_1@10.3.4.149',328},{0,0,tap}},
{{'ns_1@10.3.4.149',329},{0,0,tap}},
{{'ns_1@10.3.4.149',330},{0,0,tap}},
{{'ns_1@10.3.4.149',331},{0,0,tap}},
{{'ns_1@10.3.4.149',332},{0,0,tap}},
{{'ns_1@10.3.4.149',333},{0,0,tap}},
{{'ns_1@10.3.4.149',334},{0,0,tap}},
{{'ns_1@10.3.4.149',335},{0,0,tap}},
{{'ns_1@10.3.4.149',336},{0,0,tap}},
{{'ns_1@10.3.4.149',337},{0,0,tap}},
{{'ns_1@10.3.4.149',338},{0,0,tap}},
{{'ns_1@10.3.4.149',339},{0,0,tap}},
{{'ns_1@10.3.4.149',340},{0,0,tap}},
{{'ns_1@10.3.4.149',341},{0,0,tap}},
{{'ns_1@10.3.4.149',445},{0,0,tap}},
{{'ns_1@10.3.4.149',446},{0,0,tap}},
{{'ns_1@10.3.4.149',447},{0,0,tap}},
{{'ns_1@10.3.4.149',448},{0,0,tap}},
{{'ns_1@10.3.4.149',449},{0,0,tap}},
{{'ns_1@10.3.4.149',450},{0,0,tap}},
{{'ns_1@10.3.4.149',451},{0,0,tap}},
{{'ns_1@10.3.4.149',452},{0,0,tap}},
{{'ns_1@10.3.4.149',453},{0,0,tap}},
{{'ns_1@10.3.4.149',454},{0,0,tap}},
{{'ns_1@10.3.4.149',455},{0,0,tap}},
{{'ns_1@10.3.4.149',456},{0,0,tap}},
{{'ns_1@10.3.4.149',457},{0,0,tap}},
{{'ns_1@10.3.4.149',458},{0,0,tap}},
{{'ns_1@10.3.4.149',459},{0,0,tap}},
{{'ns_1@10.3.4.149',460},{0,0,tap}},
{{'ns_1@10.3.4.149',461},{0,0,tap}},
{{'ns_1@10.3.4.149',462},{0,0,tap}},
{{'ns_1@10.3.4.149',463},{0,0,tap}},
{{'ns_1@10.3.4.149',464},{0,0,tap}},
{{'ns_1@10.3.4.149',465},{0,0,tap}},
{{'ns_1@10.3.4.149',466},{0,0,tap}},
{{'ns_1@10.3.4.149',467},{0,0,tap}},
{{'ns_1@10.3.4.149',468},{0,0,tap}},
{{'ns_1@10.3.4.149',469},{0,0,tap}},
{{'ns_1@10.3.4.149',470},{0,0,tap}},
{{'ns_1@10.3.4.149',471},{0,0,tap}},
{{'ns_1@10.3.4.149',472},{0,0,tap}},
{{'ns_1@10.3.4.149',473},{0,0,tap}},
{{'ns_1@10.3.4.149',474},{0,0,tap}},
{{'ns_1@10.3.4.149',475},{0,0,tap}},
{{'ns_1@10.3.4.149',476},{0,0,tap}},
{{'ns_1@10.3.4.149',477},{0,0,tap}},
{{'ns_1@10.3.4.149',478},{0,0,tap}},
{{'ns_1@10.3.4.149',547},{0,0,tap}},
{{'ns_1@10.3.4.149',548},{0,0,tap}},
{{'ns_1@10.3.4.149',549},{0,0,tap}},
{{'ns_1@10.3.4.149',550},{0,0,tap}},
{{'ns_1@10.3.4.149',551},{0,0,tap}},
{{'ns_1@10.3.4.149',552},{0,0,tap}},
{{'ns_1@10.3.4.149',553},{0,0,tap}},
{{'ns_1@10.3.4.149',554},{0,0,tap}},
{{'ns_1@10.3.4.149',555},{0,0,tap}},
{{'ns_1@10.3.4.149',556},{0,0,tap}},
{{'ns_1@10.3.4.149',557},{0,0,tap}},
{{'ns_1@10.3.4.149',558},{0,0,tap}},
{{'ns_1@10.3.4.149',559},{0,0,tap}},
{{'ns_1@10.3.4.149',560},{0,0,tap}},
{{'ns_1@10.3.4.149',561},{0,0,tap}},
{{'ns_1@10.3.4.149',562},{0,0,tap}},
{{'ns_1@10.3.4.149',563},{0,0,tap}},
{{'ns_1@10.3.4.149',564},{0,0,tap}},
{{'ns_1@10.3.4.149',565},{0,0,tap}},
{{'ns_1@10.3.4.149',566},{0,0,tap}},
{{'ns_1@10.3.4.149',567},{0,0,tap}},
{{'ns_1@10.3.4.149',568},{0,0,tap}},
{{'ns_1@10.3.4.149',569},{0,0,tap}},
{{'ns_1@10.3.4.149',570},{0,0,tap}},
{{'ns_1@10.3.4.149',571},{0,0,tap}},
{{'ns_1@10.3.4.149',572},{0,0,tap}},
{{'ns_1@10.3.4.149',573},{0,0,tap}},
{{'ns_1@10.3.4.149',574},{0,0,tap}},
{{'ns_1@10.3.4.149',575},{0,0,tap}},
{{'ns_1@10.3.4.149',576},{0,0,tap}},
{{'ns_1@10.3.4.149',577},{0,0,tap}},
{{'ns_1@10.3.4.149',578},{0,0,tap}},
{{'ns_1@10.3.4.149',579},{0,0,tap}},
{{'ns_1@10.3.4.149',580},{0,0,tap}},
{{'ns_1@10.3.4.149',581},{0,0,tap}},
{{'ns_1@10.3.4.149',582},{0,0,tap}},
{{'ns_1@10.3.4.149',583},{0,0,tap}},
{{'ns_1@10.3.4.149',584},{0,0,tap}},
{{'ns_1@10.3.4.149',585},{0,0,tap}},
{{'ns_1@10.3.4.149',586},{0,0,tap}},
{{'ns_1@10.3.4.149',587},{0,0,tap}},
{{'ns_1@10.3.4.149',588},{0,0,tap}},
{{'ns_1@10.3.4.149',589},{0,0,tap}},
{{'ns_1@10.3.4.149',590},{0,0,tap}},
{{'ns_1@10.3.4.149',591},{0,0,tap}},
{{'ns_1@10.3.4.149',592},{0,0,tap}},
{{'ns_1@10.3.4.149',593},{0,0,tap}},
{{'ns_1@10.3.4.149',594},{0,0,tap}},
{{'ns_1@10.3.4.149',595},{0,0,tap}},
{{'ns_1@10.3.4.149',596},{0,0,tap}},
{{'ns_1@10.3.4.149',597},{0,0,tap}},
{{'ns_1@10.3.4.149',598},{0,0,tap}},
{{'ns_1@10.3.4.149',599},{0,0,tap}},
{{'ns_1@10.3.4.149',600},{0,0,tap}},
{{'ns_1@10.3.4.149',601},{0,0,tap}},
{{'ns_1@10.3.4.149',602},{0,0,tap}},
{{'ns_1@10.3.4.149',603},{0,0,tap}},
{{'ns_1@10.3.4.149',604},{0,0,tap}},
{{'ns_1@10.3.4.149',605},{0,0,tap}},
{{'ns_1@10.3.4.149',606},{0,0,tap}},
{{'ns_1@10.3.4.149',607},{0,0,tap}},
{{'ns_1@10.3.4.149',608},{0,0,tap}},
{{'ns_1@10.3.4.149',609},{0,0,tap}},
{{'ns_1@10.3.4.149',610},{0,0,tap}},
{{'ns_1@10.3.4.149',611},{0,0,tap}},
{{'ns_1@10.3.4.149',612},{0,0,tap}},
{{'ns_1@10.3.4.149',613},{0,0,tap}},
{{'ns_1@10.3.4.149',614},{0,0,tap}},
{{'ns_1@10.3.4.149',615},{0,0,tap}},
{{'ns_1@10.3.4.149',616},{0,0,tap}},
{{'ns_1@10.3.4.149',617},{0,0,tap}},
{{'ns_1@10.3.4.149',618},{0,0,tap}},
{{'ns_1@10.3.4.149',619},{0,0,tap}},
{{'ns_1@10.3.4.149',620},{0,0,tap}},
{{'ns_1@10.3.4.149',621},{0,0,tap}},
{{'ns_1@10.3.4.149',622},{0,0,tap}},
{{'ns_1@10.3.4.149',623},{0,0,tap}},
{{'ns_1@10.3.4.149',624},{0,0,tap}},
{{'ns_1@10.3.4.149',625},{0,0,tap}},
{{'ns_1@10.3.4.149',626},{0,0,tap}},
{{'ns_1@10.3.4.149',627},{0,0,tap}},
{{'ns_1@10.3.4.149',628},{0,0,tap}},
{{'ns_1@10.3.4.149',629},{0,0,tap}},
{{'ns_1@10.3.4.149',630},{0,0,tap}},
{{'ns_1@10.3.4.149',631},{0,0,tap}},
{{'ns_1@10.3.4.149',632},{0,0,tap}},
{{'ns_1@10.3.4.149',633},{0,0,tap}},
{{'ns_1@10.3.4.149',634},{0,0,tap}},
{{'ns_1@10.3.4.149',635},{0,0,tap}},
{{'ns_1@10.3.4.149',636},{0,0,tap}},
{{'ns_1@10.3.4.149',637},{0,0,tap}},
{{'ns_1@10.3.4.149',638},{0,0,tap}},
{{'ns_1@10.3.4.149',639},{0,0,tap}},
{{'ns_1@10.3.4.149',640},{0,0,tap}},
{{'ns_1@10.3.4.149',641},{0,0,tap}},
{{'ns_1@10.3.4.149',642},{0,0,tap}},
{{'ns_1@10.3.4.149',643},{0,0,tap}},
{{'ns_1@10.3.4.149',644},{0,0,tap}},
{{'ns_1@10.3.4.149',645},{0,0,tap}},
{{'ns_1@10.3.4.149',646},{0,0,tap}},
{{'ns_1@10.3.4.149',647},{0,0,tap}},
{{'ns_1@10.3.4.149',648},{0,0,tap}},
{{'ns_1@10.3.4.149',649},{0,0,tap}},
{{'ns_1@10.3.4.149',650},{0,0,tap}},
{{'ns_1@10.3.4.149',651},{0,0,tap}},
{{'ns_1@10.3.4.149',652},{0,0,tap}},
{{'ns_1@10.3.4.149',653},{0,0,tap}},
{{'ns_1@10.3.4.149',654},{0,0,tap}},
{{'ns_1@10.3.4.149',655},{0,0,tap}},
{{'ns_1@10.3.4.149',656},{0,0,tap}},
{{'ns_1@10.3.4.149',657},{0,0,tap}},
{{'ns_1@10.3.4.149',658},{0,0,tap}},
{{'ns_1@10.3.4.149',659},{0,0,tap}},
{{'ns_1@10.3.4.149',660},{0,0,tap}},
{{'ns_1@10.3.4.149',661},{0,0,tap}},
{{'ns_1@10.3.4.149',662},{0,0,tap}},
{{'ns_1@10.3.4.149',663},{0,0,tap}},
{{'ns_1@10.3.4.149',664},{0,0,tap}},
{{'ns_1@10.3.4.149',665},{0,0,tap}},
{{'ns_1@10.3.4.149',666},{0,0,tap}},
{{'ns_1@10.3.4.149',667},{0,0,tap}},
{{'ns_1@10.3.4.149',668},{0,0,tap}},
{{'ns_1@10.3.4.149',669},{0,0,tap}},
{{'ns_1@10.3.4.149',670},{0,0,tap}},
{{'ns_1@10.3.4.149',671},{0,0,tap}},
{{'ns_1@10.3.4.149',672},{0,0,tap}},
{{'ns_1@10.3.4.149',673},{0,0,tap}},
{{'ns_1@10.3.4.149',674},{0,0,tap}},
{{'ns_1@10.3.4.149',675},{0,0,tap}},
{{'ns_1@10.3.4.149',676},{0,0,tap}},
{{'ns_1@10.3.4.149',677},{0,0,tap}},
{{'ns_1@10.3.4.149',678},{0,0,tap}},
{{'ns_1@10.3.4.149',679},{0,0,tap}},
{{'ns_1@10.3.4.149',680},{0,0,tap}},
{{'ns_1@10.3.4.149',681},{0,0,tap}},
{{'ns_1@10.3.4.149',682},{0,0,tap}},
{{'ns_1@10.3.4.149',820},{0,0,tap}},
{{'ns_1@10.3.4.149',821},{0,0,tap}},
{{'ns_1@10.3.4.149',822},{0,0,tap}},
{{'ns_1@10.3.4.149',823},{0,0,tap}},
{{'ns_1@10.3.4.149',824},{0,0,tap}},
{{'ns_1@10.3.4.149',825},{0,0,tap}},
{{'ns_1@10.3.4.149',826},{0,0,tap}},
{{'ns_1@10.3.4.149',827},{0,0,tap}},
{{'ns_1@10.3.4.149',828},{0,0,tap}},
{{'ns_1@10.3.4.149',829},{0,0,tap}},
{{'ns_1@10.3.4.149',830},{0,0,tap}},
{{'ns_1@10.3.4.149',831},{0,0,tap}},
{{'ns_1@10.3.4.149',832},{0,0,tap}},
{{'ns_1@10.3.4.149',833},{0,0,tap}},
{{'ns_1@10.3.4.149',834},{0,0,tap}},
{{'ns_1@10.3.4.149',835},{0,0,tap}},
{{'ns_1@10.3.4.149',836},{0,0,tap}},
{{'ns_1@10.3.4.149',837},{0,0,tap}},
{{'ns_1@10.3.4.149',838},{0,0,tap}},
{{'ns_1@10.3.4.149',839},{0,0,tap}},
{{'ns_1@10.3.4.149',840},{0,0,tap}},
{{'ns_1@10.3.4.149',841},{0,0,tap}},
{{'ns_1@10.3.4.149',842},{0,0,tap}},
{{'ns_1@10.3.4.149',843},{0,0,tap}},
{{'ns_1@10.3.4.149',844},{0,0,tap}},
{{'ns_1@10.3.4.149',845},{0,0,tap}},
{{'ns_1@10.3.4.149',846},{0,0,tap}},
{{'ns_1@10.3.4.149',847},{0,0,tap}},
{{'ns_1@10.3.4.149',848},{0,0,tap}},
{{'ns_1@10.3.4.149',849},{0,0,tap}},
{{'ns_1@10.3.4.149',850},{0,0,tap}},
{{'ns_1@10.3.4.149',851},{0,0,tap}},
{{'ns_1@10.3.4.149',852},{0,0,tap}},
{{'ns_1@10.3.4.149',853},{0,0,tap}},
{{'ns_1@10.3.4.149',990},{0,0,tap}},
{{'ns_1@10.3.4.149',991},{0,0,tap}},
{{'ns_1@10.3.4.149',992},{0,0,tap}},
{{'ns_1@10.3.4.149',993},{0,0,tap}},
{{'ns_1@10.3.4.149',994},{0,0,tap}},
{{'ns_1@10.3.4.149',995},{0,0,tap}},
{{'ns_1@10.3.4.149',996},{0,0,tap}},
{{'ns_1@10.3.4.149',997},{0,0,tap}},
{{'ns_1@10.3.4.149',998},{0,0,tap}},
{{'ns_1@10.3.4.149',999},{0,0,tap}},
{{'ns_1@10.3.4.149',1000},{0,0,tap}},
{{'ns_1@10.3.4.149',1001},{0,0,tap}},
{{'ns_1@10.3.4.149',1002},{0,0,tap}},
{{'ns_1@10.3.4.149',1003},{0,0,tap}},
{{'ns_1@10.3.4.149',1004},{0,0,tap}},
{{'ns_1@10.3.4.149',1005},{0,0,tap}},
{{'ns_1@10.3.4.149',1006},{0,0,tap}},
{{'ns_1@10.3.4.149',1007},{0,0,tap}},
{{'ns_1@10.3.4.149',1008},{0,0,tap}},
{{'ns_1@10.3.4.149',1009},{0,0,tap}},
{{'ns_1@10.3.4.149',1010},{0,0,tap}},
{{'ns_1@10.3.4.149',1011},{0,0,tap}},
{{'ns_1@10.3.4.149',1012},{0,0,tap}},
{{'ns_1@10.3.4.149',1013},{0,0,tap}},
{{'ns_1@10.3.4.149',1014},{0,0,tap}},
{{'ns_1@10.3.4.149',1015},{0,0,tap}},
{{'ns_1@10.3.4.149',1016},{0,0,tap}},
{{'ns_1@10.3.4.149',1017},{0,0,tap}},
{{'ns_1@10.3.4.149',1018},{0,0,tap}},
{{'ns_1@10.3.4.149',1019},{0,0,tap}},
{{'ns_1@10.3.4.149',1020},{0,0,tap}},
{{'ns_1@10.3.4.149',1021},{0,0,tap}},
{{'ns_1@10.3.4.149',1022},{0,0,tap}},
{{'ns_1@10.3.4.149',1023},{0,0,tap}}]],
[]},
{lists,append,1,
[{file,"lists.erl"},{line,125}]},
{lists,append,1,
[{file,"lists.erl"},{line,125}]},
{ns_rebalance_observer,
initiate_bucket_rebalance,2,
[{file,"src/ns_rebalance_observer.erl"},
{line,209}]},
{ns_rebalance_observer,
handle_set_ff_map,2,
[{file,"src/ns_rebalance_observer.erl"},
{line,257}]},
{ns_rebalance_observer,handle_cast,2,
[{file,"src/ns_rebalance_observer.erl"},
{line,105}]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,604}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]},
{gen_server,call,
[{global,ns_rebalance_observer},
get_detailed_progress,10000]}}},
{trace,
[{gen_server,call,3},
{ns_rebalance_observer,
get_detailed_progress,0},
{ns_doctor,get_detailed_progress,0},
{ns_doctor,do_build_tasks_list,4},
{menelaus_web,handle_tasks,2},
{request_throttler,do_request,3},
{menelaus_web,loop,3},
                                  {mochiweb_http,headers,5}]}]
menelaus_web019 ns_1@10.3.4.144 15:51:29 - Mon Jul 7, 2014
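The badarg in this entry and in the "Rebalance exited" entry below comes from erlang:'++'/2 being handed a per-node reply that is not a list: the gen_server call to {'janitor_agent-default','ns_1@10.3.4.146'} for {get_mass_upr_docs_estimate,[]} exits with function_clause (that node's janitor_agent apparently has no clause for the UPR estimate request and is still reporting tap-based estimates), and the caught {'EXIT',...} tuple then reaches lists:append/1 inside ns_rebalance_observer:initiate_bucket_rebalance/2. A minimal sketch of that failure mode, using a hypothetical module and made-up reply data rather than the actual ns_rebalance_observer code:

%% Sketch only: shows why lists:append/1 raises badarg when one per-node
%% reply is a caught {'EXIT', ...} tuple instead of a list of estimates.
-module(append_badarg_sketch).
-export([demo/0]).

demo() ->
    %% Hypothetical per-node replies; node names, vbucket ids and the
    %% {0,0,tap} estimates are illustrative.
    OkReply1 = [{{'node_a', 0}, {0, 0, tap}}],
    OkReply2 = [{{'node_b', 1}, {0, 0, tap}}],
    BadReply = {'EXIT', {function_clause, hypothetical_stack}},

    %% Fine: every element is a proper list, so '++' concatenates them.
    _ = lists:append([OkReply1, OkReply2]),

    %% Raises badarg in '++' (caught here), mirroring the crash above,
    %% because BadReply is a tuple, not a list.
    catch lists:append([BadReply, OkReply1, OkReply2]).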
Rebalance exited with reason {badarg,
[{erlang,'++',
[{'EXIT',
{{function_clause,
[{janitor_agent,handle_call,
[{get_mass_upr_docs_estimate,[]},
{<0.6956.4>,#Ref<0.0.6.116181>},
{state,"default",<0.6944.4>,
#Ref<17585.0.16.54566>,[],
[missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,missing,missing,missing,
missing,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,replica,
replica,replica,replica,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active,
active,active,active,active,active],
[undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,passive,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined,undefined,undefined,
undefined],
0}]},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]},
{gen_server,call,
[{'janitor_agent-default',
'ns_1@10.3.4.146'},
{get_mass_upr_docs_estimate,[]},
infinity]}}},
[{{'ns_1@10.3.4.147',35},{0,0,tap}},
{{'ns_1@10.3.4.147',36},{0,0,tap}},
{{'ns_1@10.3.4.147',37},{0,0,tap}},
{{'ns_1@10.3.4.147',38},{0,0,tap}},
{{'ns_1@10.3.4.147',39},{0,0,tap}},
{{'ns_1@10.3.4.147',40},{0,0,tap}},
{{'ns_1@10.3.4.147',41},{0,0,tap}},
{{'ns_1@10.3.4.147',42},{0,0,tap}},
{{'ns_1@10.3.4.147',43},{0,0,tap}},
{{'ns_1@10.3.4.147',44},{0,0,tap}},
{{'ns_1@10.3.4.147',45},{0,0,tap}},
{{'ns_1@10.3.4.147',46},{0,0,tap}},
{{'ns_1@10.3.4.147',47},{0,0,tap}},
{{'ns_1@10.3.4.147',48},{0,0,tap}},
{{'ns_1@10.3.4.147',49},{0,0,tap}},
{{'ns_1@10.3.4.147',50},{0,0,tap}},
{{'ns_1@10.3.4.147',51},{0,0,tap}},
{{'ns_1@10.3.4.147',52},{0,0,tap}},
{{'ns_1@10.3.4.147',53},{0,0,tap}},
{{'ns_1@10.3.4.147',54},{0,0,tap}},
{{'ns_1@10.3.4.147',55},{0,0,tap}},
{{'ns_1@10.3.4.147',56},{0,0,tap}},
{{'ns_1@10.3.4.147',57},{0,0,tap}},
{{'ns_1@10.3.4.147',58},{0,0,tap}},
{{'ns_1@10.3.4.147',59},{0,0,tap}},
{{'ns_1@10.3.4.147',60},{0,0,tap}},
{{'ns_1@10.3.4.147',61},{0,0,tap}},
{{'ns_1@10.3.4.147',62},{0,0,tap}},
{{'ns_1@10.3.4.147',63},{0,0,tap}},
{{'ns_1@10.3.4.147',64},{0,0,tap}},
{{'ns_1@10.3.4.147',65},{0,0,tap}},
{{'ns_1@10.3.4.147',66},{0,0,tap}},
{{'ns_1@10.3.4.147',67},{0,0,tap}},
{{'ns_1@10.3.4.147',68},{0,0,tap}},
{{'ns_1@10.3.4.147',205},{0,0,tap}},
{{'ns_1@10.3.4.147',206},{0,0,tap}},
{{'ns_1@10.3.4.147',207},{0,0,tap}},
{{'ns_1@10.3.4.147',208},{0,0,tap}},
{{'ns_1@10.3.4.147',209},{0,0,tap}},
{{'ns_1@10.3.4.147',210},{0,0,tap}},
{{'ns_1@10.3.4.147',211},{0,0,tap}},
{{'ns_1@10.3.4.147',212},{0,0,tap}},
{{'ns_1@10.3.4.147',213},{0,0,tap}},
{{'ns_1@10.3.4.147',214},{0,0,tap}},
{{'ns_1@10.3.4.147',215},{0,0,tap}},
{{'ns_1@10.3.4.147',216},{0,0,tap}},
{{'ns_1@10.3.4.147',217},{0,0,tap}},
{{'ns_1@10.3.4.147',218},{0,0,tap}},
{{'ns_1@10.3.4.147',219},{0,0,tap}},
{{'ns_1@10.3.4.147',220},{0,0,tap}},
{{'ns_1@10.3.4.147',221},{0,0,tap}},
{{'ns_1@10.3.4.147',222},{0,0,tap}},
{{'ns_1@10.3.4.147',223},{0,0,tap}},
{{'ns_1@10.3.4.147',224},{0,0,tap}},
{{'ns_1@10.3.4.147',225},{0,0,tap}},
{{'ns_1@10.3.4.147',226},{0,0,tap}},
{{'ns_1@10.3.4.147',227},{0,0,tap}},
{{'ns_1@10.3.4.147',228},{0,0,tap}},
{{'ns_1@10.3.4.147',229},{0,0,tap}},
{{'ns_1@10.3.4.147',230},{0,0,tap}},
{{'ns_1@10.3.4.147',231},{0,0,tap}},
{{'ns_1@10.3.4.147',232},{0,0,tap}},
{{'ns_1@10.3.4.147',233},{0,0,tap}},
{{'ns_1@10.3.4.147',234},{0,0,tap}},
{{'ns_1@10.3.4.147',235},{0,0,tap}},
{{'ns_1@10.3.4.147',236},{0,0,tap}},
{{'ns_1@10.3.4.147',237},{0,0,tap}},
{{'ns_1@10.3.4.147',238},{0,0,tap}},
{{'ns_1@10.3.4.147',239},{0,0,tap}},
{{'ns_1@10.3.4.147',377},{0,0,tap}},
{{'ns_1@10.3.4.147',378},{0,0,tap}},
{{'ns_1@10.3.4.147',379},{0,0,tap}},
{{'ns_1@10.3.4.147',380},{0,0,tap}},
{{'ns_1@10.3.4.147',381},{0,0,tap}},
{{'ns_1@10.3.4.147',382},{0,0,tap}},
{{'ns_1@10.3.4.147',383},{0,0,tap}},
{{'ns_1@10.3.4.147',384},{0,0,tap}},
{{'ns_1@10.3.4.147',385},{0,0,tap}},
{{'ns_1@10.3.4.147',386},{0,0,tap}},
{{'ns_1@10.3.4.147',387},{0,0,tap}},
{{'ns_1@10.3.4.147',388},{0,0,tap}},
{{'ns_1@10.3.4.147',389},{0,0,tap}},
{{'ns_1@10.3.4.147',390},{0,0,tap}},
{{'ns_1@10.3.4.147',391},{0,0,tap}},
{{'ns_1@10.3.4.147',392},{0,0,tap}},
{{'ns_1@10.3.4.147',393},{0,0,tap}},
{{'ns_1@10.3.4.147',394},{0,0,tap}},
{{'ns_1@10.3.4.147',395},{0,0,tap}},
{{'ns_1@10.3.4.147',396},{0,0,tap}},
{{'ns_1@10.3.4.147',397},{0,0,tap}},
{{'ns_1@10.3.4.147',398},{0,0,tap}},
{{'ns_1@10.3.4.147',399},{0,0,tap}},
{{'ns_1@10.3.4.147',400},{0,0,tap}},
{{'ns_1@10.3.4.147',401},{0,0,tap}},
{{'ns_1@10.3.4.147',402},{0,0,tap}},
{{'ns_1@10.3.4.147',403},{0,0,tap}},
{{'ns_1@10.3.4.147',404},{0,0,tap}},
{{'ns_1@10.3.4.147',405},{0,0,tap}},
{{'ns_1@10.3.4.147',406},{0,0,tap}},
{{'ns_1@10.3.4.147',407},{0,0,tap}},
{{'ns_1@10.3.4.147',408},{0,0,tap}},
{{'ns_1@10.3.4.147',409},{0,0,tap}},
{{'ns_1@10.3.4.147',410},{0,0,tap}},
{{'ns_1@10.3.4.147',479},{0,0,tap}},
{{'ns_1@10.3.4.147',480},{0,0,tap}},
{{'ns_1@10.3.4.147',481},{0,0,tap}},
{{'ns_1@10.3.4.147',482},{0,0,tap}},
{{'ns_1@10.3.4.147',483},{0,0,tap}},
{{'ns_1@10.3.4.147',484},{0,0,tap}},
{{'ns_1@10.3.4.147',485},{0,0,tap}},
{{'ns_1@10.3.4.147',486},{0,0,tap}},
{{'ns_1@10.3.4.147',487},{0,0,tap}},
{{'ns_1@10.3.4.147',488},{0,0,tap}},
{{'ns_1@10.3.4.147',489},{0,0,tap}},
{{'ns_1@10.3.4.147',490},{0,0,tap}},
{{'ns_1@10.3.4.147',491},{0,0,tap}},
{{'ns_1@10.3.4.147',492},{0,0,tap}},
{{'ns_1@10.3.4.147',493},{0,0,tap}},
{{'ns_1@10.3.4.147',494},{0,0,tap}},
{{'ns_1@10.3.4.147',495},{0,0,tap}},
{{'ns_1@10.3.4.147',496},{0,0,tap}},
{{'ns_1@10.3.4.147',497},{0,0,tap}},
{{'ns_1@10.3.4.147',498},{0,0,tap}},
{{'ns_1@10.3.4.147',499},{0,0,tap}},
{{'ns_1@10.3.4.147',500},{0,0,tap}},
{{'ns_1@10.3.4.147',501},{0,0,tap}},
{{'ns_1@10.3.4.147',502},{0,0,tap}},
{{'ns_1@10.3.4.147',503},{0,0,tap}},
{{'ns_1@10.3.4.147',504},{0,0,tap}},
{{'ns_1@10.3.4.147',505},{0,0,tap}},
{{'ns_1@10.3.4.147',506},{0,0,tap}},
{{'ns_1@10.3.4.147',507},{0,0,tap}},
{{'ns_1@10.3.4.147',508},{0,0,tap}},
{{'ns_1@10.3.4.147',509},{0,0,tap}},
{{'ns_1@10.3.4.147',510},{0,0,tap}},
{{'ns_1@10.3.4.147',511},{0,0,tap}},
{{'ns_1@10.3.4.147',512},{0,0,tap}},
{{'ns_1@10.3.4.147',615},{0,0,tap}},
{{'ns_1@10.3.4.147',616},{0,0,tap}},
{{'ns_1@10.3.4.147',617},{0,0,tap}},
{{'ns_1@10.3.4.147',618},{0,0,tap}},
{{'ns_1@10.3.4.147',619},{0,0,tap}},
{{'ns_1@10.3.4.147',620},{0,0,tap}},
{{'ns_1@10.3.4.147',621},{0,0,tap}},
{{'ns_1@10.3.4.147',622},{0,0,tap}},
{{'ns_1@10.3.4.147',623},{0,0,tap}},
{{'ns_1@10.3.4.147',624},{0,0,tap}},
{{'ns_1@10.3.4.147',625},{0,0,tap}},
{{'ns_1@10.3.4.147',626},{0,0,tap}},
{{'ns_1@10.3.4.147',627},{0,0,tap}},
{{'ns_1@10.3.4.147',628},{0,0,tap}},
{{'ns_1@10.3.4.147',629},{0,0,tap}},
{{'ns_1@10.3.4.147',630},{0,0,tap}},
{{'ns_1@10.3.4.147',631},{0,0,tap}},
{{'ns_1@10.3.4.147',632},{0,0,tap}},
{{'ns_1@10.3.4.147',633},{0,0,tap}},
{{'ns_1@10.3.4.147',634},{0,0,tap}},
{{'ns_1@10.3.4.147',635},{0,0,tap}},
{{'ns_1@10.3.4.147',636},{0,0,tap}},
{{'ns_1@10.3.4.147',637},{0,0,tap}},
{{'ns_1@10.3.4.147',638},{0,0,tap}},
{{'ns_1@10.3.4.147',639},{0,0,tap}},
{{'ns_1@10.3.4.147',640},{0,0,tap}},
{{'ns_1@10.3.4.147',641},{0,0,tap}},
{{'ns_1@10.3.4.147',642},{0,0,tap}},
{{'ns_1@10.3.4.147',643},{0,0,tap}},
{{'ns_1@10.3.4.147',644},{0,0,tap}},
{{'ns_1@10.3.4.147',645},{0,0,tap}},
{{'ns_1@10.3.4.147',646},{0,0,tap}},
{{'ns_1@10.3.4.147',647},{0,0,tap}},
{{'ns_1@10.3.4.147',648},{0,0,tap}},
{{'ns_1@10.3.4.147',717},{0,0,tap}},
{{'ns_1@10.3.4.147',718},{0,0,tap}},
{{'ns_1@10.3.4.147',719},{0,0,tap}},
{{'ns_1@10.3.4.147',720},{0,0,tap}},
{{'ns_1@10.3.4.147',721},{0,0,tap}},
{{'ns_1@10.3.4.147',722},{0,0,tap}},
{{'ns_1@10.3.4.147',723},{0,0,tap}},
{{'ns_1@10.3.4.147',724},{0,0,tap}},
{{'ns_1@10.3.4.147',725},{0,0,tap}},
{{'ns_1@10.3.4.147',726},{0,0,tap}},
{{'ns_1@10.3.4.147',727},{0,0,tap}},
{{'ns_1@10.3.4.147',728},{0,0,tap}},
{{'ns_1@10.3.4.147',729},{0,0,tap}},
{{'ns_1@10.3.4.147',730},{0,0,tap}},
{{'ns_1@10.3.4.147',731},{0,0,tap}},
{{'ns_1@10.3.4.147',732},{0,0,tap}},
{{'ns_1@10.3.4.147',733},{0,0,tap}},
{{'ns_1@10.3.4.147',734},{0,0,tap}},
{{'ns_1@10.3.4.147',735},{0,0,tap}},
{{'ns_1@10.3.4.147',736},{0,0,tap}},
{{'ns_1@10.3.4.147',737},{0,0,tap}},
{{'ns_1@10.3.4.147',738},{0,0,tap}},
{{'ns_1@10.3.4.147',739},{0,0,tap}},
{{'ns_1@10.3.4.147',740},{0,0,tap}},
{{'ns_1@10.3.4.147',741},{0,0,tap}},
{{'ns_1@10.3.4.147',742},{0,0,tap}},
{{'ns_1@10.3.4.147',743},{0,0,tap}},
{{'ns_1@10.3.4.147',744},{0,0,tap}},
{{'ns_1@10.3.4.147',745},{0,0,tap}},
{{'ns_1@10.3.4.147',746},{0,0,tap}},
{{'ns_1@10.3.4.147',747},{0,0,tap}},
{{'ns_1@10.3.4.147',748},{0,0,tap}},
{{'ns_1@10.3.4.147',749},{0,0,tap}},
{{'ns_1@10.3.4.147',750},{0,0,tap}},
{{'ns_1@10.3.4.147',751},{0,0,tap}},
{{'ns_1@10.3.4.147',752},{0,0,tap}},
{{'ns_1@10.3.4.147',753},{0,0,tap}},
{{'ns_1@10.3.4.147',754},{0,0,tap}},
{{'ns_1@10.3.4.147',755},{0,0,tap}},
{{'ns_1@10.3.4.147',756},{0,0,tap}},
{{'ns_1@10.3.4.147',757},{0,0,tap}},
{{'ns_1@10.3.4.147',758},{0,0,tap}},
{{'ns_1@10.3.4.147',759},{0,0,tap}},
{{'ns_1@10.3.4.147',760},{0,0,tap}},
{{'ns_1@10.3.4.147',761},{0,0,tap}},
{{'ns_1@10.3.4.147',762},{0,0,tap}},
{{'ns_1@10.3.4.147',763},{0,0,tap}},
{{'ns_1@10.3.4.147',764},{0,0,tap}},
{{'ns_1@10.3.4.147',765},{0,0,tap}},
{{'ns_1@10.3.4.147',766},{0,0,tap}},
{{'ns_1@10.3.4.147',767},{0,0,tap}},
{{'ns_1@10.3.4.147',768},{0,0,tap}},
{{'ns_1@10.3.4.147',769},{0,0,tap}},
{{'ns_1@10.3.4.147',770},{0,0,tap}},
{{'ns_1@10.3.4.147',771},{0,0,tap}},
{{'ns_1@10.3.4.147',772},{0,0,tap}},
{{'ns_1@10.3.4.147',773},{0,0,tap}},
{{'ns_1@10.3.4.147',774},{0,0,tap}},
{{'ns_1@10.3.4.147',775},{0,0,tap}},
{{'ns_1@10.3.4.147',776},{0,0,tap}},
{{'ns_1@10.3.4.147',777},{0,0,tap}},
{{'ns_1@10.3.4.147',778},{0,0,tap}},
{{'ns_1@10.3.4.147',779},{0,0,tap}},
{{'ns_1@10.3.4.147',780},{0,0,tap}},
{{'ns_1@10.3.4.147',781},{0,0,tap}},
{{'ns_1@10.3.4.147',782},{0,0,tap}},
{{'ns_1@10.3.4.147',783},{0,0,tap}},
{{'ns_1@10.3.4.147',784},{0,0,tap}},
{{'ns_1@10.3.4.147',785},{0,0,tap}},
{{'ns_1@10.3.4.147',786},{0,0,tap}},
{{'ns_1@10.3.4.147',787},{0,0,tap}},
{{'ns_1@10.3.4.147',788},{0,0,tap}},
{{'ns_1@10.3.4.147',789},{0,0,tap}},
{{'ns_1@10.3.4.147',790},{0,0,tap}},
{{'ns_1@10.3.4.147',791},{0,0,tap}},
{{'ns_1@10.3.4.147',792},{0,0,tap}},
{{'ns_1@10.3.4.147',793},{0,0,tap}},
{{'ns_1@10.3.4.147',794},{0,0,tap}},
{{'ns_1@10.3.4.147',795},{0,0,tap}},
{{'ns_1@10.3.4.147',796},{0,0,tap}},
{{'ns_1@10.3.4.147',797},{0,0,tap}},
{{'ns_1@10.3.4.147',798},{0,0,tap}},
{{'ns_1@10.3.4.147',799},{0,0,tap}},
{{'ns_1@10.3.4.147',800},{0,0,tap}},
{{'ns_1@10.3.4.147',801},{0,0,tap}},
{{'ns_1@10.3.4.147',802},{0,0,tap}},
{{'ns_1@10.3.4.147',803},{0,0,tap}},
{{'ns_1@10.3.4.147',804},{0,0,tap}},
{{'ns_1@10.3.4.147',805},{0,0,tap}},
{{'ns_1@10.3.4.147',806},{0,0,tap}},
{{'ns_1@10.3.4.147',807},{0,0,tap}},
{{'ns_1@10.3.4.147',808},{0,0,tap}},
{{'ns_1@10.3.4.147',809},{0,0,tap}},
{{'ns_1@10.3.4.147',810},{0,0,tap}},
{{'ns_1@10.3.4.147',811},{0,0,tap}},
{{'ns_1@10.3.4.147',812},{0,0,tap}},
{{'ns_1@10.3.4.147',813},{0,0,tap}},
{{'ns_1@10.3.4.147',814},{0,0,tap}},
{{'ns_1@10.3.4.147',815},{0,0,tap}},
{{'ns_1@10.3.4.147',816},{0,0,tap}},
{{'ns_1@10.3.4.147',817},{0,0,tap}},
{{'ns_1@10.3.4.147',818},{0,0,tap}},
{{'ns_1@10.3.4.147',819},{0,0,tap}},
{{'ns_1@10.3.4.147',820},{0,0,tap}},
{{'ns_1@10.3.4.147',821},{0,0,tap}},
{{'ns_1@10.3.4.147',822},{0,0,tap}},
{{'ns_1@10.3.4.147',823},{0,0,tap}},
{{'ns_1@10.3.4.147',824},{0,0,tap}},
{{'ns_1@10.3.4.147',825},{0,0,tap}},
{{'ns_1@10.3.4.147',826},{0,0,tap}},
{{'ns_1@10.3.4.147',827},{0,0,tap}},
{{'ns_1@10.3.4.147',828},{0,0,tap}},
{{'ns_1@10.3.4.147',829},{0,0,tap}},
{{'ns_1@10.3.4.147',830},{0,0,tap}},
{{'ns_1@10.3.4.147',831},{0,0,tap}},
{{'ns_1@10.3.4.147',832},{0,0,tap}},
{{'ns_1@10.3.4.147',833},{0,0,tap}},
{{'ns_1@10.3.4.147',834},{0,0,tap}},
{{'ns_1@10.3.4.147',835},{0,0,tap}},
{{'ns_1@10.3.4.147',836},{0,0,tap}},
{{'ns_1@10.3.4.147',837},{0,0,tap}},
{{'ns_1@10.3.4.147',838},{0,0,tap}},
{{'ns_1@10.3.4.147',839},{0,0,tap}},
{{'ns_1@10.3.4.147',840},{0,0,tap}},
{{'ns_1@10.3.4.147',841},{0,0,tap}},
{{'ns_1@10.3.4.147',842},{0,0,tap}},
{{'ns_1@10.3.4.147',843},{0,0,tap}},
{{'ns_1@10.3.4.147',844},{0,0,tap}},
{{'ns_1@10.3.4.147',845},{0,0,tap}},
{{'ns_1@10.3.4.147',846},{0,0,tap}},
{{'ns_1@10.3.4.147',847},{0,0,tap}},
{{'ns_1@10.3.4.147',848},{0,0,tap}},
{{'ns_1@10.3.4.147',849},{0,0,tap}},
{{'ns_1@10.3.4.147',850},{0,0,tap}},
{{'ns_1@10.3.4.147',851},{0,0,tap}},
{{'ns_1@10.3.4.147',852},{0,0,tap}},
{{'ns_1@10.3.4.147',853},{0,0,tap}},
{{'ns_1@10.3.4.147',956},{0,0,tap}},
{{'ns_1@10.3.4.147',957},{0,0,tap}},
{{'ns_1@10.3.4.147',958},{0,0,tap}},
{{'ns_1@10.3.4.147',959},{0,0,tap}},
{{'ns_1@10.3.4.147',960},{0,0,tap}},
{{'ns_1@10.3.4.147',961},{0,0,tap}},
{{'ns_1@10.3.4.147',962},{0,0,tap}},
{{'ns_1@10.3.4.147',963},{0,0,tap}},
{{'ns_1@10.3.4.147',964},{0,0,tap}},
{{'ns_1@10.3.4.147',965},{0,0,tap}},
{{'ns_1@10.3.4.147',966},{0,0,tap}},
{{'ns_1@10.3.4.147',967},{0,0,tap}},
{{'ns_1@10.3.4.147',968},{0,0,tap}},
{{'ns_1@10.3.4.147',969},{0,0,tap}},
{{'ns_1@10.3.4.147',970},{0,0,tap}},
{{'ns_1@10.3.4.147',971},{0,0,tap}},
{{'ns_1@10.3.4.147',972},{0,0,tap}},
{{'ns_1@10.3.4.147',973},{0,0,tap}},
{{'ns_1@10.3.4.147',974},{0,0,tap}},
{{'ns_1@10.3.4.147',975},{0,0,tap}},
{{'ns_1@10.3.4.147',976},{0,0,tap}},
{{'ns_1@10.3.4.147',977},{0,0,tap}},
{{'ns_1@10.3.4.147',978},{0,0,tap}},
{{'ns_1@10.3.4.147',979},{0,0,tap}},
{{'ns_1@10.3.4.147',980},{0,0,tap}},
{{'ns_1@10.3.4.147',981},{0,0,tap}},
{{'ns_1@10.3.4.147',982},{0,0,tap}},
{{'ns_1@10.3.4.147',983},{0,0,tap}},
{{'ns_1@10.3.4.147',984},{0,0,tap}},
{{'ns_1@10.3.4.147',985},{0,0,tap}},
{{'ns_1@10.3.4.147',986},{0,0,tap}},
{{'ns_1@10.3.4.147',987},{0,0,tap}},
{{'ns_1@10.3.4.147',988},{0,0,tap}},
{{'ns_1@10.3.4.147',989},{0,0,tap}},
{{'ns_1@10.3.4.148',69},{0,0,tap}},
{{'ns_1@10.3.4.148',70},{0,0,tap}},
{{'ns_1@10.3.4.148',71},{0,0,tap}},
{{'ns_1@10.3.4.148',72},{0,0,tap}},
{{'ns_1@10.3.4.148',73},{0,0,tap}},
{{'ns_1@10.3.4.148',74},{0,0,tap}},
{{'ns_1@10.3.4.148',75},{0,0,tap}},
{{'ns_1@10.3.4.148',76},{0,0,tap}},
{{'ns_1@10.3.4.148',77},{0,0,tap}},
{{'ns_1@10.3.4.148',78},{0,0,tap}},
{{'ns_1@10.3.4.148',79},{0,0,tap}},
{{'ns_1@10.3.4.148',80},{0,0,tap}},
{{'ns_1@10.3.4.148',81},{0,0,tap}},
{{'ns_1@10.3.4.148',82},{0,0,tap}},
{{'ns_1@10.3.4.148',83},{0,0,tap}},
{{'ns_1@10.3.4.148',84},{0,0,tap}},
{{'ns_1@10.3.4.148',85},{0,0,tap}},
{{'ns_1@10.3.4.148',86},{0,0,tap}},
{{'ns_1@10.3.4.148',87},{0,0,tap}},
{{'ns_1@10.3.4.148',88},{0,0,tap}},
{{'ns_1@10.3.4.148',89},{0,0,tap}},
{{'ns_1@10.3.4.148',90},{0,0,tap}},
{{'ns_1@10.3.4.148',91},{0,0,tap}},
{{'ns_1@10.3.4.148',92},{0,0,tap}},
{{'ns_1@10.3.4.148',93},{0,0,tap}},
{{'ns_1@10.3.4.148',94},{0,0,tap}},
{{'ns_1@10.3.4.148',95},{0,0,tap}},
{{'ns_1@10.3.4.148',96},{0,0,tap}},
{{'ns_1@10.3.4.148',97},{0,0,tap}},
{{'ns_1@10.3.4.148',98},{0,0,tap}},
{{'ns_1@10.3.4.148',99},{0,0,tap}},
{{'ns_1@10.3.4.148',100},{0,0,tap}},
{{'ns_1@10.3.4.148',101},{0,0,tap}},
{{'ns_1@10.3.4.148',102},{0,0,tap}},
{{'ns_1@10.3.4.148',137},{0,0,tap}},
{{'ns_1@10.3.4.148',138},{0,0,tap}},
{{'ns_1@10.3.4.148',139},{0,0,tap}},
{{'ns_1@10.3.4.148',140},{0,0,tap}},
{{'ns_1@10.3.4.148',141},{0,0,tap}},
{{'ns_1@10.3.4.148',142},{0,0,tap}},
{{'ns_1@10.3.4.148',143},{0,0,tap}},
{{'ns_1@10.3.4.148',144},{0,0,tap}},
{{'ns_1@10.3.4.148',145},{0,0,tap}},
{{'ns_1@10.3.4.148',146},{0,0,tap}},
{{'ns_1@10.3.4.148',147},{0,0,tap}},
{{'ns_1@10.3.4.148',148},{0,0,tap}},
{{'ns_1@10.3.4.148',149},{0,0,tap}},
{{'ns_1@10.3.4.148',150},{0,0,tap}},
{{'ns_1@10.3.4.148',151},{0,0,tap}},
{{'ns_1@10.3.4.148',152},{0,0,tap}},
{{'ns_1@10.3.4.148',153},{0,0,tap}},
{{'ns_1@10.3.4.148',154},{0,0,tap}},
{{'ns_1@10.3.4.148',155},{0,0,tap}},
{{'ns_1@10.3.4.148',156},{0,0,tap}},
{{'ns_1@10.3.4.148',157},{0,0,tap}},
{{'ns_1@10.3.4.148',158},{0,0,tap}},
{{'ns_1@10.3.4.148',159},{0,0,tap}},
{{'ns_1@10.3.4.148',160},{0,0,tap}},
{{'ns_1@10.3.4.148',161},{0,0,tap}},
{{'ns_1@10.3.4.148',162},{0,0,tap}},
{{'ns_1@10.3.4.148',163},{0,0,tap}},
{{'ns_1@10.3.4.148',164},{0,0,tap}},
{{'ns_1@10.3.4.148',165},{0,0,tap}},
{{'ns_1@10.3.4.148',166},{0,0,tap}},
{{'ns_1@10.3.4.148',167},{0,0,tap}},
{{'ns_1@10.3.4.148',168},{0,0,tap}},
{{'ns_1@10.3.4.148',169},{0,0,tap}},
{{'ns_1@10.3.4.148',170},{0,0,tap}},
{{'ns_1@10.3.4.148',240},{0,0,tap}},
{{'ns_1@10.3.4.148',241},{0,0,tap}},
{{'ns_1@10.3.4.148',242},{0,0,tap}},
{{'ns_1@10.3.4.148',243},{0,0,tap}},
{{'ns_1@10.3.4.148',244},{0,0,tap}},
{{'ns_1@10.3.4.148',245},{0,0,tap}},
{{'ns_1@10.3.4.148',246},{0,0,tap}},
{{'ns_1@10.3.4.148',247},{0,0,tap}},
{{'ns_1@10.3.4.148',248},{0,0,tap}},
{{'ns_1@10.3.4.148',249},{0,0,tap}},
{{'ns_1@10.3.4.148',250},{0,0,tap}},
{{'ns_1@10.3.4.148',251},{0,0,tap}},
{{'ns_1@10.3.4.148',252},{0,0,tap}},
{{'ns_1@10.3.4.148',253},{0,0,tap}},
{{'ns_1@10.3.4.148',254},{0,0,tap}},
{{'ns_1@10.3.4.148',255},{0,0,tap}},
{{'ns_1@10.3.4.148',256},{0,0,tap}},
{{'ns_1@10.3.4.148',257},{0,0,tap}},
{{'ns_1@10.3.4.148',258},{0,0,tap}},
{{'ns_1@10.3.4.148',259},{0,0,tap}},
{{'ns_1@10.3.4.148',260},{0,0,tap}},
{{'ns_1@10.3.4.148',261},{0,0,tap}},
{{'ns_1@10.3.4.148',262},{0,0,tap}},
{{'ns_1@10.3.4.148',263},{0,0,tap}},
{{'ns_1@10.3.4.148',264},{0,0,tap}},
{{'ns_1@10.3.4.148',265},{0,0,tap}},
{{'ns_1@10.3.4.148',266},{0,0,tap}},
{{'ns_1@10.3.4.148',267},{0,0,tap}},
{{'ns_1@10.3.4.148',268},{0,0,tap}},
{{'ns_1@10.3.4.148',269},{0,0,tap}},
{{'ns_1@10.3.4.148',270},{0,0,tap}},
{{'ns_1@10.3.4.148',271},{0,0,tap}},
{{'ns_1@10.3.4.148',272},{0,0,tap}},
{{'ns_1@10.3.4.148',273},{0,0,tap}},
{{'ns_1@10.3.4.148',411},{0,0,tap}},
{{'ns_1@10.3.4.148',412},{0,0,tap}},
{{'ns_1@10.3.4.148',413},{0,0,tap}},
{{'ns_1@10.3.4.148',414},{0,0,tap}},
{{'ns_1@10.3.4.148',415},{0,0,tap}},
{{'ns_1@10.3.4.148',416},{0,0,tap}},
{{'ns_1@10.3.4.148',417},{0,0,tap}},
{{'ns_1@10.3.4.148',418},{0,0,tap}},
{{'ns_1@10.3.4.148',419},{0,0,tap}},
{{'ns_1@10.3.4.148',420},{0,0,tap}},
{{'ns_1@10.3.4.148',421},{0,0,tap}},
{{'ns_1@10.3.4.148',422},{0,0,tap}},
{{'ns_1@10.3.4.148',423},{0,0,tap}},
{{'ns_1@10.3.4.148',424},{0,0,tap}},
{{'ns_1@10.3.4.148',425},{0,0,tap}},
{{'ns_1@10.3.4.148',426},{0,0,tap}},
{{'ns_1@10.3.4.148',427},{0,0,tap}},
{{'ns_1@10.3.4.148',428},{0,0,tap}},
{{'ns_1@10.3.4.148',429},{0,0,tap}},
{{'ns_1@10.3.4.148',430},{0,0,tap}},
{{'ns_1@10.3.4.148',431},{0,0,tap}},
{{'ns_1@10.3.4.148',432},{0,0,tap}},
{{'ns_1@10.3.4.148',433},{0,0,tap}},
{{'ns_1@10.3.4.148',434},{0,0,tap}},
{{'ns_1@10.3.4.148',435},{0,0,tap}},
{{'ns_1@10.3.4.148',436},{0,0,tap}},
{{'ns_1@10.3.4.148',437},{0,0,tap}},
{{'ns_1@10.3.4.148',438},{0,0,tap}},
{{'ns_1@10.3.4.148',439},{0,0,tap}},
{{'ns_1@10.3.4.148',440},{0,0,tap}},
{{'ns_1@10.3.4.148',441},{0,0,tap}},
{{'ns_1@10.3.4.148',442},{0,0,tap}},
{{'ns_1@10.3.4.148',443},{0,0,tap}},
{{'ns_1@10.3.4.148',444},{0,0,tap}},
{{'ns_1@10.3.4.148',649},{0,0,tap}},
{{'ns_1@10.3.4.148',650},{0,0,tap}},
{{'ns_1@10.3.4.148',651},{0,0,tap}},
{{'ns_1@10.3.4.148',652},{0,0,tap}},
{{'ns_1@10.3.4.148',653},{0,0,tap}},
{{'ns_1@10.3.4.148',654},{0,0,tap}},
{{'ns_1@10.3.4.148',655},{0,0,tap}},
{{'ns_1@10.3.4.148',656},{0,0,tap}},
{{'ns_1@10.3.4.148',657},{0,0,tap}},
{{'ns_1@10.3.4.148',658},{0,0,tap}},
{{'ns_1@10.3.4.148',659},{0,0,tap}},
{{'ns_1@10.3.4.148',660},{0,0,tap}},
{{'ns_1@10.3.4.148',661},{0,0,tap}},
{{'ns_1@10.3.4.148',662},{0,0,tap}},
{{'ns_1@10.3.4.148',663},{0,0,tap}},
{{'ns_1@10.3.4.148',664},{0,0,tap}},
{{'ns_1@10.3.4.148',665},{0,0,tap}},
{{'ns_1@10.3.4.148',666},{0,0,tap}},
{{'ns_1@10.3.4.148',667},{0,0,tap}},
{{'ns_1@10.3.4.148',668},{0,0,tap}},
{{'ns_1@10.3.4.148',669},{0,0,tap}},
{{'ns_1@10.3.4.148',670},{0,0,tap}},
{{'ns_1@10.3.4.148',671},{0,0,tap}},
{{'ns_1@10.3.4.148',672},{0,0,tap}},
{{'ns_1@10.3.4.148',673},{0,0,tap}},
{{'ns_1@10.3.4.148',674},{0,0,tap}},
{{'ns_1@10.3.4.148',675},{0,0,tap}},
{{'ns_1@10.3.4.148',676},{0,0,tap}},
{{'ns_1@10.3.4.148',677},{0,0,tap}},
{{'ns_1@10.3.4.148',678},{0,0,tap}},
{{'ns_1@10.3.4.148',679},{0,0,tap}},
{{'ns_1@10.3.4.148',680},{0,0,tap}},
{{'ns_1@10.3.4.148',681},{0,0,tap}},
{{'ns_1@10.3.4.148',682},{0,0,tap}},
{{'ns_1@10.3.4.148',786},{0,0,tap}},
{{'ns_1@10.3.4.148',787},{0,0,tap}},
{{'ns_1@10.3.4.148',788},{0,0,tap}},
{{'ns_1@10.3.4.148',789},{0,0,tap}},
{{'ns_1@10.3.4.148',790},{0,0,tap}},
{{'ns_1@10.3.4.148',791},{0,0,tap}},
{{'ns_1@10.3.4.148',792},{0,0,tap}},
{{'ns_1@10.3.4.148',793},{0,0,tap}},
{{'ns_1@10.3.4.148',794},{0,0,tap}},
{{'ns_1@10.3.4.148',795},{0,0,tap}},
{{'ns_1@10.3.4.148',796},{0,0,tap}},
{{'ns_1@10.3.4.148',797},{0,0,tap}},
{{'ns_1@10.3.4.148',798},{0,0,tap}},
{{'ns_1@10.3.4.148',799},{0,0,tap}},
{{'ns_1@10.3.4.148',800},{0,0,tap}},
{{'ns_1@10.3.4.148',801},{0,0,tap}},
{{'ns_1@10.3.4.148',802},{0,0,tap}},
{{'ns_1@10.3.4.148',803},{0,0,tap}},
{{'ns_1@10.3.4.148',804},{0,0,tap}},
{{'ns_1@10.3.4.148',805},{0,0,tap}},
{{'ns_1@10.3.4.148',806},{0,0,tap}},
{{'ns_1@10.3.4.148',807},{0,0,tap}},
{{'ns_1@10.3.4.148',808},{0,0,tap}},
{{'ns_1@10.3.4.148',809},{0,0,tap}},
{{'ns_1@10.3.4.148',810},{0,0,tap}},
{{'ns_1@10.3.4.148',811},{0,0,tap}},
{{'ns_1@10.3.4.148',812},{0,0,tap}},
{{'ns_1@10.3.4.148',813},{0,0,tap}},
{{'ns_1@10.3.4.148',814},{0,0,tap}},
{{'ns_1@10.3.4.148',815},{0,0,tap}},
{{'ns_1@10.3.4.148',816},{0,0,tap}},
{{'ns_1@10.3.4.148',817},{0,0,tap}},
{{'ns_1@10.3.4.148',818},{0,0,tap}},
{{'ns_1@10.3.4.148',819},{0,0,tap}},
{{'ns_1@10.3.4.148',888},{0,0,tap}},
{{'ns_1@10.3.4.148',889},{0,0,tap}},
{{'ns_1@10.3.4.148',890},{0,0,tap}},
{{'ns_1@10.3.4.148',891},{0,0,tap}},
{{'ns_1@10.3.4.148',892},{0,0,tap}},
{{'ns_1@10.3.4.148',893},{0,0,tap}},
{{'ns_1@10.3.4.148',894},{0,0,tap}},
{{'ns_1@10.3.4.148',895},{0,0,tap}},
{{'ns_1@10.3.4.148',896},{0,0,tap}},
{{'ns_1@10.3.4.148',897},{0,0,tap}},
{{'ns_1@10.3.4.148',898},{0,0,tap}},
{{'ns_1@10.3.4.148',899},{0,0,tap}},
{{'ns_1@10.3.4.148',900},{0,0,tap}},
{{'ns_1@10.3.4.148',901},{0,0,tap}},
{{'ns_1@10.3.4.148',902},{0,0,tap}},
{{'ns_1@10.3.4.148',903},{0,0,tap}},
{{'ns_1@10.3.4.148',904},{0,0,tap}},
{{'ns_1@10.3.4.148',905},{0,0,tap}},
{{'ns_1@10.3.4.148',906},{0,0,tap}},
{{'ns_1@10.3.4.148',907},{0,0,tap}},
{{'ns_1@10.3.4.148',908},{0,0,tap}},
{{'ns_1@10.3.4.148',909},{0,0,tap}},
{{'ns_1@10.3.4.148',910},{0,0,tap}},
{{'ns_1@10.3.4.148',911},{0,0,tap}},
{{'ns_1@10.3.4.148',912},{0,0,tap}},
{{'ns_1@10.3.4.148',913},{0,0,tap}},
{{'ns_1@10.3.4.148',914},{0,0,tap}},
{{'ns_1@10.3.4.148',915},{0,0,tap}},
{{'ns_1@10.3.4.148',916},{0,0,tap}},
{{'ns_1@10.3.4.148',917},{0,0,tap}},
{{'ns_1@10.3.4.148',918},{0,0,tap}},
{{'ns_1@10.3.4.148',919},{0,0,tap}},
{{'ns_1@10.3.4.148',920},{0,0,tap}},
{{'ns_1@10.3.4.148',921},{0,0,tap}},
{{'ns_1@10.3.4.148',922},{0,0,tap}},
{{'ns_1@10.3.4.148',923},{0,0,tap}},
{{'ns_1@10.3.4.148',924},{0,0,tap}},
{{'ns_1@10.3.4.148',925},{0,0,tap}},
{{'ns_1@10.3.4.148',926},{0,0,tap}},
{{'ns_1@10.3.4.148',927},{0,0,tap}},
{{'ns_1@10.3.4.148',928},{0,0,tap}},
{{'ns_1@10.3.4.148',929},{0,0,tap}},
{{'ns_1@10.3.4.148',930},{0,0,tap}},
{{'ns_1@10.3.4.148',931},{0,0,tap}},
{{'ns_1@10.3.4.148',932},{0,0,tap}},
{{'ns_1@10.3.4.148',933},{0,0,tap}},
{{'ns_1@10.3.4.148',934},{0,0,tap}},
{{'ns_1@10.3.4.148',935},{0,0,tap}},
{{'ns_1@10.3.4.148',936},{0,0,tap}},
{{'ns_1@10.3.4.148',937},{0,0,tap}},
{{'ns_1@10.3.4.148',938},{0,0,tap}},
{{'ns_1@10.3.4.148',939},{0,0,tap}},
{{'ns_1@10.3.4.148',940},{0,0,tap}},
{{'ns_1@10.3.4.148',941},{0,0,tap}},
{{'ns_1@10.3.4.148',942},{0,0,tap}},
{{'ns_1@10.3.4.148',943},{0,0,tap}},
{{'ns_1@10.3.4.148',944},{0,0,tap}},
{{'ns_1@10.3.4.148',945},{0,0,tap}},
{{'ns_1@10.3.4.148',946},{0,0,tap}},
{{'ns_1@10.3.4.148',947},{0,0,tap}},
{{'ns_1@10.3.4.148',948},{0,0,tap}},
{{'ns_1@10.3.4.148',949},{0,0,tap}},
{{'ns_1@10.3.4.148',950},{0,0,tap}},
{{'ns_1@10.3.4.148',951},{0,0,tap}},
{{'ns_1@10.3.4.148',952},{0,0,tap}},
{{'ns_1@10.3.4.148',953},{0,0,tap}},
{{'ns_1@10.3.4.148',954},{0,0,tap}},
{{'ns_1@10.3.4.148',955},{0,0,tap}},
{{'ns_1@10.3.4.148',956},{0,0,tap}},
{{'ns_1@10.3.4.148',957},{0,0,tap}},
{{'ns_1@10.3.4.148',958},{0,0,tap}},
{{'ns_1@10.3.4.148',959},{0,0,tap}},
{{'ns_1@10.3.4.148',960},{0,0,tap}},
{{'ns_1@10.3.4.148',961},{0,0,tap}},
{{'ns_1@10.3.4.148',962},{0,0,tap}},
{{'ns_1@10.3.4.148',963},{0,0,tap}},
{{'ns_1@10.3.4.148',964},{0,0,tap}},
{{'ns_1@10.3.4.148',965},{0,0,tap}},
{{'ns_1@10.3.4.148',966},{0,0,tap}},
{{'ns_1@10.3.4.148',967},{0,0,tap}},
{{'ns_1@10.3.4.148',968},{0,0,tap}},
{{'ns_1@10.3.4.148',969},{0,0,tap}},
{{'ns_1@10.3.4.148',970},{0,0,tap}},
{{'ns_1@10.3.4.148',971},{0,0,tap}},
{{'ns_1@10.3.4.148',972},{0,0,tap}},
{{'ns_1@10.3.4.148',973},{0,0,tap}},
{{'ns_1@10.3.4.148',974},{0,0,tap}},
{{'ns_1@10.3.4.148',975},{0,0,tap}},
{{'ns_1@10.3.4.148',976},{0,0,tap}},
{{'ns_1@10.3.4.148',977},{0,0,tap}},
{{'ns_1@10.3.4.148',978},{0,0,tap}},
{{'ns_1@10.3.4.148',979},{0,0,tap}},
{{'ns_1@10.3.4.148',980},{0,0,tap}},
{{'ns_1@10.3.4.148',981},{0,0,tap}},
{{'ns_1@10.3.4.148',982},{0,0,tap}},
{{'ns_1@10.3.4.148',983},{0,0,tap}},
{{'ns_1@10.3.4.148',984},{0,0,tap}},
{{'ns_1@10.3.4.148',985},{0,0,tap}},
{{'ns_1@10.3.4.148',986},{0,0,tap}},
{{'ns_1@10.3.4.148',987},{0,0,tap}},
{{'ns_1@10.3.4.148',988},{0,0,tap}},
{{'ns_1@10.3.4.148',989},{0,0,tap}},
{{'ns_1@10.3.4.148',990},{0,0,tap}},
{{'ns_1@10.3.4.148',991},{0,0,tap}},
{{'ns_1@10.3.4.148',992},{0,0,tap}},
{{'ns_1@10.3.4.148',993},{0,0,tap}},
{{'ns_1@10.3.4.148',994},{0,0,tap}},
{{'ns_1@10.3.4.148',995},{0,0,tap}},
{{'ns_1@10.3.4.148',996},{0,0,tap}},
{{'ns_1@10.3.4.148',997},{0,0,tap}},
{{'ns_1@10.3.4.148',998},{0,0,tap}},
{{'ns_1@10.3.4.148',999},{0,0,tap}},
{{'ns_1@10.3.4.148',1000},{0,0,tap}},
{{'ns_1@10.3.4.148',1001},{0,0,tap}},
{{'ns_1@10.3.4.148',1002},{0,0,tap}},
{{'ns_1@10.3.4.148',1003},{0,0,tap}},
{{'ns_1@10.3.4.148',1004},{0,0,tap}},
{{'ns_1@10.3.4.148',1005},{0,0,tap}},
{{'ns_1@10.3.4.148',1006},{0,0,tap}},
{{'ns_1@10.3.4.148',1007},{0,0,tap}},
{{'ns_1@10.3.4.148',1008},{0,0,tap}},
{{'ns_1@10.3.4.148',1009},{0,0,tap}},
{{'ns_1@10.3.4.148',1010},{0,0,tap}},
{{'ns_1@10.3.4.148',1011},{0,0,tap}},
{{'ns_1@10.3.4.148',1012},{0,0,tap}},
{{'ns_1@10.3.4.148',1013},{0,0,tap}},
{{'ns_1@10.3.4.148',1014},{0,0,tap}},
{{'ns_1@10.3.4.148',1015},{0,0,tap}},
{{'ns_1@10.3.4.148',1016},{0,0,tap}},
{{'ns_1@10.3.4.148',1017},{0,0,tap}},
{{'ns_1@10.3.4.148',1018},{0,0,tap}},
{{'ns_1@10.3.4.148',1019},{0,0,tap}},
{{'ns_1@10.3.4.148',1020},{0,0,tap}},
{{'ns_1@10.3.4.148',1021},{0,0,tap}},
{{'ns_1@10.3.4.148',1022},{0,0,tap}},
{{'ns_1@10.3.4.148',1023},{0,0,tap}},
{{'ns_1@10.3.4.149',103},{0,0,tap}},
{{'ns_1@10.3.4.149',104},{0,0,tap}},
{{'ns_1@10.3.4.149',105},{0,0,tap}},
{{'ns_1@10.3.4.149',106},{0,0,tap}},
{{'ns_1@10.3.4.149',107},{0,0,tap}},
{{'ns_1@10.3.4.149',108},{0,0,tap}},
{{'ns_1@10.3.4.149',109},{0,0,tap}},
{{'ns_1@10.3.4.149',110},{0,0,tap}},
{{'ns_1@10.3.4.149',111},{0,0,tap}},
{{'ns_1@10.3.4.149',112},{0,0,tap}},
{{'ns_1@10.3.4.149',113},{0,0,tap}},
{{'ns_1@10.3.4.149',114},{0,0,tap}},
{{'ns_1@10.3.4.149',115},{0,0,tap}},
{{'ns_1@10.3.4.149',116},{0,0,tap}},
{{'ns_1@10.3.4.149',117},{0,0,tap}},
{{'ns_1@10.3.4.149',118},{0,0,tap}},
{{'ns_1@10.3.4.149',119},{0,0,tap}},
{{'ns_1@10.3.4.149',120},{0,0,tap}},
{{'ns_1@10.3.4.149',121},{0,0,tap}},
{{'ns_1@10.3.4.149',122},{0,0,tap}},
{{'ns_1@10.3.4.149',123},{0,0,tap}},
{{'ns_1@10.3.4.149',124},{0,0,tap}},
{{'ns_1@10.3.4.149',125},{0,0,tap}},
{{'ns_1@10.3.4.149',126},{0,0,tap}},
{{'ns_1@10.3.4.149',127},{0,0,tap}},
{{'ns_1@10.3.4.149',128},{0,0,tap}},
{{'ns_1@10.3.4.149',129},{0,0,tap}},
{{'ns_1@10.3.4.149',130},{0,0,tap}},
{{'ns_1@10.3.4.149',131},{0,0,tap}},
{{'ns_1@10.3.4.149',132},{0,0,tap}},
{{'ns_1@10.3.4.149',133},{0,0,tap}},
{{'ns_1@10.3.4.149',134},{0,0,tap}},
{{'ns_1@10.3.4.149',135},{0,0,tap}},
{{'ns_1@10.3.4.149',136},{0,0,tap}},
{{'ns_1@10.3.4.149',274},{0,0,tap}},
{{'ns_1@10.3.4.149',275},{0,0,tap}},
{{'ns_1@10.3.4.149',276},{0,0,tap}},
{{'ns_1@10.3.4.149',277},{0,0,tap}},
{{'ns_1@10.3.4.149',278},{0,0,tap}},
{{'ns_1@10.3.4.149',279},{0,0,tap}},
{{'ns_1@10.3.4.149',280},{0,0,tap}},
{{'ns_1@10.3.4.149',281},{0,0,tap}},
{{'ns_1@10.3.4.149',282},{0,0,tap}},
{{'ns_1@10.3.4.149',283},{0,0,tap}},
{{'ns_1@10.3.4.149',284},{0,0,tap}},
{{'ns_1@10.3.4.149',285},{0,0,tap}},
{{'ns_1@10.3.4.149',286},{0,0,tap}},
{{'ns_1@10.3.4.149',287},{0,0,tap}},
{{'ns_1@10.3.4.149',288},{0,0,tap}},
{{'ns_1@10.3.4.149',289},{0,0,tap}},
{{'ns_1@10.3.4.149',290},{0,0,tap}},
{{'ns_1@10.3.4.149',291},{0,0,tap}},
{{'ns_1@10.3.4.149',292},{0,0,tap}},
{{'ns_1@10.3.4.149',293},{0,0,tap}},
{{'ns_1@10.3.4.149',294},{0,0,tap}},
{{'ns_1@10.3.4.149',295},{0,0,tap}},
{{'ns_1@10.3.4.149',296},{0,0,tap}},
{{'ns_1@10.3.4.149',297},{0,0,tap}},
{{'ns_1@10.3.4.149',298},{0,0,tap}},
{{'ns_1@10.3.4.149',299},{0,0,tap}},
{{'ns_1@10.3.4.149',300},{0,0,tap}},
{{'ns_1@10.3.4.149',301},{0,0,tap}},
{{'ns_1@10.3.4.149',302},{0,0,tap}},
{{'ns_1@10.3.4.149',303},{0,0,tap}},
{{'ns_1@10.3.4.149',304},{0,0,tap}},
{{'ns_1@10.3.4.149',305},{0,0,tap}},
{{'ns_1@10.3.4.149',306},{0,0,tap}},
{{'ns_1@10.3.4.149',307},{0,0,tap}},
{{'ns_1@10.3.4.149',308},{0,0,tap}},
{{'ns_1@10.3.4.149',309},{0,0,tap}},
{{'ns_1@10.3.4.149',310},{0,0,tap}},
{{'ns_1@10.3.4.149',311},{0,0,tap}},
{{'ns_1@10.3.4.149',312},{0,0,tap}},
{{'ns_1@10.3.4.149',313},{0,0,tap}},
{{'ns_1@10.3.4.149',314},{0,0,tap}},
{{'ns_1@10.3.4.149',315},{0,0,tap}},
{{'ns_1@10.3.4.149',316},{0,0,tap}},
{{'ns_1@10.3.4.149',317},{0,0,tap}},
{{'ns_1@10.3.4.149',318},{0,0,tap}},
{{'ns_1@10.3.4.149',319},{0,0,tap}},
{{'ns_1@10.3.4.149',320},{0,0,tap}},
{{'ns_1@10.3.4.149',321},{0,0,tap}},
{{'ns_1@10.3.4.149',322},{0,0,tap}},
{{'ns_1@10.3.4.149',323},{0,0,tap}},
{{'ns_1@10.3.4.149',324},{0,0,tap}},
{{'ns_1@10.3.4.149',325},{0,0,tap}},
{{'ns_1@10.3.4.149',326},{0,0,tap}},
{{'ns_1@10.3.4.149',327},{0,0,tap}},
{{'ns_1@10.3.4.149',328},{0,0,tap}},
{{'ns_1@10.3.4.149',329},{0,0,tap}},
{{'ns_1@10.3.4.149',330},{0,0,tap}},
{{'ns_1@10.3.4.149',331},{0,0,tap}},
{{'ns_1@10.3.4.149',332},{0,0,tap}},
{{'ns_1@10.3.4.149',333},{0,0,tap}},
{{'ns_1@10.3.4.149',334},{0,0,tap}},
{{'ns_1@10.3.4.149',335},{0,0,tap}},
{{'ns_1@10.3.4.149',336},{0,0,tap}},
{{'ns_1@10.3.4.149',337},{0,0,tap}},
{{'ns_1@10.3.4.149',338},{0,0,tap}},
{{'ns_1@10.3.4.149',339},{0,0,tap}},
{{'ns_1@10.3.4.149',340},{0,0,tap}},
{{'ns_1@10.3.4.149',341},{0,0,tap}},
{{'ns_1@10.3.4.149',445},{0,0,tap}},
{{'ns_1@10.3.4.149',446},{0,0,tap}},
{{'ns_1@10.3.4.149',447},{0,0,tap}},
{{'ns_1@10.3.4.149',448},{0,0,tap}},
{{'ns_1@10.3.4.149',449},{0,0,tap}},
{{'ns_1@10.3.4.149',450},{0,0,tap}},
{{'ns_1@10.3.4.149',451},{0,0,tap}},
{{'ns_1@10.3.4.149',452},{0,0,tap}},
{{'ns_1@10.3.4.149',453},{0,0,tap}},
{{'ns_1@10.3.4.149',454},{0,0,tap}},
{{'ns_1@10.3.4.149',455},{0,0,tap}},
{{'ns_1@10.3.4.149',456},{0,0,tap}},
{{'ns_1@10.3.4.149',457},{0,0,tap}},
{{'ns_1@10.3.4.149',458},{0,0,tap}},
{{'ns_1@10.3.4.149',459},{0,0,tap}},
{{'ns_1@10.3.4.149',460},{0,0,tap}},
{{'ns_1@10.3.4.149',461},{0,0,tap}},
{{'ns_1@10.3.4.149',462},{0,0,tap}},
{{'ns_1@10.3.4.149',463},{0,0,tap}},
{{'ns_1@10.3.4.149',464},{0,0,tap}},
{{'ns_1@10.3.4.149',465},{0,0,tap}},
{{'ns_1@10.3.4.149',466},{0,0,tap}},
{{'ns_1@10.3.4.149',467},{0,0,tap}},
{{'ns_1@10.3.4.149',468},{0,0,tap}},
{{'ns_1@10.3.4.149',469},{0,0,tap}},
{{'ns_1@10.3.4.149',470},{0,0,tap}},
{{'ns_1@10.3.4.149',471},{0,0,tap}},
{{'ns_1@10.3.4.149',472},{0,0,tap}},
{{'ns_1@10.3.4.149',473},{0,0,tap}},
{{'ns_1@10.3.4.149',474},{0,0,tap}},
{{'ns_1@10.3.4.149',475},{0,0,tap}},
{{'ns_1@10.3.4.149',476},{0,0,tap}},
{{'ns_1@10.3.4.149',477},{0,0,tap}},
{{'ns_1@10.3.4.149',478},{0,0,tap}},
{{'ns_1@10.3.4.149',547},{0,0,tap}},
{{'ns_1@10.3.4.149',548},{0,0,tap}},
{{'ns_1@10.3.4.149',549},{0,0,tap}},
{{'ns_1@10.3.4.149',550},{0,0,tap}},
{{'ns_1@10.3.4.149',551},{0,0,tap}},
{{'ns_1@10.3.4.149',552},{0,0,tap}},
{{'ns_1@10.3.4.149',553},{0,0,tap}},
{{'ns_1@10.3.4.149',554},{0,0,tap}},
{{'ns_1@10.3.4.149',555},{0,0,tap}},
{{'ns_1@10.3.4.149',556},{0,0,tap}},
{{'ns_1@10.3.4.149',557},{0,0,tap}},
{{'ns_1@10.3.4.149',558},{0,0,tap}},
{{'ns_1@10.3.4.149',559},{0,0,tap}},
{{'ns_1@10.3.4.149',560},{0,0,tap}},
{{'ns_1@10.3.4.149',561},{0,0,tap}},
{{'ns_1@10.3.4.149',562},{0,0,tap}},
{{'ns_1@10.3.4.149',563},{0,0,tap}},
{{'ns_1@10.3.4.149',564},{0,0,tap}},
{{'ns_1@10.3.4.149',565},{0,0,tap}},
{{'ns_1@10.3.4.149',566},{0,0,tap}},
{{'ns_1@10.3.4.149',567},{0,0,tap}},
{{'ns_1@10.3.4.149',568},{0,0,tap}},
{{'ns_1@10.3.4.149',569},{0,0,tap}},
{{'ns_1@10.3.4.149',570},{0,0,tap}},
{{'ns_1@10.3.4.149',571},{0,0,tap}},
{{'ns_1@10.3.4.149',572},{0,0,tap}},
{{'ns_1@10.3.4.149',573},{0,0,tap}},
{{'ns_1@10.3.4.149',574},{0,0,tap}},
{{'ns_1@10.3.4.149',575},{0,0,tap}},
{{'ns_1@10.3.4.149',576},{0,0,tap}},
{{'ns_1@10.3.4.149',577},{0,0,tap}},
{{'ns_1@10.3.4.149',578},{0,0,tap}},
{{'ns_1@10.3.4.149',579},{0,0,tap}},
{{'ns_1@10.3.4.149',580},{0,0,tap}},
{{'ns_1@10.3.4.149',581},{0,0,tap}},
{{'ns_1@10.3.4.149',582},{0,0,tap}},
{{'ns_1@10.3.4.149',583},{0,0,tap}},
{{'ns_1@10.3.4.149',584},{0,0,tap}},
{{'ns_1@10.3.4.149',585},{0,0,tap}},
{{'ns_1@10.3.4.149',586},{0,0,tap}},
{{'ns_1@10.3.4.149',587},{0,0,tap}},
{{'ns_1@10.3.4.149',588},{0,0,tap}},
{{'ns_1@10.3.4.149',589},{0,0,tap}},
{{'ns_1@10.3.4.149',590},{0,0,tap}},
{{'ns_1@10.3.4.149',591},{0,0,tap}},
{{'ns_1@10.3.4.149',592},{0,0,tap}},
{{'ns_1@10.3.4.149',593},{0,0,tap}},
{{'ns_1@10.3.4.149',594},{0,0,tap}},
{{'ns_1@10.3.4.149',595},{0,0,tap}},
{{'ns_1@10.3.4.149',596},{0,0,tap}},
{{'ns_1@10.3.4.149',597},{0,0,tap}},
{{'ns_1@10.3.4.149',598},{0,0,tap}},
{{'ns_1@10.3.4.149',599},{0,0,tap}},
{{'ns_1@10.3.4.149',600},{0,0,tap}},
{{'ns_1@10.3.4.149',601},{0,0,tap}},
{{'ns_1@10.3.4.149',602},{0,0,tap}},
{{'ns_1@10.3.4.149',603},{0,0,tap}},
{{'ns_1@10.3.4.149',604},{0,0,tap}},
{{'ns_1@10.3.4.149',605},{0,0,tap}},
{{'ns_1@10.3.4.149',606},{0,0,tap}},
{{'ns_1@10.3.4.149',607},{0,0,tap}},
{{'ns_1@10.3.4.149',608},{0,0,tap}},
{{'ns_1@10.3.4.149',609},{0,0,tap}},
{{'ns_1@10.3.4.149',610},{0,0,tap}},
{{'ns_1@10.3.4.149',611},{0,0,tap}},
{{'ns_1@10.3.4.149',612},{0,0,tap}},
{{'ns_1@10.3.4.149',613},{0,0,tap}},
{{'ns_1@10.3.4.149',614},{0,0,tap}},
{{'ns_1@10.3.4.149',615},{0,0,tap}},
{{'ns_1@10.3.4.149',616},{0,0,tap}},
{{'ns_1@10.3.4.149',617},{0,0,tap}},
{{'ns_1@10.3.4.149',618},{0,0,tap}},
{{'ns_1@10.3.4.149',619},{0,0,tap}},
{{'ns_1@10.3.4.149',620},{0,0,tap}},
{{'ns_1@10.3.4.149',621},{0,0,tap}},
{{'ns_1@10.3.4.149',622},{0,0,tap}},
{{'ns_1@10.3.4.149',623},{0,0,tap}},
{{'ns_1@10.3.4.149',624},{0,0,tap}},
{{'ns_1@10.3.4.149',625},{0,0,tap}},
{{'ns_1@10.3.4.149',626},{0,0,tap}},
{{'ns_1@10.3.4.149',627},{0,0,tap}},
{{'ns_1@10.3.4.149',628},{0,0,tap}},
{{'ns_1@10.3.4.149',629},{0,0,tap}},
{{'ns_1@10.3.4.149',630},{0,0,tap}},
{{'ns_1@10.3.4.149',631},{0,0,tap}},
{{'ns_1@10.3.4.149',632},{0,0,tap}},
{{'ns_1@10.3.4.149',633},{0,0,tap}},
{{'ns_1@10.3.4.149',634},{0,0,tap}},
{{'ns_1@10.3.4.149',635},{0,0,tap}},
{{'ns_1@10.3.4.149',636},{0,0,tap}},
{{'ns_1@10.3.4.149',637},{0,0,tap}},
{{'ns_1@10.3.4.149',638},{0,0,tap}},
{{'ns_1@10.3.4.149',639},{0,0,tap}},
{{'ns_1@10.3.4.149',640},{0,0,tap}},
{{'ns_1@10.3.4.149',641},{0,0,tap}},
{{'ns_1@10.3.4.149',642},{0,0,tap}},
{{'ns_1@10.3.4.149',643},{0,0,tap}},
{{'ns_1@10.3.4.149',644},{0,0,tap}},
{{'ns_1@10.3.4.149',645},{0,0,tap}},
{{'ns_1@10.3.4.149',646},{0,0,tap}},
{{'ns_1@10.3.4.149',647},{0,0,tap}},
{{'ns_1@10.3.4.149',648},{0,0,tap}},
{{'ns_1@10.3.4.149',649},{0,0,tap}},
{{'ns_1@10.3.4.149',650},{0,0,tap}},
{{'ns_1@10.3.4.149',651},{0,0,tap}},
{{'ns_1@10.3.4.149',652},{0,0,tap}},
{{'ns_1@10.3.4.149',653},{0,0,tap}},
{{'ns_1@10.3.4.149',654},{0,0,tap}},
{{'ns_1@10.3.4.149',655},{0,0,tap}},
{{'ns_1@10.3.4.149',656},{0,0,tap}},
{{'ns_1@10.3.4.149',657},{0,0,tap}},
{{'ns_1@10.3.4.149',658},{0,0,tap}},
{{'ns_1@10.3.4.149',659},{0,0,tap}},
{{'ns_1@10.3.4.149',660},{0,0,tap}},
{{'ns_1@10.3.4.149',661},{0,0,tap}},
{{'ns_1@10.3.4.149',662},{0,0,tap}},
{{'ns_1@10.3.4.149',663},{0,0,tap}},
{{'ns_1@10.3.4.149',664},{0,0,tap}},
{{'ns_1@10.3.4.149',665},{0,0,tap}},
{{'ns_1@10.3.4.149',666},{0,0,tap}},
{{'ns_1@10.3.4.149',667},{0,0,tap}},
{{'ns_1@10.3.4.149',668},{0,0,tap}},
{{'ns_1@10.3.4.149',669},{0,0,tap}},
{{'ns_1@10.3.4.149',670},{0,0,tap}},
{{'ns_1@10.3.4.149',671},{0,0,tap}},
{{'ns_1@10.3.4.149',672},{0,0,tap}},
{{'ns_1@10.3.4.149',673},{0,0,tap}},
{{'ns_1@10.3.4.149',674},{0,0,tap}},
{{'ns_1@10.3.4.149',675},{0,0,tap}},
{{'ns_1@10.3.4.149',676},{0,0,tap}},
{{'ns_1@10.3.4.149',677},{0,0,tap}},
{{'ns_1@10.3.4.149',678},{0,0,tap}},
{{'ns_1@10.3.4.149',679},{0,0,tap}},
{{'ns_1@10.3.4.149',680},{0,0,tap}},
{{'ns_1@10.3.4.149',681},{0,0,tap}},
{{'ns_1@10.3.4.149',682},{0,0,tap}},
{{'ns_1@10.3.4.149',820},{0,0,tap}},
{{'ns_1@10.3.4.149',821},{0,0,tap}},
{{'ns_1@10.3.4.149',822},{0,0,tap}},
{{'ns_1@10.3.4.149',823},{0,0,tap}},
{{'ns_1@10.3.4.149',824},{0,0,tap}},
{{'ns_1@10.3.4.149',825},{0,0,tap}},
{{'ns_1@10.3.4.149',826},{0,0,tap}},
{{'ns_1@10.3.4.149',827},{0,0,tap}},
{{'ns_1@10.3.4.149',828},{0,0,tap}},
{{'ns_1@10.3.4.149',829},{0,0,tap}},
{{'ns_1@10.3.4.149',830},{0,0,tap}},
{{'ns_1@10.3.4.149',831},{0,0,tap}},
{{'ns_1@10.3.4.149',832},{0,0,tap}},
{{'ns_1@10.3.4.149',833},{0,0,tap}},
{{'ns_1@10.3.4.149',834},{0,0,tap}},
{{'ns_1@10.3.4.149',835},{0,0,tap}},
{{'ns_1@10.3.4.149',836},{0,0,tap}},
{{'ns_1@10.3.4.149',837},{0,0,tap}},
{{'ns_1@10.3.4.149',838},{0,0,tap}},
{{'ns_1@10.3.4.149',839},{0,0,tap}},
{{'ns_1@10.3.4.149',840},{0,0,tap}},
{{'ns_1@10.3.4.149',841},{0,0,tap}},
{{'ns_1@10.3.4.149',842},{0,0,tap}},
{{'ns_1@10.3.4.149',843},{0,0,tap}},
{{'ns_1@10.3.4.149',844},{0,0,tap}},
{{'ns_1@10.3.4.149',845},{0,0,tap}},
{{'ns_1@10.3.4.149',846},{0,0,tap}},
{{'ns_1@10.3.4.149',847},{0,0,tap}},
{{'ns_1@10.3.4.149',848},{0,0,tap}},
{{'ns_1@10.3.4.149',849},{0,0,tap}},
{{'ns_1@10.3.4.149',850},{0,0,tap}},
{{'ns_1@10.3.4.149',851},{0,0,tap}},
{{'ns_1@10.3.4.149',852},{0,0,tap}},
{{'ns_1@10.3.4.149',853},{0,0,tap}},
{{'ns_1@10.3.4.149',990},{0,0,tap}},
{{'ns_1@10.3.4.149',991},{0,0,tap}},
{{'ns_1@10.3.4.149',992},{0,0,tap}},
{{'ns_1@10.3.4.149',993},{0,0,tap}},
{{'ns_1@10.3.4.149',994},{0,0,tap}},
{{'ns_1@10.3.4.149',995},{0,0,tap}},
{{'ns_1@10.3.4.149',996},{0,0,tap}},
{{'ns_1@10.3.4.149',997},{0,0,tap}},
{{'ns_1@10.3.4.149',998},{0,0,tap}},
{{'ns_1@10.3.4.149',999},{0,0,tap}},
{{'ns_1@10.3.4.149',1000},{0,0,tap}},
{{'ns_1@10.3.4.149',1001},{0,0,tap}},
{{'ns_1@10.3.4.149',1002},{0,0,tap}},
{{'ns_1@10.3.4.149',1003},{0,0,tap}},
{{'ns_1@10.3.4.149',1004},{0,0,tap}},
{{'ns_1@10.3.4.149',1005},{0,0,tap}},
{{'ns_1@10.3.4.149',1006},{0,0,tap}},
{{'ns_1@10.3.4.149',1007},{0,0,tap}},
{{'ns_1@10.3.4.149',1008},{0,0,tap}},
{{'ns_1@10.3.4.149',1009},{0,0,tap}},
{{'ns_1@10.3.4.149',1010},{0,0,tap}},
{{'ns_1@10.3.4.149',1011},{0,0,tap}},
{{'ns_1@10.3.4.149',1012},{0,0,tap}},
{{'ns_1@10.3.4.149',1013},{0,0,tap}},
{{'ns_1@10.3.4.149',1014},{0,0,tap}},
{{'ns_1@10.3.4.149',1015},{0,0,tap}},
{{'ns_1@10.3.4.149',1016},{0,0,tap}},
{{'ns_1@10.3.4.149',1017},{0,0,tap}},
{{'ns_1@10.3.4.149',1018},{0,0,tap}},
{{'ns_1@10.3.4.149',1019},{0,0,tap}},
{{'ns_1@10.3.4.149',1020},{0,0,tap}},
{{'ns_1@10.3.4.149',1021},{0,0,tap}},
{{'ns_1@10.3.4.149',1022},{0,0,tap}},
{{'ns_1@10.3.4.149',1023},{0,0,tap}}]],
[]},
{lists,append,1,
[{file,"lists.erl"},{line,125}]},
{lists,append,1,
[{file,"lists.erl"},{line,125}]},
{ns_rebalance_observer,
initiate_bucket_rebalance,2,
[{file,"src/ns_rebalance_observer.erl"},
{line,209}]},
{ns_rebalance_observer,handle_set_ff_map,2,
[{file,"src/ns_rebalance_observer.erl"},
{line,257}]},
{ns_rebalance_observer,handle_cast,2,
[{file,"src/ns_rebalance_observer.erl"},
{line,105}]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,604}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]}
ns_orchestrator002 ns_1@10.3.4.147 15:51:29 - Mon Jul 7, 2014
Bucket "default" rebalance does not seem to be swap rebalance ns_vbucket_mover000 ns_1@10.3.4.147 15:51:29 - Mon Jul 7, 2014
Bucket "default" loaded on node 'ns_1@10.3.4.149' in 0 seconds. ns_memcached000 ns_1@10.3.4.149 15:51:29 - Mon Jul 7, 2014
Bucket "default" loaded on node 'ns_1@10.3.4.147' in 0 seconds. ns_memcached000 ns_1@10.3.4.147 15:51:28 - Mon Jul 7, 2014
Bucket "default" loaded on node 'ns_1@10.3.4.148' in 0 seconds. ns_memcached000 ns_1@10.3.4.148 15:51:28 - Mon Jul 7, 2014
Started rebalancing bucket default ns_rebalancer000 ns_1@10.3.4.147 15:51:28 - Mon Jul 7, 2014
Starting rebalance, KeepNodes = ['ns_1@10.3.4.144','ns_1@10.3.4.145',
'ns_1@10.3.4.146','ns_1@10.3.4.147',
'ns_1@10.3.4.148','ns_1@10.3.4.149'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes
ns_orchestrator004 ns_1@10.3.4.147 15:51:28 - Mon Jul 7, 2014


3) rebalance out 2 3.0.0-918 nodes: 10.3.4.148, 10.3.4.149

Bucket "default" rebalance does not seem to be swap rebalance (repeated 1 times) ns_vbucket_mover000 ns_1@10.3.4.147 15:53:59 - Mon Jul 7, 2014
Started rebalancing bucket default (repeated 1 times) ns_rebalancer000 ns_1@10.3.4.147 15:53:59 - Mon Jul 7, 2014
Starting rebalance, KeepNodes = ['ns_1@10.3.4.144','ns_1@10.3.4.145',
'ns_1@10.3.4.146','ns_1@10.3.4.147'], EjectNodes = ['ns_1@10.3.4.149',
'ns_1@10.3.4.148'], Failed over and being ejected nodes = []; no delta recovery nodes
(repeated 1 times) ns_orchestrator004 ns_1@10.3.4.147 15:53:59 - Mon Jul 7, 2014
Rebalance exited with reason {badmatch,false}
(repeated 1 times) ns_orchestrator002 ns_1@10.3.4.147 15:53:59 - Mon Jul 7, 2014
<0.8365.4> exited with {badmatch,false} ns_vbucket_mover000 ns_1@10.3.4.147 15:53:35 - Mon Jul 7, 2014
Server error during processing: ["web request failed",
{path,"/pools/default/tasks"},
{type,exit},
{what,
{{badmatch,false},
{gen_server,call,
[{global,ns_rebalance_observer},
get_detailed_progress,10000]}}},
{trace,
[{gen_server,call,3},
{ns_rebalance_observer,
get_detailed_progress,0},
{ns_doctor,get_detailed_progress,0},
{ns_doctor,do_build_tasks_list,4},
{menelaus_web,handle_tasks,2},
{request_throttler,do_request,3},
{menelaus_web,loop,3},
{mochiweb_http,headers,5}]}] menelaus_web019 ns_1@10.3.4.144 15:53:30 - Mon Jul 7, 2014
Rebalance exited with reason {badmatch,false}
ns_orchestrator002 ns_1@10.3.4.147 15:53:30 - Mon Jul 7, 2014
<0.8189.4> exited with {badmatch,false} ns_vbucket_mover000 ns_1@10.3.4.147 15:53:30 - Mon Jul 7, 2014
Bucket "default" rebalance does not seem to be swap rebalance ns_vbucket_mover000 ns_1@10.3.4.147 15:53:30 - Mon Jul 7, 2014
Started rebalancing bucket default ns_rebalancer000 ns_1@10.3.4.147 15:53:29 - Mon Jul 7, 2014
Starting rebalance, KeepNodes = ['ns_1@10.3.4.144','ns_1@10.3.4.145',
'ns_1@10.3.4.146','ns_1@10.3.4.147'], EjectNodes = ['ns_1@10.3.4.149',
'ns_1@10.3.4.148'], Failed over and being ejected nodes = []; no delta recovery nodes
ns_orchestrator004 ns_1@10.3.4.147 15:53:29 - Mon Jul 7, 2014


cluster http://10.3.4.144:8091/index.html will be available for a few hours

 Comments   
Comment by Andrei Baranouski [ 07/Jul/14 ]
https://s3.amazonaws.com/bugdb/jira/MB-11663/70b725d2/10.3.4.144-772014-1559-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11663/70b725d2/10.3.4.145-772014-162-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11663/70b725d2/10.3.4.146-772014-160-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11663/70b725d2/10.3.4.147-772014-165-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11663/70b725d2/10.3.4.148-772014-163-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11663/70b725d2/10.3.4.149-772014-167-diag.zip
Comment by Aleksey Kondratenko [ 08/Jul/14 ]
MB-11647




[MB-11662] [System test] Indexing gets stuck, post swap-rebalance Created: 07/Jul/14  Updated: 08/Jul/14  Resolved: 08/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: view-engine
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Ketaki Gangal Assignee: Nimish Gupta
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Build : 3.0.0-918-rel ( beta candidate)

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Unknown

 Description   
1. Set up a 7-node cluster, 2 buckets, 1 ddoc X 2 views per bucket
2. Load items
3. Swap rebalance out the orchestrator

Indexing on ddoc2 is stuck; the couchdb logs show the error below -

--
[couchdb:error,2014-07-07T16:02:13.025,ns_1@10.6.2.166:<0.2445.41>:couch_log:error:42]Set view `saslbucket`, replica group `_design/ddoc2`, doc loader error
error: {badmatch,undefined}
stacktrace: [{couch_upr_client,enum_docs_since,8,
                 [{file,
                      "/buildbot/build_slave/centos-5-x64-300-builder/build/build/couchdb/src/couch_upr/src/couch_upr_client.erl"},
                  {line,175}]},
             {couch_set_view_updater,'-load_changes/8-fun-2-',12,
                 [{file,
                      "/buildbot/build_slave/centos-5-x64-300-builder/build/build/couchdb/src/couch_set_view/src/couch_set_view_updater.erl"},
                  {line,529}]},
             {lists,foldl,3,[{file,"lists.erl"},{line,1248}]},
             {couch_set_view_updater,load_changes,8,
--


 Comments   
Comment by Ketaki Gangal [ 07/Jul/14 ]
Link to logs https://s3.amazonaws.com/bugdb/11662/11662.tar
Comment by Sarath Lakshman [ 08/Jul/14 ]
Probably http://review.couchbase.org/#/c/39203/ should fix this. I just had a glance, and it looks like this is the case handled by that fix.
Nimish, please verify if this is the case.
Comment by Nimish Gupta [ 08/Jul/14 ]
Thanks Sarath. I checked the logs; a partition was missed in between and the delta log was applied, so your code changes should ideally fix this issue.




[MB-11661] mem_used is increasing and dropping in basic setup with 5 buckets Created: 07/Jul/14  Updated: 12/Jul/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Pavel Paulau Assignee: Pavel Paulau
Resolution: Unresolved Votes: 0
Labels: performance, releasenote
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Build 3.0.0-928

Platform = Physical
OS = CentOS 6.5
CPU = Intel Xeon E5-2680 v2 (40 vCPU)
Memory = 256 GB
Disk = RAID 10 SSD

Attachments: PNG File bucket-1-mem_used-cluster-wide.png     PNG File bucket-2-mem_used.png     PNG File bucket-4-mem_used.png     PNG File memcached_rss-172.23.100.17.png     PNG File memcached_rss-172.23.100.18.png     PNG File mem_used_2.5.1_vs_3.0.0.png    
Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: http://ci.sc.couchbase.com/job/perf-dev/479/artifact/
Is this a Regression?: Yes

 Description   
2 nodes, 5 buckets, 200K x 1KB docs per bucket, 2K updates per bucket.

You can see that mem_used for bucket-1 increased from ~600MB to ~1250MB after 5 hours.

It doesn't look like a fragmentation issue; at least the allocator stats don't indicate that:

MALLOC: 1575414200 ( 1502.4 MiB) Bytes in use by application
MALLOC: + 24248320 ( 23.1 MiB) Bytes in page heap freelist
MALLOC: + 77763952 ( 74.2 MiB) Bytes in central cache freelist
MALLOC: + 3931648 ( 3.7 MiB) Bytes in transfer cache freelist
MALLOC: + 27337432 ( 26.1 MiB) Bytes in thread cache freelists
MALLOC: + 7663776 ( 7.3 MiB) Bytes in malloc metadata
MALLOC: ------------
MALLOC: = 1716359328 ( 1636.8 MiB) Actual memory used (physical + swap)
MALLOC: + 1581056 ( 1.5 MiB) Bytes released to OS (aka unmapped)
MALLOC: ------------
MALLOC: = 1717940384 ( 1638.4 MiB) Virtual address space used
MALLOC:
MALLOC: 94773 Spans in use
MALLOC: 36 Thread heaps in use
MALLOC: 8192 Tcmalloc page size

Please notice that actual RAM usage (RSS) is pretty stable.

Another issue is the dropping mem_used for bucket-2 and bucket-4, along with these errors:

Mon Jul 7 10:24:59.559952 PDT 3: (bucket-2) Total memory in memoryDeallocated() >= GIGANTOR !!! Disable the memory tracker...
Mon Jul 7 10:54:58.109779 PDT 3: (bucket-4) Total memory in memoryDeallocated() >= GIGANTOR !!! Disable the memory tracker...
Mon Jul 7 10:54:58.109779 PDT 3: (bucket-4) Total memory in memoryDeallocated() >= GIGANTOR !!! Disable the memory tracker...


 Comments   
Comment by Matt Ingenthron [ 07/Jul/14 ]
Any time you see GIGANTOR, that indicates a stats underflow. That was added back in the 1.7 days to try to catch these kinds of underflow allocation problems early.
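For illustration, here is a minimal C sketch (assumed names and made-up numbers, not the actual memcached/ep-engine tracker code) of the kind of stats underflow described above: freeing more bytes than the tracker ever recorded wraps the unsigned counter around to an enormous value, which a GIGANTOR-style sanity threshold then catches and reacts to by disabling the tracker.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical "absurdly large" threshold; the real constant lives in ep-engine. */
#define GIGANTOR (SIZE_MAX / 4)

int main(void) {
    size_t mem_used = 100;      /* bytes the tracker thinks are allocated */
    size_t deallocated = 150;   /* more bytes freed than were ever recorded */

    mem_used -= deallocated;    /* unsigned wrap-around: a gigantic number */
    printf("mem_used after underflow: %zu\n", mem_used);

    if (mem_used >= GIGANTOR) {
        printf("Total memory >= GIGANTOR !!! Disable the memory tracker...\n");
    }
    return 0;
}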
Comment by Pavel Paulau [ 07/Jul/14 ]
Just a comparison with 2.5.1.
Comment by Chiyoung Seo [ 12/Jul/14 ]
Pavel,

It seems to me that the root cause of this issue is the same as MB-11553

I made some fixes for MB-11553 and this ticket:

http://review.couchbase.org/#/c/39333/
http://review.couchbase.org/#/c/39334/
http://review.couchbase.org/#/c/39335/
http://review.couchbase.org/#/c/39339/

These are all fixes to the UPR-related regressions.

I will create a toy build so that you can run the same test again.
Comment by Pavel Paulau [ 12/Jul/14 ]
Toy build based on:

http://review.couchbase.org/#/c/39333/
http://review.couchbase.org/#/c/39334/
http://review.couchbase.org/#/c/39335/
http://review.couchbase.org/#/c/39339/

couchbase-server-community_cent58-3.0.0-toy-couchstore-x86_64_3.0.0-776-toy.rpm




[MB-11660] Regression: Old version of document returned after compaction Created: 07/Jul/14  Updated: 08/Jul/14  Resolved: 08/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: forestdb
Affects Version/s: 2.5.1
Fix Version/s: bug-backlog
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Jens Alfke Assignee: Jung-Sang Ahn
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
The Couchbase Lite unit tests are failing with the latest commit of ForestDB (40a2305) because an fdb_get call is returning the wrong data. This does not happen with the previous commit (706e66f).

Steps:
1. Save the same document three times, each time with different data, and each time in a separate transaction (fdb_begin_transaction, fdb_put, fdb_end_transaction)
2. Compact the database manually (calling fdb_compact)
3. Get the document

Result:
ForestDB returns the second version of the document, not the newest (third).

 Comments   
Comment by Jens Alfke [ 07/Jul/14 ]
If I skip the fdb_compact call, the problem doesn't occur.
Comment by Chiyoung Seo [ 07/Jul/14 ]
Jung-Sang,

Can you please take a look at this issue today? Both Sundar and I are still busy with addressing 3.0 blockers.

Thanks,
Comment by Jung-Sang Ahn [ 07/Jul/14 ]
http://review.couchbase.org/#/c/39199/




[MB-11659] {UPR}:: After Rebalance-in-out stop, hit the issue Memcached error #1 'Not found': Not found for vbucket Created: 07/Jul/14  Updated: 07/Jul/14  Resolved: 07/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Parag Agarwal Assignee: Andrei Baranouski
Resolution: Incomplete Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 10.6.2.144-150

Triage: Untriaged
Operating System: Centos 64-bit
Is this a Regression?: Yes

 Description   
3.0.0-918, centos 6x, vbuckets = 1024

1. Create 4 node cluster (10.6.2.144-147)
2. Add default bucket
3. Add 100K items to bucket
4. Swap 2 nodes in the cluster (10.6.2.146, 10.6.2.147 out, and 10.6.2.148, 10.6.2.149 in)
5. During rebalance stop it
6. Verify the items present in the cluster

Step 6 hit the exception

MemcachedError: Memcached error #1 'Not found': Not found for vbucket :965 to mc 10.6.2.147:11210


Test case:: /testrunner -i ../palm.ini get-cbcollect-info=False,get-logs=False,stop-on-failure=False,get-coredumps=True,force_kill_memached=False,verify_unacked_bytes=True -t rebalance.rebalanceinout.RebalanceInOutTests.start_stop_rebalance_in_out,nodes_init=4,nodes_in=2,nodes_out=2,num_replicas=1,items=100000,skip_cleanup=True,GROUP=IN_OUT;BASIC;P0

FYI: The test case passes for vbuckets = 128, and sometimes passes for 1024 as well, but it hits this failure quite often.


 Comments   
Comment by Parag Agarwal [ 07/Jul/14 ]
https://s3.amazonaws.com/bugdb/jira/MB-11659/fail_vbucket_not_found.tar.gz
Comment by Aleksey Kondratenko [ 07/Jul/14 ]
not a bug
Comment by Parag Agarwal [ 07/Jul/14 ]
why
Comment by Aleksey Kondratenko [ 07/Jul/14 ]
I'll leave it as exercise for the reader
Comment by Parag Agarwal [ 07/Jul/14 ]
Andrei fixed the test case: http://review.couchbase.org/#/c/39184/

However, we still fail with a timeout when reading stats for the total items expected on the present nodes.

Traceback (most recent call last):
  File "pytests/rebalance/rebalanceinout.py", line 207, in start_stop_rebalance_in_out
    self.verify_cluster_stats(result_nodes)
  File "pytests/basetestcase.py", line 627, in verify_cluster_stats
    self._verify_stats_all_buckets(servers, timeout=(timeout or 120))
  File "pytests/basetestcase.py", line 411, in _verify_stats_all_buckets
    raise Exception("unable to get expected stats during {0} sec".format(timeout))
Exception: unable to get expe

The bug is assigned to Andrei




[MB-11658] ns_server does not issue the drop_deletes flag for metadata purge of deleted items Created: 07/Jul/14  Updated: 09/Jul/14  Resolved: 09/Jul/14

Status: Closed
Project: Couchbase Server
Component/s: couchbase-bucket, ns_server, storage-engine
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Critical
Reporter: Venu Uppalapati Assignee: Venu Uppalapati
Resolution: Fixed Votes: 0
Labels: releasenote
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Ubuntu 64 bit

Attachments: Zip Archive cbcollect_mpi.zip    
Triage: Triaged
Is this a Regression?: Unknown

 Description   
Steps to reproduce:
0) Create a default bucket and set the Metadata Purge Interval from the UI to 0.04 (= 1 hour). No views are created.
1) Insert a document with key "fooaaa"; it is inserted in vbucket 14.
2) Delete the document.
3) Using the ./couch_dbdump tool, get the document metadata:
./couch_dbdump /opt/couchbase/var/lib/couchbase/data/default/14.couch.1
cas: 18425762490303526, expiry: 1404756141, flags: 0
    doc deleted
4) Wait for 1 hour and generate a workload (with different keys) such that auto-compaction is triggered.
5) Using the ./couch_dbdump tool, it can be seen that the metadata is still present in the newly compacted file, i.e. the tombstone was not dropped:
./couch_dbdump /opt/couchbase/var/lib/couchbase/data/default/14.couch.2
cas: 18425762490303526, expiry: 1404756141, flags: 0
    doc deleted

 Comments   
Comment by Artem Stemkovski [ 07/Jul/14 ]
The problem is that the new implementation of compaction does not interpret the drop_deletes flag the same way as the old compactor did.

The logic of the old compactor was (sketched below):
if the drop_deletes flag is true => delete all tombstones
if false => use ts and seqno to figure out which tombstones we should retain

The new compactor won't delete any tombstones if drop_deletes is false.

Rerouting the bug to the ep-engine team.
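A minimal C sketch of the old compactor's rule as described above; the names, signature, and exact comparison details are assumptions of this sketch, not the actual couchdb/ep-engine code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch of the old behaviour: drop_deletes == true purges every tombstone,
 * drop_deletes == false falls back to the purge timestamp and safe seqno. */
static bool should_purge_tombstone(bool drop_deletes,
                                   uint64_t purge_before_ts,
                                   uint64_t purge_before_seqno,
                                   uint64_t delete_time,
                                   uint64_t seqno) {
    if (drop_deletes) {
        return true;  /* drop all tombstones */
    }
    /* retain the tombstone unless it is both older than the purge timestamp
     * and below the safe sequence number (comparisons are assumptions) */
    return delete_time < purge_before_ts && seqno < purge_before_seqno;
}

int main(void) {
    /* a tombstone deleted at time 100 with seqno 7 */
    printf("%d\n", should_purge_tombstone(true, 0, 0, 100, 7));     /* 1: dropped */
    printf("%d\n", should_purge_tombstone(false, 50, 10, 100, 7));  /* 0: retained */
    printf("%d\n", should_purge_tombstone(false, 200, 10, 100, 7)); /* 1: dropped */
    return 0;
}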
Comment by Venu Uppalapati [ 07/Jul/14 ]
The design doc already specified that the drop_deletes flag would be treated as a boolean: "A boolean (0 or 1) variable to indicate if deleted items are to be dropped"
https://docs.google.com/document/d/1sWt27YFBC3Jbqg03_1fSjSpOl_r7WsmRtlqyC81w0QY/
Comment by Artem Stemkovski [ 07/Jul/14 ]
That means we unfortunately overlooked this item when we reviewed the design doc. The intention was to make it work the same way as the couchdb compactor did. With the new design, the drop_deletes=false option is useless, and ns_server will never set this flag to false.

So I still suggest changing the design doc and the ep-engine code so that it matches the old compactor's behavior.
Comment by Sundar Sridharan [ 07/Jul/14 ]
Thanks for the clarification Artem, fix uploaded for review at http://review.couchbase.org/39183
Comment by Venu Uppalapati [ 07/Jul/14 ]
Hi Artem, could you please provide more clarity on "ns_server will never set this flag to false"? In my understanding, ns_server needs to set this to false when we want to preserve any tombstone(s) in the newly compacted file.
Comment by Venu Uppalapati [ 07/Jul/14 ]
I think I got what you meant by the comment.
Comment by Artem Stemkovski [ 07/Jul/14 ]
We currently don't have an option in ns_server such as "preserve any tombstone". The only thing you can do is specify a large purge interval.
Comment by Sundar Sridharan [ 07/Jul/14 ]
Venu, could you please verify if the behavior is back to the way Artem mentions now that the fix has been merged? thanks
Comment by Venu Uppalapati [ 07/Jul/14 ]
Hi Artem, my understanding from speaking to Alk last week was that ns_server tweaks both the purge interval (purgebeforeTS) that you mention above and purgebeforeSeqNo (SafeSeqnos) to determine which tombstones are purged. Am I missing something?
Comment by Venu Uppalapati [ 07/Jul/14 ]
verified the fix with build 932.
1)Issue compaction with drop deletes = false && purgebeforeTS<exptime(document) && purgebeforeSeqno = 0
  result:tombstone dropped
2) Issue compaction with drop deletes = true && purgebeforeTS>exptime(document) && purgebeforeSeqno < seqno(document)
  result:tombstone dropped
Comment by Venu Uppalapati [ 08/Jul/14 ]
release noting for Beta 1.
Comment by Chiyoung Seo [ 08/Jul/14 ]
Venu,

Please close this ticket after tagging it with release note.




[MB-11657] Document new log file reports.log Created: 07/Jul/14  Updated: 07/Jul/14

Status: Open
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Task Priority: Major
Reporter: Anil Kumar Assignee: Ruth Harris
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
We added a new log file "reports.log" in 3.0. This log file will only contain "progress" reports and "crash" reports for Erlang processes.




[MB-11656] Make sure that n1ql services will be exposed over server config Created: 07/Jul/14  Updated: 08/Jul/14

Status: Open
Project: Couchbase Server
Component/s: query
Affects Version/s: 3.0
Fix Version/s: cbq-DP4
Security Level: Public

Type: Task Priority: Major
Reporter: Michael Nitschinger Assignee: Ilam Siva
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
Hey Ilam,

this ticket is just here to ensure, or otherwise clarify, how the N1QL nodes will expose their location. Hopefully this will be, as expected, through the ns_server config with its newly added service-by-node locations.




Move operations stats out of memcached and into the engines (MB-7761)

[MB-11655] Stats: Getandlock doesn't show up in any operations Created: 07/Jul/14  Updated: 07/Jul/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket, UI
Affects Version/s: 2.0
Fix Version/s: 3.0.1
Security Level: Public

Type: Technical task Priority: Minor
Reporter: Perry Krug Assignee: Mike Wiederhold
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
This might need some support from the ep-engine side as well, but when generating get-and-lock operations, they do not register in the total ops/sec or in the gets/sec.




[MB-11653] "couchbase-cli cluster-init" reinitializes cluster though output shows only "ERROR: option cluster-init-ramsize is not specified" Created: 07/Jul/14  Updated: 07/Jul/14

Status: Open
Project: Couchbase Server
Component/s: tools
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Andrei Baranouski Assignee: Bin Cui
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-918

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
probably we already had a similar ticket....


root@localhost bin]# /opt/couchbase/bin/couchbase-cli cluster-init --cluster=localhost -u Administrator -p password --cluster-init-username=Administrator --cluster-init-password=password --cluster-init-port=8091
ERROR: option cluster-init-ramsize is not specified

The node has been initialized with cluster-init-port=8091 despite the error.



[root@localhost bin]# /opt/couchbase/bin/couchbase-cli cluster-init --cluster=localhost -u Administrator -p password --cluster-init-username=Administrator --cluster-init-password=password --cluster-init-port=8092
ERROR: unable to init/modify localhost (400) Bad Request
[u'Port is already in use']

it's expected behaviour


[root@localhost bin]# /opt/couchbase/bin/couchbase-cli cluster-init --cluster=localhost -u Administrator -p password --cluster-init-username=Administrator --cluster-init-password=password --cluster-init-port=8093
ERROR: option cluster-init-ramsize is not specified

So the cluster is now available on port 8093:
[root@localhost bin]# curl http://10.3.4.144:8093/index.html
<!DOCTYPE html>


So the main problem is that we don't get any message that the cluster was reinitialized; the only output is an error message indicating that the command was not successful.





[MB-11652] "Item Count" jumps during rebalance without any data load Created: 07/Jul/14  Updated: 07/Jul/14  Resolved: 07/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: UI
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Andrei Baranouski Assignee: Aleksey Kondratenko
Resolution: Won't Fix Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-918

Attachments: PNG File Screen Shot 2014-07-07 at 2.11.57 PM.png     PNG File Screen Shot 2014-07-07 at 2.12.11 PM.png     PNG File Screen Shot 2014-07-07 at 2.12.19 PM.png     PNG File Screen Shot 2014-07-07 at 2.12.32 PM.png     PNG File Screen Shot 2014-07-07 at 2.12.52 PM.png     PNG File Screen Shot 2014-07-07 at 2.13.03 PM.png    
Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Steps:
1. Load beer-sample (it contains 7303 items)
2. Rebalance in 2 nodes

Results: on the UI I see that the "Item Count" constantly varies slightly

 Comments   
Comment by Aleksey Kondratenko [ 07/Jul/14 ]
My understanding is that you'll be able to see it with 2.x and 1.8.x as well.

This is because the item count on the UI is an approximation, aggregated from the counts reported by all nodes, and the nodes send their counts at possibly different times, causing skew, particularly during rebalance.
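As a toy illustration of that skew (made-up numbers, not Couchbase code): summing per-node counts that were sampled at slightly different instants can briefly count a vbucket that is in flight on both the source and the destination node, so the aggregate wobbles even though no data is being loaded.

#include <stdio.h>

int main(void) {
    /* per-node item counts as each node reported them */
    long before_move[3] = {2434, 2434, 2435};  /* sums to 7303 */
    long during_move[3] = {2434, 2534, 2435};  /* a moving vbucket counted twice */
    long sum_before = 0, sum_during = 0;

    for (int i = 0; i < 3; i++) {
        sum_before += before_move[i];
        sum_during += during_move[i];
    }
    printf("aggregated count before move: %ld\n", sum_before);  /* 7303 */
    printf("aggregated count during move: %ld\n", sum_during);  /* 7403 */
    return 0;
}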




[MB-11651] able to flush non-SASL bucket via REST without authentication, couchbase-cli doesn't allow it Created: 07/Jul/14  Updated: 07/Jul/14  Resolved: 07/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: ns_server, tools
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Andrei Baranouski Assignee: Steve Yen
Resolution: Won't Fix Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-918

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
Not sure that we should allow flushing non-SASL buckets without authentication. Alk, is this behaviour correct?
If so, then couchbase-cli should also allow it.

[root@cen-0422 testrunner]# curl -X POST -I http://10.3.4.144:8091/pools/default/buckets/default/controller/doFlush
HTTP/1.1 200 OK
Server: Couchbase Server
Pragma: no-cache
Date: Mon, 07 Jul 2014 10:46:28 GMT
Content-Length: 0
Cache-Control: no-cache


./couchbase-cli bucket-flush -c localhost:8091 --bucket=default
ERROR: Username cannot be empty. Please use -u or set environment variable CB_REST_USERNAME.

 Comments   
Comment by Aleksey Kondratenko [ 07/Jul/14 ]
Works as designed. As part of 2.0.1, I think, or 1.8.1 or thereabouts, we were explicitly asked to allow flush with bucket credentials.
Comment by Andrei Baranouski [ 07/Jul/14 ]
how about differences with couchbase-cli?
Comment by Aleksey Kondratenko [ 07/Jul/14 ]
Bother Bin about that.
Comment by Steve Yen [ 07/Jul/14 ]
Didn't know that no-auth flush for non-SASL buckets was allowed.

Inadvertent flush has bitten users in the past, so given there's a REST/curl-based workaround, I don't mind the couchbase-cli making the user have to work & think about it slightly more before flushing all their bucket's data.




[MB-11650] Standard Online Upgrade can use Swap method after initial rebalance Created: 07/Jul/14  Updated: 08/Jul/14

Status: Open
Project: Couchbase Server
Component/s: documentation
Affects Version/s: 2.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.5.0, 2.5.1
Fix Version/s: feature-backlog
Security Level: Public

Type: Improvement Priority: Minor
Reporter: Mark Woosey Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: documentation
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   
Currently the online documentation for upgrading (http://docs.couchbase.com/couchbase-manual-2.5/cb-install/#upgrading) specifies that for an online standard upgrade, a cluster must be rebalanced when a node is removed, then again when that same node is added back, before rebalancing a further time when removing the next node.

By combining the swap upgrade method, only n+1 rebalances would be needed, versus 2n for the standard method (1 rebalance as each node is removed, then another as it is added); for example, a 4-node cluster would need 5 rebalances instead of 8. By treating the upgrade as a swap upgrade after removing the first node, each upgraded node can swap-rebalance with an outdated node, behaving as a pseudo-additional node for a cluster of n-1 nodes, with the swap method applied.

Upgrading each node from 2 through n-1 behaves the same as a swap rebalance, with node 1 being a removal rebalance (making the cluster behave as if it regularly has n-1 nodes), and node n being an addition rebalance (reverting the cluster to its n-node organisation).



 Comments   
Comment by Mark Woosey [ 07/Jul/14 ]
I had put a side note on the support documentation as follows:

• Note that the 'online standard' upgrade can be combined with the swap method after the initial rebalance (using the swap process on the balanced n-1-node cluster).
• That is, step 9 of online standard rebalancing is better replaced with steps from swap rebalancing.

Perhaps a similar note would be the least confusing way to explain this (or at least to point out that such a method is possible).




[MB-11649] cbcollect_info: stats klog are no longer relevant Created: 07/Jul/14  Updated: 07/Jul/14  Resolved: 07/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Andrei Baranouski Assignee: Aleksey Kondratenko
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-918

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
From stats.log:

==============================================================================
memcached stats klog
['cbstats', '-a', '127.0.0.1:11210', 'klog', '-b', '_admin', '-p', '6bb2984fe2e2bd0d7cc483a20d86654c']
==============================================================================
Usage: cbstats host:port all
  or cbstats host:port allocator
  or cbstats host:port checkpoint [vbid]
  or cbstats host:port config
  or cbstats host:port dcp
  or cbstats host:port dcp-takeover vb name
  or cbstats host:port dcpagg
  or cbstats host:port diskinfo [detail]
  or cbstats host:port dispatcher [logs]
  or cbstats host:port failovers [vbid]
  or cbstats host:port hash [detail]
  or cbstats host:port items (memcached bucket only)
  or cbstats host:port key keyname vbid
  or cbstats host:port kvstore
  or cbstats host:port kvtimings
  or cbstats host:port memory
  or cbstats host:port prev-vbucket
  or cbstats host:port raw argument
  or cbstats host:port reset
  or cbstats host:port slabs (memcached bucket only)
  or cbstats host:port tap
  or cbstats host:port tap-takeover vb name
  or cbstats host:port tapagg
  or cbstats host:port timings
  or cbstats host:port upr
  or cbstats host:port upr-takeover vb name
  or cbstats host:port upragg
  or cbstats host:port uuid
  or cbstats host:port vbucket
  or cbstats host:port vbucket-details [vbid]
  or cbstats host:port vbucket-seqno [vbid]
  or cbstats host:port vkey keyname vbid
  or cbstats host:port warmup

usage: cbstats [options]

options:
  -h, --help show this help message and exit
  -a iterate over all buckets (requires admin u/p)
  -b BUCKETNAME the bucket to get stats from (Default: default)
  -p PASSWORD the password for the bucket if one exists
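
A hypothetical reproduction sketch in Python (assumes a node with /opt/couchbase/bin/cbstats; the bucket password from stats.log is replaced with a placeholder) showing that the 'klog' stat group only produces the usage text, so the collected section carries no data:

# Run the same command cbcollect_info issued and check what comes back.
import subprocess

cmd = ["/opt/couchbase/bin/cbstats", "-a", "127.0.0.1:11210", "klog",
       "-b", "_admin", "-p", "<bucket-password>"]  # password placeholder

result = subprocess.run(cmd, capture_output=True, text=True)
output = result.stdout + result.stderr

if "Usage: cbstats" in output:
    print("'klog' is no longer a valid stat group; cbcollect_info should drop it")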

 Comments   
Comment by Aleksey Kondratenko [ 07/Jul/14 ]
http://review.couchbase.org/39167




[MB-11648] [Ubuntu] Online upgrade failed 2.5.0-1059 -> 3.0.0-918 due to rebalance failed {badmatch,wrong_rebalancer_pid} Created: 07/Jul/14  Updated: 07/Jul/14  Resolved: 07/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Sangharsh Agarwal Assignee: Aleksey Kondratenko
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Online upgrade 2.5.0-1059 -> 3.0.0-918

Ubuntu 12.04

Triage: Untriaged
Operating System: Ubuntu 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: 10.3.3.199 : https://s3.amazonaws.com/bugdb/jira/MB-11648/15494e12/10.3.3.199-732014-613-diag.zip
10.3.3.199 : https://s3.amazonaws.com/bugdb/jira/MB-11648/2b9e2786/10.3.3.199-diag.txt.gz
10.3.3.218 : https://s3.amazonaws.com/bugdb/jira/MB-11648/3a84a83d/10.3.3.218-diag.txt.gz
10.3.3.218 : https://s3.amazonaws.com/bugdb/jira/MB-11648/876f9c53/10.3.3.218-732014-69-diag.zip
10.3.3.225 : https://s3.amazonaws.com/bugdb/jira/MB-11648/7b56d5c1/10.3.3.225-732014-611-diag.zip
10.3.3.225 : https://s3.amazonaws.com/bugdb/jira/MB-11648/958f7559/10.3.3.225-diag.txt.gz
10.3.3.239 : https://s3.amazonaws.com/bugdb/jira/MB-11648/1672b214/10.3.3.239-diag.txt.gz
10.3.3.239 : https://s3.amazonaws.com/bugdb/jira/MB-11648/e7df5f00/10.3.3.239-732014-610-diag.zip
10.3.3.240 : https://s3.amazonaws.com/bugdb/jira/MB-11648/386f4960/10.3.3.240-diag.txt.gz
10.3.3.240 : https://s3.amazonaws.com/bugdb/jira/MB-11648/9490af1b/10.3.3.240-732014-67-diag.zip
Is this a Regression?: Yes

 Description   
[Test]
http://qa.hq.northscale.net/job/ubuntu_x64--36_01--XDCR_upgrade-P1/18/consoleFull

[Test Error]
[2014-07-03 06:03:43,427] - [rest_client:1399] INFO - Node versions in cluster [u'2.5.0-1059-rel-enterprise', u'2.5.0-1059-rel-enterprise']
[2014-07-03 06:03:43,442] - [rest_client:1399] INFO - Node versions in cluster [u'3.0.0-918-rel-enterprise']
[2014-07-03 06:03:44,098] - [task:284] INFO - adding node 10.3.3.199:8091 to cluster
[2014-07-03 06:03:44,098] - [rest_client:933] INFO - adding remote node @10.3.3.199:8091 to this cluster @10.3.3.240:8091
[2014-07-03 06:03:46,289] - [rest_client:1087] INFO - rebalance params : password=password&ejectedNodes=&user=Administrator&knownNodes=ns_1%4010.3.3.199%2Cns_1%4010.3.3.240%2Cns_1%4010.3.3.218
[2014-07-03 06:03:55,698] - [rest_client:1091] INFO - rebalance operation started
[2014-07-03 06:03:55,758] - [rest_client:1208] INFO - rebalance percentage : 0 %
[2014-07-03 06:04:05,775] - [rest_client:1192] ERROR - {u'status': u'none', u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try rebalance again.'} - rebalance failed
[2014-07-03 06:04:05,845] - [rest_client:1999] INFO - Latest logs from UI on 10.3.3.240:
[2014-07-03 06:04:05,846] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.3.199', u'code': 2, u'text': u'Rebalance exited with reason {badmatch,wrong_rebalancer_pid}\n', u'shortText': u'message', u'serverTime': u'2014-07-03T06:03:55.085Z', u'module': u'ns_orchestrator', u'tstamp': 1404392635085, u'type': u'info'}
[2014-07-03 06:04:05,847] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.3.199', u'code': 0, u'text': u'<0.1307.0> exited with {badmatch,wrong_rebalancer_pid}', u'shortText': u'message', u'serverTime': u'2014-07-03T06:03:55.084Z', u'module': u'ns_vbucket_mover', u'tstamp': 1404392635084, u'type': u'critical'}
[2014-07-03 06:04:05,848] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.3.199', u'code': 0, u'text': u'Bucket "default" rebalance does not seem to be swap rebalance', u'shortText': u'message', u'serverTime': u'2014-07-03T06:03:53.474Z', u'module': u'ns_vbucket_mover', u'tstamp': 1404392633474, u'type': u'info'}
[2014-07-03 06:04:05,849] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.3.199', u'code': 0, u'text': u'Bucket "default" loaded on node \'ns_1@10.3.3.199\' in 0 seconds.', u'shortText': u'message', u'serverTime': u'2014-07-03T06:03:52.766Z', u'module': u'ns_memcached', u'tstamp': 1404392632766, u'type': u'info'}
[2014-07-03 06:04:05,849] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.3.199', u'code': 0, u'text': u'Started rebalancing bucket default', u'shortText': u'message', u'serverTime': u'2014-07-03T06:03:51.947Z', u'module': u'ns_rebalancer', u'tstamp': 1404392631947, u'type': u'info'}
[2014-07-03 06:04:05,849] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.3.199', u'code': 4, u'text': u"Starting rebalance, KeepNodes = ['ns_1@10.3.3.199','ns_1@10.3.3.240',\n 'ns_1@10.3.3.218'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes\n", u'shortText': u'message', u'serverTime': u'2014-07-03T06:03:51.642Z', u'module': u'ns_orchestrator', u'tstamp': 1404392631642, u'type': u'info'}
[2014-07-03 06:04:05,849] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.3.199', u'code': 0, u'text': u"Haven't heard from a higher priority node or a master, so I'm taking over.", u'shortText': u'message', u'serverTime': u'2014-07-03T06:03:51.435Z', u'module': u'mb_master', u'tstamp': 1404392631435, u'type': u'info'}
[2014-07-03 06:04:05,849] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.3.199', u'code': 3, u'text': u'Node ns_1@10.3.3.199 joined cluster', u'shortText': u'message', u'serverTime': u'2014-07-03T06:03:41.841Z', u'module': u'ns_cluster', u'tstamp': 1404392621841, u'type': u'info'}
[2014-07-03 06:04:05,850] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.3.199', u'code': 1, u'text': u'Couchbase Server has started on web port 8091 on node \'ns_1@10.3.3.199\'. Version: "3.0.0-918-rel-enterprise".', u'shortText': u'web start ok', u'serverTime': u'2014-07-03T06:03:41.716Z', u'module': u'menelaus_sup', u'tstamp': 1404392621716, u'type': u'info'}
[2014-07-03 06:04:05,850] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.3.199', u'code': 0, u'text': u"Current master is older and I'll try to takeover", u'shortText': u'message', u'serverTime': u'2014-07-03T06:03:40.911Z', u'module': u'mb_master', u'tstamp': 1404392620911, u'type': u'warning'}
ERROR

 Comments   
Comment by Sangharsh Agarwal [ 07/Jul/14 ]
error_logger:error,2014-07-03T6:03:55.086,ns_1@10.3.3.199:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:
    initial call: ns_single_vbucket_mover:mover/6
    pid: <0.1307.0>
    registered_name: []
    exception error: no match of right hand side value wrong_rebalancer_pid
      in function janitor_agent:initiate_indexing/5 (src/janitor_agent.erl, line 418)
      in call from ns_single_vbucket_mover:mover_inner/5 (src/ns_single_vbucket_mover.erl, line 378)
      in call from ns_single_vbucket_mover:'-mover/6-fun-2-'/7 (src/ns_single_vbucket_mover.erl, line 83)
      in call from misc:try_with_maybe_ignorant_after/2 (src/misc.erl, line 1474)
      in call from ns_single_vbucket_mover:mover/6 (src/ns_single_vbucket_mover.erl, line 77)
    ancestors: [<0.1303.0>,<0.1188.0>,<0.1180.0>,mb_master_sup,mb_master,
                  ns_server_sup,ns_server_cluster_sup,<0.60.0>]
    messages: []
    links: [<0.1303.0>]
    dictionary: [{cleanup_list,[<0.1319.0>]}]
    trap_exit: true
    status: running
    heap_size: 1598
    stack_size: 27
    reductions: 1723
  neighbours:
Comment by Sangharsh Agarwal [ 07/Jul/14 ]
[error_logger:error,2014-07-03T6:03:55.198,ns_1@10.3.3.199:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:
    initial call: ns_vbucket_mover:init/1
    pid: <0.1303.0>
    registered_name: []
    exception exit: {badmatch,wrong_rebalancer_pid}
      in function gen_server:terminate/6 (gen_server.erl, line 744)
    ancestors: [<0.1188.0>,<0.1180.0>,mb_master_sup,mb_master,ns_server_sup,
                  ns_server_cluster_sup,<0.60.0>]
    messages: []
    links: [<0.1188.0>,<0.1304.0>,<0.277.0>]
    dictionary: [{bucket_name,"default"},
                  {i_am_master_mover,true},
                  {child_processes,[<0.1307.0>]}]
    trap_exit: true
    status: running
    heap_size: 121536
    stack_size: 27
    reductions: 621140
  neighbours:
    neighbour: [{pid,<0.1205.0>},
                  {registered_name,[]},
                  {initial_call,{erlang,apply,['Argument__1','Argument__2']}},
                  {current_function,
                      {ns_rebalance_observer,docs_left_updater_loop,1}},
                  {ancestors,
                      [<0.1203.0>,<0.1188.0>,<0.1180.0>,mb_master_sup,
                       mb_master,ns_server_sup,ns_server_cluster_sup,
                       <0.60.0>]},
                  {messages,[]},
                  {links,[<0.1203.0>,<0.277.0>]},
                  {dictionary,[]},
                  {trap_exit,false},
                  {status,waiting},
                  {heap_size,233},
                  {stack_size,6},
                  {reductions,32}]
    neighbour: [{pid,<0.1203.0>},
                  {registered_name,[]},
                  {initial_call,{ns_rebalance_observer,init,['Argument__1']}},
                  {current_function,{misc,parallel_map,3}},
                  {ancestors,
                      [<0.1188.0>,<0.1180.0>,mb_master_sup,mb_master,
                       ns_server_sup,ns_server_cluster_sup,<0.60.0>]},
                  {messages,
                      [{'$gen_cast',
                           {note,#Fun<ns_rebalance_observer.0.101366115>,
                               {{1404,392633,625157},
                                vbucket_move_start,<0.1307.0>,"default",
                                'ns_1@10.3.3.240',852,
                                ['ns_1@10.3.3.240','ns_1@10.3.3.218'],
                                ['ns_1@10.3.3.218','ns_1@10.3.3.199']}}},
                       {'$gen_cast',
                           {note,#Fun<ns_rebalance_observer.1.101366115>,
                               {{1404,392635,30140},
                                tap_estimate,
                                {replica_building,"default",852,
                                    'ns_1@10.3.3.240','ns_1@10.3.3.199'},
                                0,<0.1320.0>,
                                <<"replication_building_852_'ns_1@10.3.3.199'">>}}},
                       {'$gen_cast',
                           {note,#Fun<ns_rebalance_observer.1.101366115>,
                               {{1404,392635,50271},
                                tap_estimate,
                                {replica_building,"default",852,
                                    'ns_1@10.3.3.240','ns_1@10.3.3.218'},
                                0,<17451.419.1>,
                                <<"replication_building_852_'ns_1@10.3.3.218'">>}}}]},
                  {links,[<0.1204.0>,<0.1205.0>,<0.1188.0>]},
                  {dictionary,[]},
                  {trap_exit,false},
                  {status,waiting},
                  {heap_size,46422},
                  {stack_size,22},
                  {reductions,29104}]
    neighbour: [{pid,<0.1188.0>},
                  {registered_name,[]},
                  {initial_call,{erlang,apply,['Argument__1','Argument__2']}},
                  {current_function,{ns_rebalancer,wait_for_mover_tail,2}},
                  {ancestors,[<0.1180.0>,mb_master_sup,mb_master,
                              ns_server_sup,ns_server_cluster_sup,<0.60.0>]},
                  {messages,[]},
                  {links,[<0.1203.0>,<0.1303.0>,<0.1180.0>]},
                  {dictionary,[{random_seed,{3688,3451,10969}}]},
                  {trap_exit,false},
                  {status,waiting},
                  {heap_size,75113},
                  {stack_size,38},
                  {reductions,3882374}]
Comment by Aleksey Kondratenko [ 07/Jul/14 ]
MB-11647




[MB-11647] [CentOS] Online upgrade failed 2.2.0-837 -> 3.0.0-918 due to rebalance failed with badarg,\n [{erlang,\'++\',\n [{\'EXIT\',\n {{function_clause,\n [{janitor_agent,handle_call,\n [{get_mass_upr_docs_estimate,[]} Created: 07/Jul/14  Updated: 10/Jul/14  Resolved: 07/Jul/14

Status: Closed
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Sangharsh Agarwal Assignee: Sangharsh Agarwal
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Online upgrade 2.2.0-837 -> 3.0.0-918

CentOS - 5.8

Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: 10.3.121.199 : https://s3.amazonaws.com/bugdb/jira/MB-11647/4ef1fc5a/10.3.121.199-diag.txt.gz
10.3.121.199 : https://s3.amazonaws.com/bugdb/jira/MB-11647/e7ae484e/10.3.121.199-732014-1740-diag.zip
10.3.3.126 : https://s3.amazonaws.com/bugdb/jira/MB-11647/0bc1e660/10.3.3.126-diag.txt.gz
10.3.3.126 : https://s3.amazonaws.com/bugdb/jira/MB-11647/55a62c49/10.3.3.126-732014-1734-diag.zip
10.3.5.11 : https://s3.amazonaws.com/bugdb/jira/MB-11647/0f2042f8/10.3.5.11-diag.txt.gz
10.3.5.11 : https://s3.amazonaws.com/bugdb/jira/MB-11647/32d10604/10.3.5.11-732014-1735-diag.zip
10.3.5.60 : https://s3.amazonaws.com/bugdb/jira/MB-11647/8659cc65/10.3.5.60-diag.txt.gz
10.3.5.60 : https://s3.amazonaws.com/bugdb/jira/MB-11647/b5ce3884/10.3.5.60-732014-1738-diag.zip
10.3.5.61 : https://s3.amazonaws.com/bugdb/jira/MB-11647/6da916c3/10.3.5.61-732014-1737-diag.zip
10.3.5.61 : https://s3.amazonaws.com/bugdb/jira/MB-11647/91053494/10.3.5.61-diag.txt.gz
Is this a Regression?: Yes

 Description   
http://qa.hq.northscale.net/job/centos_x64--104_01--XDCR_upgrade-P1/16/consoleFull

[Test Error]
[2014-07-03 17:31:30,321] - [rest_client:1399] INFO - Node versions in cluster [u'2.2.0-837-rel-enterprise', u'2.2.0-837-rel-enterprise']
[2014-07-03 17:31:30,339] - [rest_client:1399] INFO - Node versions in cluster [u'3.0.0-918-rel-enterprise']
[2014-07-03 17:31:31,293] - [task:284] INFO - adding node 10.3.121.199:8091 to cluster
[2014-07-03 17:31:31,293] - [rest_client:933] INFO - adding remote node @10.3.121.199:8091 to this cluster @10.3.3.126:8091
[2014-07-03 17:31:31,661] - [rest_client:1087] INFO - rebalance params : password=password&ejectedNodes=&user=Administrator&knownNodes=ns_1%4010.3.121.199%2Cns_1%4010.3.3.126%2Cns_1%4010.3.5.11
[2014-07-03 17:31:41,726] - [rest_client:1091] INFO - rebalance operation started
[2014-07-03 17:31:41,731] - [rest_client:1208] INFO - rebalance percentage : 0 %
[2014-07-03 17:31:51,740] - [rest_client:1192] ERROR - {u'status': u'none', u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try rebalance again.'} - rebalance failed
[2014-07-03 17:31:51,847] - [rest_client:1999] INFO - Latest logs from UI on 10.3.3.126:
[2014-07-03 17:31:51,847] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.121.199', u'code': 2, u'text': u'Rebalance exited with reason {badarg,\n [{erlang,\'++\',\n [{\'EXIT\',\n {{function_clause,\n [{janitor_agent,handle_call,\n [{get_mass_upr_docs_estimate,[]},\n {<0.1293.0>,#Ref<0.0.0.13844>},\n {state,"default",<0.1299.0>,\n #Ref<17408.0.1.261106>,[],\n [active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n 
active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n 
replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica],\n [undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n 
undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n 
undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n 
undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n passive,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined],\n 0}]},\n {gen_server,handle_msg,5},\n {proc_lib,init_p_do_apply,3}]},\n {gen_server,call,\n [{\'janitor_agent-default\',\n \'ns_1@10.3.3.126\'},\n {get_mass_upr_docs_estimate,[]},\n infinity]}}},\n {\'EXIT\',\n {{function_clause,\n [{janitor_agent,handle_call,\n [{get_mass_upr_docs_estimate,[]},\n {<0.1294.0>,#Ref<0.0.0.13747>},\n {state,"default",<0.1299.0>,\n #Ref<17413.0.1.257063>,[],\n [replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n 
replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n 
replica,replica,replica,replica,\n replica,replica,replica,replica,\n replica,replica,replica,replica,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n 
active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active,active,active,active,\n active,active],\n [undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n 
undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n 
undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n 
undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined,undefined,undefined,\n undefined],\n 0}]},\n {gen_server,handle_msg,5},\n {proc_lib,init_p_do_apply,3}]},\n {gen_server,call,\n [{\'janitor_agent-default\',\n \'ns_1@10.3.5.11\'},\n {get_mass_upr_docs_estimate,[]},\n infinity]}}}],\n []},\n {lists,append,1,\n [{file,"lists.erl"},{line,125}]},\n {lists,append,1,\n [{file,"lists.erl"},{line,125}]},\n {ns_rebalance_observer,\n initiate_bucket_rebalance,2,\n [{file,"src/ns_rebalance_observer.erl"},\n {line,209}]},\n {ns_rebalance_observer,handle_set_ff_map,2,\n [{file,"src/ns_rebalance_observer.erl"},\n {line,257}]},\n {ns_rebalance_observer,handle_cast,2,\n [{file,"src/ns_rebalance_observer.erl"},\n {line,105}]},\n {gen_server,handle_msg,5,\n [{file,"gen_server.erl"},{line,604}]},\n {proc_lib,init_p_do_apply,3,\n [{file,"proc_lib.erl"},{line,239}]}]}\n', u'shortText': u'message', u'module': u'ns_orchestrator', u'tstamp': 1404433902422, u'type': u'info'}
[2014-07-03 17:31:51,852] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.121.199', u'code': 0, u'text': u'Bucket "default" rebalance does not seem to be swap rebalance', u'shortText': u'message', u'module': u'ns_vbucket_mover', u'tstamp': 1404433902240, u'type': u'info'}
[2014-07-03 17:31:51,852] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.121.199', u'code': 0, u'text': u'Bucket "default" loaded on node \'ns_1@10.3.121.199\' in 0 seconds.', u'shortText': u'message', u'module': u'ns_memcached', u'tstamp': 1404433901363, u'type': u'info'}
[2014-07-03 17:31:51,853] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.121.199', u'code': 0, u'text': u'Started rebalancing bucket default', u'shortText': u'message', u'module': u'ns_rebalancer', u'tstamp': 1404433900963, u'type': u'info'}
[2014-07-03 17:31:51,853] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.121.199', u'code': 4, u'text': u"Starting rebalance, KeepNodes = ['ns_1@10.3.121.199','ns_1@10.3.3.126',\n 'ns_1@10.3.5.11'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes\n", u'shortText': u'message', u'module': u'ns_orchestrator', u'tstamp': 1404433900867, u'type': u'info'}
[2014-07-03 17:31:51,853] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.121.199', u'code': 0, u'text': u"Haven't heard from a higher priority node or a master, so I'm taking over.", u'shortText': u'message', u'module': u'mb_master', u'tstamp': 1404433900699, u'type': u'info'}
[2014-07-03 17:31:51,853] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.121.199', u'code': 3, u'text': u'Node ns_1@10.3.121.199 joined cluster', u'shortText': u'message', u'module': u'ns_cluster', u'tstamp': 1404433890762, u'type': u'info'}
[2014-07-03 17:31:51,854] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.121.199', u'code': 1, u'text': u'Couchbase Server has started on web port 8091 on node \'ns_1@10.3.121.199\'. Version: "3.0.0-918-rel-enterprise".', u'shortText': u'web start ok', u'module': u'menelaus_sup', u'tstamp': 1404433890737, u'type': u'info'}
[2014-07-03 17:31:51,854] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.121.199', u'code': 4, u'text': u"Node 'ns_1@10.3.121.199' saw that node 'ns_1@10.3.5.11' came up. Tags: []", u'shortText': u'node up', u'module': u'ns_node_disco', u'tstamp': 1404433890636, u'type': u'info'}
[2014-07-03 17:31:51,854] - [rest_client:2000] ERROR - {u'node': u'ns_1@10.3.5.11', u'code': 4, u'text': u"Node 'ns_1@10.3.5.11' saw that node 'ns_1@10.3.121.199' came up. Tags: []", u'shortText': u'node up', u'module': u'ns_node_disco', u'tstamp': 1404433890634, u'type': u'info'}
[

 Comments   
Comment by Sangharsh Agarwal [ 07/Jul/14 ]
[ns_server:info,2014-07-03T17:31:42.405,ns_1@10.3.3.126:<0.31057.0>:ebucketmigrator_srv:init:525]Setting {"10.3.3.126",11209} vbucket 852 to state replica
[views:debug,2014-07-03T17:31:42.410,ns_1@10.3.3.126:mc_couch_events<0.381.0>:capi_set_view_manager:handle_mc_couch_event:530]Got set_vbucket event for default/852. Updated state: pending (1)
[ns_server:debug,2014-07-03T17:31:42.410,ns_1@10.3.3.126:<0.9303.0>:mc_connection:do_notify_vbucket_update:112]Signaled mc_couch_event: {set_vbucket,"default",852,pending,1}
[error_logger:error,2014-07-03T17:31:42.415,ns_1@10.3.3.126:error_logger<0.6.0>:ale_error_logger_handler:log_report:72]
=========================CRASH REPORT=========================
  crasher:
    initial call: janitor_agent:init/1
    pid: <0.9293.0>
    registered_name: 'janitor_agent-default'
    exception exit: {function_clause,
                        [{janitor_agent,handle_call,
                             [{get_mass_upr_docs_estimate,[]},
                              {<18256.1293.0>,#Ref<18256.0.0.13844>},
                              {state,"default",<18256.1299.0>,
                                  #Ref<0.0.1.261106>,[],
                                  [active,active,active,active,active,active,
                                   active,active,active,active,active,active,
                                   active,active,active,active,active,active,
Comment by Aleksey Kondratenko [ 07/Jul/14 ]
http://review.couchbase.org/39164
Comment by Sangharsh Agarwal [ 10/Jul/14 ]
Verified on build 3.0.0-933




[MB-11646] bidirectional xdcr: fullEviction=true: not all items are replicated Created: 06/Jul/14  Updated: 11/Jul/14  Resolved: 11/Jul/14

Status: Resolved
Project: Couchbase Server
Component/s: cross-datacenter-replication
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Iryna Mironava Assignee: Aleksey Kondratenko
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 3.0.0-926

Triage: Untriaged
Operating System: Centos 64-bit
Link to Log File, atop/blg, CBCollectInfo, Core dump: https://s3.amazonaws.com/bugdb/jira/MB-11646/dca3be89/172.27.33.10-762014-189-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11646/dca3be89/172.27.33.11-762014-1813-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11646/dca3be89/172.27.33.12-762014-1811-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11646/dca3be89/172.27.33.13-762014-1815-diag.zip
Is this a Regression?: Unknown

 Description   
./testrunner -i my_xdcr.ini -t xdcr.uniXDCR.unidirectional.load_with_async_ops,rdirection=unidirection,ctopology=chain,doc-ops=update-delete,sasl_buckets=1,replication_type=xmem,eviction_policy=fullEviction,dgm_run=true,active_resident_threshold=90,items=1250,value_size=2048

steps to reproduce:
1) 2 clusters with 2 buckets each, fullEviction policy, resident ratio 90% or less; load items
2) set up bidirectional XDCR

expected:
items are replicated

actual: the .11 node (source) has 50876 items and all others have 50875

[imironava@bella testrunner]$ ./scripts/ssh.py -i my_xdcr.ini "/opt/couchbase/bin/cbstats localhost:11210 all|grep curr_items"
172.27.33.10
 curr_items: 25437
 curr_items_tot: 50875
 vb_active_curr_items: 25437
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25438

172.27.33.12
 curr_items: 25437
 curr_items_tot: 50875
 vb_active_curr_items: 25437
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25438

172.27.33.13
 curr_items: 25438
 curr_items_tot: 50875
 vb_active_curr_items: 25438
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25437

172.27.33.11
 curr_items: 25439
 curr_items_tot: 50876
 vb_active_curr_items: 25439
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25437

attaching cbcollects
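
For context, the "items are replicated" check boils down to comparing active item counts across the two clusters. A minimal sketch of that comparison (a hypothetical helper, not the testrunner implementation), using the per-node numbers reported in the comments below:

def cluster_active_items(node_stats):
    """Sum vb_active_curr_items over all nodes of one cluster."""
    return sum(int(stats["vb_active_curr_items"]) for stats in node_stats.values())

# node -> stats parsed from "cbstats localhost:11210 all"
cluster_a = {"172.27.33.10": {"vb_active_curr_items": "25438"},
             "172.27.33.11": {"vb_active_curr_items": "25438"}}
cluster_b = {"172.27.33.12": {"vb_active_curr_items": "25437"},
             "172.27.33.13": {"vb_active_curr_items": "25438"}}

a, b = cluster_active_items(cluster_a), cluster_active_items(cluster_b)
print("cluster A=%d, cluster B=%d, delta=%d" % (a, b, a - b))  # delta of 1 matches the report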

 Comments   
Comment by Aleksey Kondratenko [ 08/Jul/14 ]
I will need xdcr_trace enabled and data files as usual
Comment by Aruna Piravi [ 08/Jul/14 ]
Unable to reproduce this in my local environment.

on cluster node: 10.3.4.186 for bucket default is 0
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.188:11210 sasl_bucket_1
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.189:11210 sasl_bucket_1
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [task.check] Saw curr_items 50875 == 50875 expected on '10.3.4.188:8091''10.3.4.189:8091',sasl_bucket_1 bucket
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.188:11210 sasl_bucket_1
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.189:11210 sasl_bucket_1
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [task.check] Saw vb_active_curr_items 50875 == 50875 expected on '10.3.4.188:8091''10.3.4.189:8091',sasl_bucket_1 bucket
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.188:11210 sasl_bucket_1
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.189:11210 sasl_bucket_1
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [task.check] Saw vb_replica_curr_items 50875 == 50875 expected on '10.3.4.188:8091''10.3.4.189:8091',sasl_bucket_1 bucket
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.188:11210 default
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.189:11210 default
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [task.check] Saw curr_items 50875 == 50875 expected on '10.3.4.188:8091''10.3.4.189:8091',default bucket
2014-07-08 12:06:19 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.188:11210 default
2014-07-08 12:06:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.189:11210 default
2014-07-08 12:06:20 | INFO | MainProcess | Cluster_Thread | [task.check] Saw vb_active_curr_items 50875 == 50875 expected on '10.3.4.188:8091''10.3.4.189:8091',default bucket
2014-07-08 12:06:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.188:11210 default
2014-07-08 12:06:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.3.4.189:11210 default
2014-07-08 12:06:20 | INFO | MainProcess | Cluster_Thread | [task.check] Saw vb_replica_curr_items 50875 == 50875 expected on '10.3.4.188:8091''10.3.4.189:8091',default bucket
2014-07-08 12:06:20 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 10.3.4.188:11210 sasl_bucket_1
2014-07-08 12:06:20 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 10.3.4.189:11210 sasl_bucket_1
2014-07-08 12:06:20 | INFO | MainProcess | test_thread | [task.__init__] 51250 items will be verified on sasl_bucket_1 bucket
2014-07-08 12:06:20 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 10.3.4.188:11210 default
2014-07-08 12:06:21 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 10.3.4.189:11210 default
2014-07-08 12:06:21 | INFO | MainProcess | test_thread | [task.__init__] 51250 items will be verified on default bucket



unknownc8e0eb186a83:testrunner apiravi$ ./scripts/ssh.py -i bixdcr.ini "/opt/couchbase/bin/cbstats localhost:11210 all|grep curr_items"
10.3.4.186
 curr_items: 25437
 curr_items_tot: 50875
 vb_active_curr_items: 25437
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25438

10.3.4.187
 curr_items: 25438
 curr_items_tot: 50875
 vb_active_curr_items: 25438
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25437

10.3.4.188
 curr_items: 25437
 curr_items_tot: 50875
 vb_active_curr_items: 25437
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25438

10.3.4.189
 curr_items: 25438
 curr_items_tot: 50875
 vb_active_curr_items: 25438
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25437
Comment by Aruna Piravi [ 08/Jul/14 ]
Hi Iryna,

Since I'm unable to reproduce the problem in my environment, can you enable the trace, collect the data files, and run cbcollect?
XDCR tracing is enabled by default when testrunner is used. Please check whether you see the diag/eval messages during setup, as shown:

2014-07-08 11:54:22 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=2069&username=Administrator&password=password
2014-07-08 11:54:22 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 10.3.4.187:8091:username=Administrator&password=password&port=8091
2014-07-08 11:54:22 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=2069&username=Administrator&password=password
2014-07-08 11:54:23 | INFO | MainProcess | test_thread | [rest_client.diag_eval] /diag/eval status on 10.3.4.186:8091: True content: 'ale:set_loglevel(xdcr_trace, debug).' command: 'ale:set_loglevel(xdcr_trace, debug).'
2014-07-08 11:54:23 | INFO | MainProcess | test_thread | [rest_client.diag_eval] /diag/eval status on 10.3.4.187:8091: True content: 'ale:set_loglevel(xdcr_trace, debug).' command: 'ale:set_loglevel(xdcr_trace, debug).'
2014-07-08 11:54:23 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 10.3.4.187:8091 to cluster
2014-07-08 11:54:23 | INFO | MainProcess | Cluster_Thread | [rest_client.add_node] adding remote node @10.3.4.187:8091 to

To get the data files, please stop the test before teardown and run:

python scripts/collect_data_files.py -i <.ini>

This has also been automated: if testrunner finds any mismatch in item count or metadata, it will collect the data files too.
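
If tracing needs to be enabled outside of testrunner, a minimal sketch of the same diag/eval call is below, assuming the requests library and the Administrator/password credentials shown in the setup log above:

import requests

# Hypothetical helper: issue the same ale:set_loglevel call that testrunner
# sends through /diag/eval (see the rest_client.diag_eval log lines above).
def enable_xdcr_trace(host, user="Administrator", password="password"):
    resp = requests.post("http://%s:8091/diag/eval" % host,
                         data="ale:set_loglevel(xdcr_trace, debug).",
                         auth=(user, password))
    resp.raise_for_status()
    return resp.text

for node in ("10.3.4.186", "10.3.4.187"):
    print(node, enable_xdcr_trace(node))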

Thanks
Aruna
Comment by Iryna Mironava [ 09/Jul/14 ]
*.10 and *.11 are one cluster; *.12 and *.13 are the other.
[imironava@bella testrunner]$ ./scripts/ssh.py -i my_xdcr.ini "/opt/couchbase/bin/cbstats localhost:11210 all -b sasl_bucket_1 -p password|grep curr_items"
172.27.33.12
 curr_items: 25437
 curr_items_tot: 50875
 vb_active_curr_items: 25437
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25438

172.27.33.10
 curr_items: 25438
 curr_items_tot: 50876
 vb_active_curr_items: 25438
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25438

172.27.33.13
 curr_items: 25438
 curr_items_tot: 50875
 vb_active_curr_items: 25438
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25437

172.27.33.11
 curr_items: 25438
 curr_items_tot: 50876
 vb_active_curr_items: 25438
 vb_pending_curr_items: 0
 vb_replica_curr_items: 25438

cbcollect with traces enabled:
https://s3.amazonaws.com/bugdb/jira/MB-11646/dba3be89/172.27.33.10-792014-1846-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11646/dba3be89/172.27.33.11-792014-1848-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11646/dba3be89/172.27.33.12-792014-1847-diag.zip
https://s3.amazonaws.com/bugdb/jira/MB-11646/dba3be89/172.27.33.13-792014-1850-diag.zip

data_files:
https://s3.amazonaws.com/bugdb/jira/MB-11646/dba3be89/172.27.33.13-792014-1845-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-11646/dba3be89/172.27.33.12-792014-1844-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-11646/dba3be89/172.27.33.11-792014-1845-couch.tar.gz
https://s3.amazonaws.com/bugdb/jira/MB-11646/dba3be89/172.27.33.10-792014-1844-couch.tar.gz
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
Iryna, Aruna, please synchronize on this topic.

I do recall this was discussed and we made a decision that tests will not be failed _just based on stats_.

In this case, looking at the actual data, I'm seeing that all nodes have exactly the same data on disk, at least.

Next time I'm going to bounce tickets where the only evidence of an XDCR docs mismatch is stats.

A stats mismatch is of course a bug, but an ep-engine bug, not an XDCR bug. And AFAIK it is a known issue and much less severe than an actual data mismatch.
Comment by Aleksey Kondratenko [ 11/Jul/14 ]
Not a bug based on my findings.




[MB-11645] Make cbepctl parameter options consistent compared with the actual list of parameters that we support in 3.0 Created: 03/Jul/14  Updated: 03/Jul/14

Status: Open
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 2.5.1
Fix Version/s: 3.0
Security Level: Public

Type: Bug Priority: Major
Reporter: Chiyoung Seo Assignee: Abhinav Dangeti
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Triage: Untriaged
Is this a Regression?: Unknown

 Description   
There are some mismatches between the cbepctl parameter options and the list of configurable engine parameters that we support in 3.0.

We need to make them consistent, so that the Couchbase documentation (http://docs.couchbase.com/couchbase-manual-2.5/cb-cli/index.html#cbepctl-tool) can be kept consistent as well.
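
For context, a typical invocation of the tool looks like the sketch below. exp_pager_stime is just one example flush_param; whether every configurable engine parameter has a matching cbepctl option is exactly the mismatch this ticket tracks.

import subprocess

# Example only: set the expiry pager interval through cbepctl.
# The parameter group (flush_param) and name must line up with the engine
# parameters supported in 3.0, which is what this ticket asks to verify.
subprocess.check_call([
    "/opt/couchbase/bin/cbepctl", "localhost:11210",
    "set", "flush_param", "exp_pager_stime", "3600",
])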





[MB-11644] {UPR}: DDOC fragmentation failure during rebalance-out Created: 03/Jul/14  Updated: 07/Jul/14  Resolved: 05/Jul/14

Status: Closed
Project: Couchbase Server
Component/s: None
Affects Version/s: 3.0
Fix Version/s: 3.0
Security Level: Public

Type: Task Priority: Major
Reporter: Parag Agarwal Assignee: Andrei Baranouski
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: INI
[global]
port:8091

[servers]
1:vm1
2:vm2
3:vm3
4:vm4
5:vm5
6:vm6
7:vm7

[vm1]
ip:10.5.2.13
[vm2]
ip:10.5.2.14
[vm3]
ip:10.5.2.15
[vm4]
ip:10.3.121.63
[vm5]
ip:10.3.121.64
[vm6]
ip:10.3.121.66
[vm7]
ip:10.3.121.69



 Description   
3.0.0-918, CentOS 6.x

1. Create 7 Node cluster
2. Add default bucket
3. Add 500K items
4. Create 5 views and switch on auto compaction
5. Mutate operations on the existing clients
6. Monitor auto compaction

Step 5 fails on compaction, with fragmentation at around 60: "ddoc1: current amount of fragmentation = 60"

Test Case:: ./testrunner -i centos.ini get-cbcollect-info=False,get-logs=False,stop-on-failure=False,get-coredumps=False,force_kill_memached=False,verify_unacked_bytes=True -t rebalance.rebalanceout.RebalanceOutTests.rebalance_out_with_ddoc_compaction,items=500000,max_verify=100000,fragmentation_value=60,GROUP=OUT;BASIC;COMPACTION;P0;FROM_2_0

https://s3.amazonaws.com/bugdb/jira/MB-11644/3.0.0_918.tar.gz
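
For reference, the manual design-document compaction the test triggers (see the rest_client.ddoc_compaction log line in the comments) corresponds to a REST call along these lines. A sketch only: the controller/compactView endpoint path and the Administrator/password credentials are assumptions, not taken from this ticket.

import requests

# Assumed endpoint: POST /pools/default/buckets/<bucket>/ddocs/<ddoc>/controller/compactView
# The design-document name is URL-encoded, as in the test logs (_design%2fddoc1).
def compact_ddoc(host, bucket, ddoc, user="Administrator", password="password"):
    url = "http://%s:8091/pools/default/buckets/%s/ddocs/_design%%2f%s/controller/compactView" % (
        host, bucket, ddoc)
    resp = requests.post(url, auth=(user, password))
    resp.raise_for_status()

compact_ddoc("10.5.2.13", "default", "ddoc1")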

 Comments   
Comment by Aleksey Kondratenko [ 03/Jul/14 ]
What _exactly_ is a bug here?

Note that, as far as I know, there's no spec requiring us to always maintain a certain limit on fragmentation.
Comment by Aleksey Kondratenko [ 03/Jul/14 ]
If you have data showing that its behavior is worse than 2.5, then it's a possible performance issue to investigate. But I'm not seeing any evidence of that yet.
Comment by Parag Agarwal [ 03/Jul/14 ]
The bug is that the fragmentation check failed while fragmentation was at 60. We wanted to know why.
Comment by Parag Agarwal [ 03/Jul/14 ]
This is what the monitoring task for compaction does
class MonitorViewFragmentationTask(Task):

    """
        Attempt to monitor the fragmentation occurring for a given design_doc.
        The execute stage is just for preliminary sanity checking of values and environment.

        The check function looks at the index file across all nodes and attempts to calculate
        the total fragmentation caused by the views within the design_doc.

        Note: If autocompaction is enabled and the user attempts to monitor for a fragmentation
        value higher than the level at which auto_compaction kicks in, a warning is sent;
        it is best for the user to use a lower value, as this can lead to infinite monitoring.
    """

In our case the fragmentation threshold is set to 60%, but per the logs fragmentation reaches 60% and the monitoring task dies prematurely:

Traceback (most recent call last):
  File "pytests/rebalance/rebalanceout.py", line 389, in rebalance_out_with_ddoc_compaction
    self.fail("async_monitor_view_fragmentation failed prematurely")
AssertionError: async_monitor_view_fragmentation failed prematurely

Will try to re-run this in 2.5.1
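
For context on what the monitor measures, a minimal sketch of the usual fragmentation calculation is below, assuming the view engine reports disk_size and data_size per design document; the fragmentation_value=60 in the test refers to this percentage:

def view_fragmentation(disk_size, data_size):
    """Fragmentation as the percentage of the index file not occupied by live data."""
    if disk_size == 0:
        return 0.0
    return (disk_size - data_size) * 100.0 / disk_size

# e.g. a 100 MB index file holding 40 MB of live data is 60% fragmented,
# which is right at the fragmentation_value=60 threshold used in this test
print(view_fragmentation(100 * 1024 * 1024, 40 * 1024 * 1024))  # 60.0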

Comment by Aleksey Kondratenko [ 03/Jul/14 ]
Let's put it differently. Let's ignore the test; I have reason to believe it's invalid.

What _exactly_ on the server is not working as advertised?
Comment by Parag Agarwal [ 03/Jul/14 ]
I am still investigating, will let you know what I find.
Comment by Parag Agarwal [ 03/Jul/14 ]
Seeing the same failure in 2.5.1.
Comment by Parag Agarwal [ 03/Jul/14 ]
Andrei: can you please give your input on this test?
Comment by Parag Agarwal [ 03/Jul/14 ]
This is a test issue. Will try to investigate and fix it.
Comment by Andrei Baranouski [ 05/Jul/14 ]
Please provide links to the Jenkins runs where you hit the issue, because:
1) the issue is more clearly apparent there...
2) when the ticket is "fixed", I will not need to run the tests manually to verify it; it will be enough to restart the job
Comment by Parag Agarwal [ 05/Jul/14 ]
http://qa.hq.northscale.net/job/centos_x64--02_02--Rebalance-Out/33/
Comment by Andrei Baranouski [ 05/Jul/14 ]
Thanks Parag,

please see the fix http://review.couchbase.org/#/c/39139/

test logs:
https://friendpaste.com/5c51nS2nOpAQxY1hXO47Ie

2014-07-05 13:42:10 | INFO | MainProcess | Cluster_Thread | [rest_client.ddoc_compaction] compaction for ddoc '_design%2fddoc1' was triggered
2014-07-05 13:42:10 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance params : password=password&ejectedNodes=ns_1%4010.3.121.66&user=Administrator&knownNodes=ns_1%4010.5.2.14%2Cns_1%4010.5.2.15%2Cns_1%4010.3.121.63%2Cns_1%4010.3.121.69%2Cns_1%4010.5.2.13%2Cns_1%4010.3.121.66%2Cns_1%4010.3.121.64
2014-07-05 13:42:10 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance operation started
2014-07-05 13:42:10 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_progress] rebalance percentage : 0 %
2014-07-05 13:42:15 | INFO | MainProcess | Cluster_Thread | [task.check] ddoc1: stats compaction:revision and fragmentation: (1,10.7187261528)
2014-07-05 13:42:15 | INFO | MainProcess | Cluster_Thread | [task.check] ddoc1: compactor was run, compaction revision was changed o