[MB-5602] auto-failover fails over a node if some of the buckets have already been rebalanced out but the rebalance was stopped or interrupted (auto-failover should only fail over a node if all of its buckets are down) Created: 18/Jun/12  Updated: 09/Jan/13  Resolved: 19/Jun/12

Status: Closed
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 1.8.1-release-candidate
Fix Version/s: 1.8.1
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Ketaki Gangal Assignee: Aleksey Kondratenko
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 18 node cluster, Centos
Build 181-918
2 buckets, 1024vbuckets

1. Set up an 18-node cluster with 2 buckets: bucket1, bucket2
2. Enable auto-failover
3. Add a new node, 126
4. Rebalance

1. Rebalance works fine, but these log messages appear:

Could not automatically failover node 'ns_1@<node>' because I think rebalance is running auto_failover000 ns_1@<node> 19:32:12 - Sun Jun 17, 2012
Bucket "bucket1" loaded on node 'ns_1@<node>' in 0 seconds. ns_memcached001 ns_1@<node> 19:32:04 - Sun Jun 17, 2012
Started rebalancing bucket bucket2 ns_rebalancer000 ns_1@<node> 19:31:36 - Sun Jun 17, 2012
Starting rebalance, KeepNodes = ['ns_1@<node>','ns_1@<node>',
'ns_1@<node>'], EjectNodes = []

Attached are the web-logs and logs from master node-104.


Other related conversation
I have enabled auto-failover on the large cluster, and every time I rebalance in a node I get an error message saying "Could not automatically failover node 'ns_1@<node>' because I think rebalance is running".
Node 126 is newly added and a rebalance was issued; is this message displayed because the node is not yet ready to join the cluster?
The rebalance works fine, but I do not understand why auto-failover is attempted here. Any idea?

No. According to the logs, bucket1 was loaded at 19:32:04. Maybe there are some other buckets that are still not ready on this node. May I have the logs?

Comment by Farshid Ghods (Inactive) [ 18/Jun/12 ]

This happens if you have 2 or more buckets, auto-failover is enabled, and both buckets hold a significant amount of data.

After the first bucket is rebalanced out, the node will incorrectly be interpreted as down by the auto-failover service, since it no longer has all the buckets the service (incorrectly) thinks it needs to have.

Normally a running rebalance prevents auto-failover from actually doing anything, but if the rebalance is stopped, the partially rebalanced-out node will be automatically failed over.
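The buggy interpretation described above can be sketched as follows. This is a hedged illustration in Python with hypothetical names, not the actual ns_server code (which is Erlang, in src/auto_failover.erl): the pre-fix check effectively assumed every node must be serving every bucket, so a node rebalanced out of one bucket but not the other looks down.

```python
# Sketch (hypothetical names) of the pre-fix down-node check.

def down_nodes_buggy(cluster_nodes, buckets_loaded_on):
    """Return nodes considered 'down'.

    buckets_loaded_on maps bucket name -> set of nodes currently
    serving that bucket. The bug: a node is flagged as down if it
    is missing from ANY bucket, even one it was legitimately
    rebalanced out of.
    """
    down = []
    for node in cluster_nodes:
        if any(node not in nodes for nodes in buckets_loaded_on.values()):
            down.append(node)
    return down

# Node "b" was rebalanced out of bucket1 before the rebalance was
# stopped, so only bucket2 still lists it; it gets flagged as down.
buckets = {"bucket1": {"a", "c"}, "bucket2": {"a", "b", "c"}}
print(down_nodes_buggy(["a", "b", "c"], buckets))  # -> ['b']
```

Once the rebalance stops (and no longer inhibits auto-failover), this flag is enough to trigger an unwanted automatic failover of a perfectly healthy node.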

Seen here: https://s3.amazonaws.com/bugdb/jira/web-log-largeCluster/
Comment by Karan Kumar (Inactive) [ 19/Jun/12 ]
Comment by Ketaki Gangal [ 21/Jun/12 ]
Tested on large-cluster build 927: no longer seeing auto-failover messages.
Closing this bug for now.
Comment by Thuan Nguyen [ 22/Jun/12 ]
Integrated in github-ns-server-2-0 #380 (See [http://qa.hq.northscale.net/job/github-ns-server-2-0/380/])
    MB-5602: consider buckets' servers list when computing down nodes (Revision 72b674c47e386dac5a28ecaadfea2f37c3d14133)

     Result = SUCCESS
Farshid Ghods :
Files :
* src/auto_failover.erl
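The commit message says the fix is to "consider buckets' servers list when computing down nodes". A hedged Python sketch of that idea (hypothetical names; the real change is in the Erlang module src/auto_failover.erl): a node only counts as down for a bucket if that bucket's configured servers list actually expects the node to serve it.

```python
# Sketch (hypothetical names) of the fixed down-node check.

def down_nodes_fixed(cluster_nodes, bucket_servers, buckets_loaded_on):
    """Return nodes considered 'down'.

    bucket_servers: bucket -> set of nodes that SHOULD serve it
                    (the bucket's servers list).
    buckets_loaded_on: bucket -> set of nodes actually serving it.

    A node is down only if some bucket expects it (per its servers
    list) but the node is not actually serving that bucket.
    """
    down = []
    for node in cluster_nodes:
        missing = any(
            node in bucket_servers[b] and node not in buckets_loaded_on[b]
            for b in bucket_servers
        )
        if missing:
            down.append(node)
    return down

# Node "b" was rebalanced out of bucket1, so bucket1's servers list
# no longer includes it; it is still healthy for bucket2, and is
# therefore NOT reported as down.
servers = {"bucket1": {"a", "c"}, "bucket2": {"a", "b", "c"}}
loaded  = {"bucket1": {"a", "c"}, "bucket2": {"a", "b", "c"}}
print(down_nodes_fixed(["a", "b", "c"], servers, loaded))  # -> []
```

With this check, a genuinely failed node (expected by a bucket's servers list but not serving it) is still detected, while a partially rebalanced-out node no longer triggers a spurious auto-failover.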
Generated at Wed Sep 17 03:30:39 CDT 2014 using JIRA 5.2.4#845-sha1:c9f4cc41abe72fb236945343a1f485c2c844dac9.