[MB-5602] Auto-failover fails over a node if some of the buckets have already been rebalanced out but the rebalance has been stopped or interrupted (auto-failover should fail over only if all buckets are down) Created: 18/Jun/12  Updated: 09/Jan/13  Resolved: 19/Jun/12

Status: Closed
Project: Couchbase Server
Component/s: ns_server
Affects Version/s: 1.8.1-release-candidate
Fix Version/s: 1.8.1
Security Level: Public

Type: Bug Priority: Blocker
Reporter: Ketaki Gangal Assignee: Aleksey Kondratenko
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: 18-node cluster, CentOS
Build 181-918
2 buckets, 1024 vbuckets


 Description   
Setup
1. Set up an 18-node cluster with 2 buckets: bucket1 and bucket2
2. Enable auto-failover
3. Add a new node 126
4. Rebalance

Output
1. Rebalance works fine, but these log messages are seen:

Could not automatically failover node 'ns_1@10.3.121.126' because I think rebalance is running auto_failover000 ns_1@10.3.2.104 19:32:12 - Sun Jun 17, 2012
Bucket "bucket1" loaded on node 'ns_1@10.3.121.126' in 0 seconds. ns_memcached001 ns_1@10.3.121.126 19:32:04 - Sun Jun 17, 2012
Started rebalancing bucket bucket2 ns_rebalancer000 ns_1@10.3.2.104 19:31:36 - Sun Jun 17, 2012
Starting rebalance, KeepNodes = ['ns_1@10.3.2.85','ns_1@10.3.2.86',
'ns_1@10.3.2.87','ns_1@10.3.2.88',
'ns_1@10.3.2.89','ns_1@10.3.2.104',
'ns_1@10.3.2.105','ns_1@10.3.2.106',
'ns_1@10.3.2.108','ns_1@10.3.2.109',
'ns_1@10.3.2.110','ns_1@10.3.2.111',
'ns_1@10.3.2.112','ns_1@10.3.2.113',
'ns_1@10.3.2.114','ns_1@10.3.2.115',
'ns_1@10.3.121.126'], EjectNodes = []

Attached are the web logs and the logs from the master node, .104.

https://s3.amazonaws.com/bugdb/jira/web-log-largeCluster/ns-diag-20120618095246.txt
https://s3.amazonaws.com/bugdb/jira/web-log-largeCluster/10.3.2.104-8091-diag.txt.gz


Other related conversation
I have enabled auto-failover on the large cluster, and every time I rebalance in a node I get an error message saying "Could not automatically failover node 'ns_1@10.3.121.126' because I think rebalance is running".
Node 126 is newly added and a rebalance has been issued; is this message displayed because the node is not yet ready to join the cluster?
The rebalance works fine, but I do not understand why auto-failover is attempted here. Any idea?

No. According to the logs, bucket1 was loaded at 19:32:04. Maybe some other buckets are still not ready on this node. May I have the logs?



 Comments   
Comment by Farshid Ghods (Inactive) [ 18/Jun/12 ]

This happens if you have 2 or more buckets, auto-failover is enabled, and both buckets hold a significant amount of data.

After the first bucket has been rebalanced out, the node will incorrectly be interpreted as down by the auto-failover service, because it no longer has all the buckets that the service (incorrectly) thinks it needs to have.

Normally a running rebalance prevents auto-failover from actually doing anything, but if the rebalance is stopped, the 'partially' rebalanced-out node will be automatically failed over.

Seen here: https://s3.amazonaws.com/bugdb/jira/web-log-largeCluster/10.3.2.104-8091-diag.txt.gz
Comment by Karan Kumar (Inactive) [ 19/Jun/12 ]
http://review.couchbase.org/#change,17372
Comment by Ketaki Gangal [ 21/Jun/12 ]
Tested on large-cluster build 927; no longer seeing the auto-failover messages.
Closing this bug for now.
Comment by Thuan Nguyen [ 22/Jun/12 ]
Integrated in github-ns-server-2-0 #380 (See [http://qa.hq.northscale.net/job/github-ns-server-2-0/380/])
    MB-5602: consider buckets' servers list when computing down nodes (Revision 72b674c47e386dac5a28ecaadfea2f37c3d14133)

     Result = SUCCESS
Farshid Ghods :
Files :
* src/auto_failover.erl
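
For reference, here is a minimal sketch of the idea in the commit message, not the actual src/auto_failover.erl patch: when deciding whether a node is down, only consider buckets whose servers list still contains that node, so a node that has already been rebalanced out of some buckets is not treated as missing them. The module name, function name, and data shapes below are illustrative assumptions.

-module(auto_failover_sketch).
-export([node_down/3]).

%% BucketConfigs is assumed to be [{BucketName, BucketConfig}] where
%% BucketConfig is a proplist carrying a {servers, Nodes} entry;
%% ReadyBuckets is the list of buckets currently reported up on Node.
node_down(Node, BucketConfigs, ReadyBuckets) ->
    %% Only buckets whose servers list includes this node count as "expected";
    %% without this filter, a node that was partially rebalanced out would
    %% look down as soon as any bucket had been moved off it.
    Expected = [Name || {Name, Config} <- BucketConfigs,
                        lists:member(Node, proplists:get_value(servers, Config, []))],
    %% The node is considered down only if some expected bucket is not ready.
    not lists:all(fun (Name) -> lists:member(Name, ReadyBuckets) end, Expected).

With a check of this shape, a node that still serves bucket2 but has already had bucket1 rebalanced off it is flagged as down only if bucket2 itself is unreachable.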