[MB-4461] replication cursor stuck from slave 1 to slave 2, hence high number of checkpoint items in slave 1 Created: 21/Nov/11  Updated: 31/Jan/14  Resolved: 26/Jan/12

Status: Resolved
Project: Couchbase Server
Component/s: couchbase-bucket
Affects Version/s: 1.7.2
Fix Version/s: 2.0-beta
Security Level: Public

Type: Bug Priority: Critical
Reporter: Tim Smith (Inactive) Assignee: Mike Wiederhold
Resolution: Fixed Votes: 0
Labels: customer
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Ubuntu 10.04.3 LTS x86_64, Membase 1.7.2, Amazon m1.large instances, 16-node cluster.


 Description   
Two out of sixteen nodes are ejecting active items because their mem_used is above the high water mark. The other nodes are well below. The customer says that keys are of various sizes, but the larger ones should be spread out randomly across the different nodes. The number of keys on all nodes is roughly equal.

The two problem nodes show ep_value_size much larger than a healthy node. However, looking at the sqlite data files, there's no significant difference in the size of the files on disk (as seen, for example, in */membase.log).

FYI, the rise in data size seems to have started on these two nodes after a different node, 10.254.7.150, stopped responding to REST and membase was restarted (with 'service membase-server restart').

The mbcollect_info data for these servers is in S3. The logs are named:

membase 16: a good node, for comparison
membase 07 and membase 14: the trouble nodes that are ejecting items due to large memory usage
membase 11: the node that was restarted on Saturday


Can someone please take a look at this, and help me understand why the ep_value_size might be bloating up for these two nodes?

Thanks,

Tim



 Comments   
Comment by Farshid Ghods (Inactive) [ 22/Nov/11 ]
Tim,

I will have a look at the diags tomorrow morning, but it would be helpful to get the following information from the customer:

1- number of items per node
2- mem_used on those nodes that are not ejecting active items
3- current state of the cluster after it flushed out the active items
Comment by Farshid Ghods (Inactive) [ 23/Nov/11 ]
farshid-2:Downloads farshid$ egrep -ir "vb_113" membase14-checkpointstats-201111231920.txt
 vb_113:last_closed_checkpoint_id: 1323
 vb_113:num_checkpoint_items: 373590
 vb_113:num_checkpoints: 264
 vb_113:num_items_for_persistence: 4
 vb_113:num_tap_cursors: 1
 vb_113:open_checkpoint_id: 1324
 vb_113:persisted_checkpoint_id: 1323
 vb_113:state: replica

It shows that this node did not close out the checkpoints for this vbucket, and hence we have 264 open checkpoints in memory.

The replication cursor seems to be stuck on the replica node.
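
For reference, one quick way to spot this condition across a whole node is to scan the per-vbucket checkpoint stats and flag any vbucket holding far more checkpoints than the usual two or three. The sketch below is only illustrative: it assumes mbstats lives at the 1.7.x path, that the "checkpoint" stats group prints the same "vb_N:stat: value" lines shown above, and the threshold of 5 and the host name are arbitrary examples.

import subprocess
from collections import defaultdict

def find_stuck_vbuckets(host_port, threshold=5):
    # Scan per-vbucket checkpoint stats and return vbuckets holding an
    # unusually large number of checkpoints in memory (vb_113 above had 264).
    out = subprocess.check_output(
        ["/opt/membase/bin/mbstats", host_port, "checkpoint"]).decode()
    vbuckets = defaultdict(dict)
    for line in out.splitlines():
        line = line.strip()
        if not line.startswith("vb_"):
            continue
        vb, stat, value = line.split(":", 2)   # e.g. "vb_113:num_checkpoints: 264"
        vbuckets[vb][stat] = value.strip()
    stuck = []
    for vb, stats in sorted(vbuckets.items()):
        if int(stats.get("num_checkpoints", 0)) > threshold:
            stuck.append((vb, stats.get("state"),
                          stats["num_checkpoints"],
                          stats.get("num_checkpoint_items")))
    return stuck

if __name__ == "__main__":
    # "membase14:11210" is a placeholder for one of the trouble nodes.
    for vb, state, n_ckpt, n_items in find_stuck_vbuckets("membase14:11210"):
        print("%s (%s): %s checkpoints, %s checkpoint items" % (vb, state, n_ckpt, n_items))

On a healthy node a scan like this should come back empty; a replica vbucket that keeps accumulating checkpoints while num_tap_cursors stays at 1 matches the stuck-cursor picture described above.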
Comment by Farshid Ghods (Inactive) [ 23/Nov/11 ]
from Chiyoung:

Keeping 2 checkpoints only applies to the master node. The slave node cannot create a new checkpoint for itself, but instead receives checkpoint_start (_end) messages from the master.

In this case, the cluster has two replicas (master A -> slave B -> slave C), and the replication cursor for C on B got stuck and didn't move forward.
Comment by Tim Smith (Inactive) [ 28/Nov/11 ]
For the record, this has shown up on 1.7.2, not 1.7.1 as stated in the description.

Tim
Comment by Dipti Borkar [ 16/Dec/11 ]
Post 1.8.0 Hot fix.
Comment by Perry Krug [ 20/Dec/11 ]
One question I had regarding this: is the cursor expected to be stuck completely, or will it eventually clear itself out? At the customer, we are seeing everything eventually resolve itself... I just want to make sure we're looking at the same issue.
Comment by Alexander Petrossian (PAF) [ 26/Jan/12 ]
Colleagues, let me thank you for a great product!

We're seeing something similar; was there any progress towards understanding this?
Our statistics show that the TAP cursors are really stuck:
$ /opt/membase/bin/mbstats 10.112.119.11:11210 tap GXH teligent|grep -E 10.112.119.12
..
 eq_tapq:replication_ns_1@10.112.119.12:total_backlog_size: 1879177
 eq_tapq:replication_ns_1@10.112.119.12:total_noops: 58613
..
$ ---few minutes passed---
$ /opt/membase/bin/mbstats 10.112.119.11:11210 tap GXH teligent|grep -E 10.112.119.12
..
 eq_tapq:replication_ns_1@10.112.119.12:total_backlog_size: 1882715
 eq_tapq:replication_ns_1@10.112.119.12:total_noops: 58644
..

Only these two statistical rows were changed.

Is there any way to prod replication?
Or any way to learn its state?
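
Short of a supported interface, one rough way to see whether the cursor is draining is to sample the tap stats twice and diff total_backlog_size for each replication queue: if the backlog only grows while total_noops ticks up, the cursor is most likely stuck rather than just slow. A minimal sketch, under the same assumptions about the mbstats path and output format as the commands above (the sleep interval is arbitrary):

import subprocess
import time

def backlog_sizes(host_port):
    # Collect total_backlog_size for every outgoing replication queue.
    # Add bucket name/password arguments here if the bucket is authenticated,
    # as in the command above.
    out = subprocess.check_output(
        ["/opt/membase/bin/mbstats", host_port, "tap"]).decode()
    sizes = {}
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("eq_tapq:") and ":total_backlog_size:" in line:
            name, _, value = line.rpartition(":")
            sizes[name] = int(value)
    return sizes

if __name__ == "__main__":
    host = "10.112.119.11:11210"   # the node whose outgoing replication looks stuck
    before = backlog_sizes(host)
    time.sleep(300)                # a few minutes, like the two samples above
    after = backlog_sizes(host)
    for name in sorted(after):
        old = before.get(name, 0)
        print("%s: %d -> %d (%+d)" % (name, old, after[name], after[name] - old))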
Comment by Chiyoung Seo [ 26/Jan/12 ]
Fixed in 1.8 release.
Comment by Alexander Petrossian (PAF) [ 27/Jan/12 ]
Thanks for the great news, Chiyoung!
One small Q:
We understand there is no way to learn the state of replication – it should "just work", and that was fixed, right?