Failing rebalance, followed by logout


I am currently testing Couchbase (4.5.0-2601 Community Edition) on my computer, running 4 Couchbase instances in Docker (4 CPUs, 5 GB RAM).

This is probably not a typical environment for Couchbase, but I just wanted to try out how Couchbase recovers from a failing node and what happens when a node is added back to the cluster.

Some of my observations sound partly like superstition to me, so I just wanted to mention them here; maybe some of you can give me pointers on what I might be doing wrong.

  • After a node failed and a failover completed successfully, I wanted to add a new node to the cluster. However, the rebalance failed whenever one of the buckets was completely empty.
  • It seemed like even light to medium load on the servers or the host machine was enough to cause cluster timeouts, and random servers would change from Up to Pending.
  • In many cases of a failed rebalance I was logged out of the Couchbase Web Console.

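For context, the failover / add-node / rebalance sequence I was testing can also be driven over the cluster manager's REST API on port 8091. This is only a sketch: the endpoints (`/controller/failOver`, `/controller/addNode`, `/controller/rebalance`) are from the Couchbase REST API, but the addresses, otpNode names, and credentials below are placeholders for my Docker setup.

```python
import base64
import urllib.parse
import urllib.request

BASE = "http://127.0.0.1:8091"  # cluster manager of any healthy node (placeholder)
AUTH = base64.b64encode(b"Administrator:password").decode()  # placeholder creds

def rebalance_payload(known_nodes, ejected_nodes=()):
    """Form body for POST /controller/rebalance; node names are
    otpNode identifiers such as 'ns_1@172.17.0.2'."""
    return {
        "knownNodes": ",".join(known_nodes),
        "ejectedNodes": ",".join(ejected_nodes),
    }

def post(path, fields):
    """POST an urlencoded form to the cluster manager."""
    req = urllib.request.Request(
        BASE + path,
        data=urllib.parse.urlencode(fields).encode(),
        headers={"Authorization": "Basic " + AUTH},
    )
    return urllib.request.urlopen(req)

# The sequence I was testing (commented out; needs a live cluster):
# post("/controller/failOver", {"otpNode": "ns_1@172.17.0.3"})
# post("/controller/addNode", {"hostname": "172.17.0.6",
#                              "user": "Administrator", "password": "password"})
# post("/controller/rebalance", rebalance_payload(
#     ["ns_1@172.17.0.2", "ns_1@172.17.0.4", "ns_1@172.17.0.6"],
#     ejected_nodes=["ns_1@172.17.0.3"]))
```

Scripting the sequence made it easier to reproduce the failing rebalance than clicking through the Web Console each time.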
Some other comments:

  • [RESOLVED] I had issues getting the SDK to connect to the cluster using non-default ports, but I think I just noticed that I was looking at an old version of the Couchbase documentation.
  • I also ran some tests against the cluster with a Python script writing a lot of data into the database via the REST API. The documentation says the REST API should not be used for that, but gives no explanation of why it is bad practice.
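My understanding of why REST writes are discouraged: every REST call goes through the cluster manager on port 8091, while the SDKs are cluster-aware and talk the binary protocol directly to the data node that owns each key, and can batch operations. A minimal sketch of a bulk load with the Python SDK's 2.x API (`Bucket`, `upsert_multi`, matching the Couchbase 4.5 era); the bucket URL and batch size are placeholders, and the SDK itself needs `pip install couchbase`:

```python
from itertools import islice

def chunked(mapping, size):
    """Split a dict of key -> document into batches of at most `size` items."""
    it = iter(mapping.items())
    while True:
        batch = dict(islice(it, size))
        if not batch:
            return
        yield batch

def bulk_load(bucket_url, docs, batch_size=1000):
    """Upsert documents in batches via the SDK instead of one REST call
    per document. `bucket_url` is e.g. 'couchbase://127.0.0.1/test'."""
    from couchbase.bucket import Bucket  # third-party SDK, imported lazily
    bucket = Bucket(bucket_url)
    for batch in chunked(docs, batch_size):
        bucket.upsert_multi(batch)
```

In my tests the script was the bottleneck either way, but batched SDK writes at least avoid one HTTP round trip per document.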

What I am going to do:

  • Try the SDK again using different ports (EDIT: I was indeed using the wrong port.)
  • Give the Docker system more resources (EDIT: The system performed a lot better with more resources; I used 6 CPUs and 10 GB of RAM.)

Have a great day