Error "buckets_cleanup_failed" when attempting to rebalance

I have a single-node cluster in a test environment running Couchbase 4.0.0-4051 Community Edition (build-4051). I am attempting to add a second node (4.1.0-5005 Community Edition (build-5005)) to the cluster. When doing so, the rebalance fails with the following error:

Rebalance exited with reason {buckets_cleanup_failed, ['ns_1@couchbase1.int.internal-hostname.com']}

I’ve tried several times and (not surprisingly) get the same result each time. Below is the full history of log events from a rebalance attempt. Any ideas what’s going on? Thanks in advance for any advice or guidance.

Rebalance exited with reason {buckets_cleanup_failed, ['ns_1@couchbase1.int.internal-hostname.com']}
ns_orchestrator002
ns_1@couchbase2.int.internal-hostname.com
20:55:19 - Mon Oct 31, 2016  

Failed to cleanup old buckets on node 'ns_1@couchbase1.int.internal-hostname.com': {error, eacces} 
ns_rebalancer000 
ns_1@couchbase2.int.internal-hostname.com
20:55:19 - Mon Oct 31, 2016  

Starting rebalance, KeepNodes = ['ns_1@couchbase1.int.internal-hostname.com', 'ns_1@couchbase2.int.internal-hostname.com'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes
ns_orchestrator004
ns_1@couchbase2.int.internal-hostname.com
20:55:19 - Mon Oct 31, 2016  

Current master is older and I'll try to takeover (repeated 1 times)
mb_master000
ns_1@couchbase2.int.internal-hostname.com
20:54:58 - Mon Oct 31, 2016  

Haven't heard from a higher priority node or a master, so I'm taking over.
mb_master000
ns_1@couchbase2.int.internal-hostname.com
20:54:08 - Mon Oct 31, 2016  

Node ns_1@couchbase2.int.internal-hostname.com joined cluster
ns_cluster003
ns_1@couchbase2.int.internal-hostname.com
20:53:59 - Mon Oct 31, 2016  

Couchbase Server has started on web port 8091 on node 'ns_1@couchbase2.int.internal-hostname.com'. Version: "4.1.0-5005-community".
menelaus_sup001
ns_1@couchbase2.int.internal-hostname.com
20:53:58 - Mon Oct 31, 2016  

Current master is older and I'll try to takeover
mb_master000
ns_1@couchbase2.int.internal-hostname.com
20:53:58 - Mon Oct 31, 2016  

Deleting old data files of bucket "backups" (repeated 1 times)
ns_storage_conf000
ns_1@couchbase1.int.internal-hostname.com
20:48:03 - Mon Oct 31, 2016  

Unable to rm -rf bucket database directory backups {error,eacces} (repeated 1 times)
ns_couchdb_api000
ns_1@couchbase1.int.internal-hostname.com
20:48:03 - Mon Oct 31, 2016  

Unable to rm -rf bucket database directory backups {error,eacces}
ns_couchdb_api000
ns_1@couchbase1.int.internal-hostname.com
20:47:36 - Mon Oct 31, 2016  

Deleting old data files of bucket "backups"
ns_storage_conf000
ns_1@couchbase1.int.internal-hostname.com
20:47:36 - Mon Oct 31, 2016  

Deleting old data files of bucket "backups" (repeated 1 times)
ns_storage_conf000
ns_1@couchbase1.int.internal-hostname.com
20:47:03 - Mon Oct 31, 2016  

Unable to rm -rf bucket database directory backups {error,eacces} (repeated 1 times)
ns_couchdb_api000
ns_1@couchbase1.int.internal-hostname.com
20:47:03 - Mon Oct 31, 2016  

Unable to rm -rf bucket database directory backups {error,eacces}
ns_couchdb_api000
ns_1@couchbase1.int.internal-hostname.com
20:46:04 - Mon Oct 31, 2016

Deleting old data files of bucket "backups"
ns_storage_conf000
ns_1@couchbase1.int.internal-hostname.com
20:46:04 - Mon Oct 31, 2016  

Node 'ns_1@couchbase1.int.internal-hostname.com' saw that node 'ns_1@couchbase2.int.internal-hostname.com' came up. Tags: []
ns_node_disco004
ns_1@couchbase1.int.internal-hostname.com
20:44:42 - Mon Oct 31, 2016  

Started node add transaction by adding node 'ns_1@couchbase2.int.internal-hostname.com' to nodes_wanted (group: undefined)
ns_cluster000
ns_1@couchbase1.int.internal-hostname.com
20:44:40 - Mon Oct 31, 2016

Update: in the end this was just a system configuration issue. We were storing our database backups in a folder inside the directory Couchbase uses to store bucket data, so the cleanup step's rm -rf hit files it did not have permission to delete, which is exactly the {error, eacces} (permission denied) in the log. We moved the backups folder to a different location, the problem was resolved, and the rebalance completed successfully.
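
For anyone who hits the same error, here is a sketch of what the fix looked like on our side, assuming the default Linux data path (/opt/couchbase/var/lib/couchbase/data); adjust the paths to your installation, and the destination directory is just an example:

    # Couchbase stores bucket data under its data path; our backups
    # lived in a "backups" directory inside it, which the bucket
    # cleanup step tried (and failed) to rm -rf.
    DATA_DIR=/opt/couchbase/var/lib/couchbase/data
    ls -la "$DATA_DIR"

    # Move the backups out of the Couchbase data path entirely
    # (any location outside $DATA_DIR works)
    sudo mkdir -p /var/backups/couchbase
    sudo mv "$DATA_DIR/backups" /var/backups/couchbase/

After moving the folder, retrying the rebalance from the web console on port 8091 completed without errors.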