Data Transfer out of Couchbase Server really slow?

Hi guys,

I’m having a strange problem with the speed of data transfer out of Couchbase.
I’ve tried various different configurations but here is my current setup:

5-Node cluster on EC2.
All Ubuntu 13.10 with 8 GB of RAM each.
Approx. 4 million documents in the bucket (not the default bucket).
A View (map/reduce) which returns approx. 800,000 records.

When I query the View with reduce=true I get the ‘count’, which is approx. 800,000 - this is returned instantly. So this tells me that the View itself is executing extremely quickly.

When I query the View with reduce=false I get each of the 800,000 rows. I’ve done the calculations and the data transfer amounts to around 60 MB of data.
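For reference, here is roughly how I am building the two queries against the view REST endpoint (port 8092 is the standard view port; the host, bucket, design-doc, and view names below are placeholders, not my real ones):

```python
from urllib.parse import urlencode

def view_url(host, bucket, ddoc, view, **params):
    """Build a Couchbase view REST URL (views are served on port 8092)."""
    # Booleans must be lowercased ("true"/"false") in the query string.
    qs = urlencode({k: (str(v).lower() if isinstance(v, bool) else v)
                    for k, v in params.items()})
    return f"http://{host}:8092/{bucket}/_design/{ddoc}/_view/{view}?{qs}"

# reduce=true returns only the aggregated count, served from the index.
count_url = view_url("node1.example.com", "mybucket", "dd", "byfield", reduce=True)
# reduce=false streams all ~800,000 rows, merged across the nodes.
rows_url = view_url("node1.example.com", "mybucket", "dd", "byfield", reduce=False)
```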


I’ve installed iperf on all machines and measured the network transfer rate at anywhere from 100 MB/sec to 1,000 MB/sec depending on which servers I’m using. That is megabytes per second, not megabits. So transferring 60 MB of data should not be a problem, but it is taking about 30 seconds.

This is completely killing my use case.

I’ve independently (not using iperf) confirmed that transferring a 60 MB file takes less than a second, but when I query the View, which returns 60 MB worth of data, it takes 30 seconds, and the CPU usage of the node in question sits at about 60% for the full 30 seconds.

What’s going on?
Your help would be greatly appreciated; I’m completely stuck on this.


Hi guys,

Any insight on this one?

Really trying to get to the bottom of it. In the last two days I tried a similar (but not identical) setup on a different (non-AWS) cloud provider, and the transfer of the results still takes tens of seconds for only 60 MB of data.



Let me answer in two steps:

1- Generic view processing

I do not necessarily see this as a “product issue”; it is like running a large query on a relational database. Query performance in Couchbase depends on many things:

  • how the data is accessed: the query itself, loading data from disk, caching, …
  • the network is a SMALL part of this

When you try to return a large dataset from a Couchbase view, the index/result comes from disk and is cached in the OS page cache, and it is not possible to cache the whole thing. The system then has to merge the results from all the nodes and sort them.

All of these operations take a lot of time.
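As a toy illustration (not the actual server code) of why that merge step is expensive: each node returns its view rows already sorted by key, and the coordinating node must merge-sort them into one stream before anything reaches the client. Sketched in Python:

```python
import heapq

# Hypothetical per-node result sets, each already sorted by emitted key.
node_a = [("apple", "doc::1"), ("cherry", "doc::7")]
node_b = [("banana", "doc::3"), ("date", "doc::9")]
node_c = [("apricot", "doc::2")]

# heapq.merge lazily merge-sorts the pre-sorted streams into one.
merged = list(heapq.merge(node_a, node_b, node_c))
# With ~800,000 rows, this merging (plus the disk reads behind it)
# dominates the response time, not the network transfer.
```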

2- Your use case
What do you emit in your view/index?
Make sure you do not emit a lot of data: emit only what is necessary for your query to return the data your application really needs.
What I mean by that is: if you want to return the document itself, it is better to emit only the id (which is already part of the index) and do a get (or multi-get) from your application; in that case the documents will be served from the Couchbase cache (memcached).
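A minimal sketch of that pattern; the field name, map function, and row layout below are illustrative, not taken from your setup:

```python
import json

# Hypothetical design document: the map function emits only the field used
# for filtering, with a null value. The document id is included in every
# view row automatically, so nothing else needs to be emitted.
design_doc = {
    "views": {
        "by_field": {
            "map": "function (doc, meta) { emit(doc.field, null); }"
        }
    }
}
payload = json.dumps(design_doc)  # body for a PUT to /bucket/_design/ddoc

def ids_from_view_rows(rows):
    """Collect document ids from view rows; fetch the documents themselves
    with a get/multi-get so they come from the managed RAM cache."""
    return [row["id"] for row in rows]

# e.g. rows as returned by a reduce=false query:
rows = [{"id": "doc::1", "key": "abc", "value": None},
        {"id": "doc::2", "key": "abc", "value": None}]
```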


Hi Tug,

Thanks for getting back to me.
About the generic view processing… does Couchbase not have to do all the same work when I call the View with reduce=true? When I call the exact same View with the reduce function set to count, it returns extremely quickly. That’s why I thought the View itself was fast but something was going wrong with the data transfer.

Regarding my use case…
In the map function I’m doing approx. four emits per document, with each emit containing just one document field in the key and generally null in the value. It amounts to approx. 80 bytes per emit. In the query I specify key= to filter the results so that only one of the per-document emits is returned in the result. So 800k documents give about 60 MB of data in the result.
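Sanity-checking those numbers:

```python
rows = 800_000        # rows left in the result after key= filtering
bytes_per_row = 80    # approx. size of one emitted row
total_mb = rows * bytes_per_row / 1024 / 1024
# ~61 MB, which matches the ~60 MB result size I measured
```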
I originally planned to use multi-get but had to move away from it because I ran into a similar issue: the multi-get call was taking just as long to return the data, even with the entire dataset held in RAM.

Maybe my understanding is wrong. I know Couchbase is being used in much larger scenarios than mine - is it typical that a request for 800,000 documents (with 60 MB of data) should take 20-30 seconds?

Thanks for all your help.

@James: In the case of reduce=true, if the documents haven’t changed then the result is cached in the index. To quote from the manual:

The reduce function also has a final additional benefit. The results of the computed reduction are stored in the index along with the rest of the view information. This means that when accessing a view with the reduce function enabled, the information comes directly from the index content. This results in a very low impact on the Couchbase Server to the query (the value is not computed at runtime), and results in very fast query times, even when accessing information based on a range-based query.

As Tug pointed out, it is best to minimise the size of the emitted view data: indexes always come from disk (ignoring the OS filesystem caching for a minute), whereas gets on the Couchbase bucket are explicitly cached in RAM across the whole cluster.

One final thing to look at would be your view fragmentation % - if that is high it could slow down queries on the view data. Link to the relevant section in the manual:

Thanks very much for all the info guys. That makes more sense now.
Will have to rethink my use-case to see if I can make it work. Thanks again for the detailed responses - I understand the inner workings a bit better now.