Object too large for cache

Hi there,

We’re trying to use Couchbase as a caching layer for our image storage. The stack is nginx + Lua + the memc module.
We’re able to set and get images via Couchbase.
After looking at the logs, it turned out we can’t set a key/value larger than 1 MB; we get this error:
SERVER ERROR object too large for cache

But the Couchbase limits page says the limit is 20 MB. What are we missing here?

Kurt

  • The bucket type is couchbase, not memcache
  • There is 3 GB of free RAM on the Couchbase node; we actually flushed everything and are just trying to set one file of 1.1 MB.
  • An image of 0.96 MB succeeded.
  • The method we’re using is memc:set(url, img.body)
  • The version installed: couchbase-server-enterprise-3.0.2-centos6.x86_64.rpm

Just installed and tried with couchbase-server-community-3.0.1-centos6.x86_64.rpm.
Still the same error: "SERVER_ERROR object too large for cache"
I caught this error via tcpdump, by the way; the Lua client only outputs “nil”.

Hi,

Firstly, let me confirm that your understanding is correct: Couchbase is configured to store items up to 20 MB in size. We can run a simple test to determine whether your Couchbase cluster is behaving correctly, using one of our standard load-generator tools, pillowfight.

With the following commands I attempt to store a single item of ~10 MB, ~20 MB and ~30 MB in Couchbase.

$ cbc-pillowfight  -m 10000000 -M 10000000 -t 1 -I 1  -U couchbase://192.168.105.101/charlie -B 1 -p BIG_ITEM_1 -c1
Running. Press Ctrl-C to terminate...
Thread 0 has finished populating.

$ cbc-pillowfight  -m 20000000 -M 20000000 -t 1 -I 1  -U couchbase://192.168.105.101/charlie -B 1 -p BIG_ITEM_2_ -c1
Running. Press Ctrl-C to terminate...
Thread 0 has finished populating.

$ cbc-pillowfight  -m 30000000 -M 30000000 -t 1 -I 1  -U couchbase://192.168.105.101/charlie -B 1 -p BIG_ITEM_3_ -c1
Running. Press Ctrl-C to terminate...
Thread 0 has finished populating.
Operation(s) failed: [0x4] The object requested is too big to store in the server

Each command stores a single document of the specified size in my bucket named “charlie”. The cbc-pillowfight tool is installed as part of the libcouchbase installation.

Note that the first two succeed whilst the third write fails. Running these commands against your cluster, you should see the same behaviour. If not, we can take a closer look at the server configuration to understand why it might be giving that response.
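The arithmetic behind that boundary: the 20 MB cap works out to 20 × 1024 × 1024 = 20,971,520 bytes, so the 20,000,000-byte item squeaks under it while 30,000,000 does not. A quick check:

```python
# Couchbase's per-item cap is 20 MB, i.e. 20 * 1024 * 1024 bytes.
CAP = 20 * 1024 * 1024  # 20,971,520

for size in (10_000_000, 20_000_000, 30_000_000):
    verdict = "ok" if size <= CAP else "too big"
    print(f"{size:>10} bytes: {verdict}")
```
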

Hi there,

On a new server, we have installed:
libcouchbase2-core-2.4.8-1.el6.x86_64
couchbase-server-3.0.2-1603.x86_64
libcouchbase2-bin-2.4.8-1.el6.x86_64

We logged in, created the Couchbase instance with the default bucket, and the command works.
Here’s the output:

# cbc-pillowfight -v -U couchbase://localhost/default -m 3145728 -M 3145728 -c 100 -I 50 
Running. Press Ctrl-C to terminate...
0ms [I0] {10750} [INFO] (instance - L:372) Version=2.4.8, Changeset=dfa7ae51b3150f45b9a8846f886123bc8296d9c5
0ms [I0] {10750} [INFO] (instance - L:373) Effective connection string: couchbase://localhost/default?console_log_level=2&. Bucket=default
0ms [I0] {10750} [INFO] (cccp - L:118) Requesting connection to node localhost:11210 for CCCP configuration

Is there any way to test moxi, as our Lua client is not memcapable?

I’ll look for a capable client in parallel.

Best

Here’s what we do, in a basic, snipped-down form:

local mykey = ngx.var.uri

local memcached = require "resty.memcached"
local memc, err = memcached:new()
local ok, conn_err = memc:connect("xxx", 11211)
local get_res, get_err = memc:get(mykey)

if get_res then
   -- show img
else
   -- get img via the upstream proxy
   -- show img
   -- 259200 s = 3-day expiry; this is the set that fails for values over 1 MB
   local set_ok, set_err = memc:set(mykey, img_res.body, 259200)
end

My colleague found kolchanov/Lua-couchbase on GitHub; we’re looking at it, but we don’t know whether it will solve the case.

If pillowfight can successfully store large enough documents to Couchbase then that certainly suggests the problem is on the client end. It would be worth trying the lua-couchbase repo to see if your experience differs.

We suspect the very same thing, but on the other hand we receive the “SERVER_ERROR object too large for cache” from the server, right after the set command. Hmm…

Can we try a different SDK to see if it can be reproduced? Something like Node or Python?
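For instance, a minimal Python sketch (assuming the 2.x-era `couchbase` package and its `Bucket` API; the connection string and document key are placeholders). Talking the native protocol rather than the 11211 memcached port would bypass moxi entirely:

```python
def make_payload(megabytes: int) -> bytes:
    # Build a test value of roughly the given size (decimal megabytes).
    return b"x" * (megabytes * 1_000_000)

def store_big_item(connstr="couchbase://localhost/default"):
    # Assumed 2.x Python SDK API; import is deferred so the payload helper
    # above stays usable even without the SDK installed.
    from couchbase.bucket import Bucket
    cb = Bucket(connstr)                    # placeholder connection string
    cb.upsert("big-item", make_payload(3))  # ~3 MB: over moxi's cap, under 20 MB
```

If the upsert succeeds here but the same value fails through the Lua/memc path, that isolates the problem to the proxy layer rather than the server.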

That Lua client is using the server-side proxy (moxi) for legacy memcached clients, and hits its 1 MB limit on value size.

To reproduce it in Ruby, run:

$ gem install 'dalli'
$ ruby <<EOF
require 'rubygems'
require 'dalli'
c = Dalli::Client.new('localhost:11211')
three_megabytes = 'x' * 3_000_000
c.set('foo', three_megabytes)
EOF

It will output:

W, [2015-04-06T02:04:07.859526 #26808]  WARN -- : Value for foo over max size: 1048576 <= 3000007
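That 1048576 is exactly 1 MiB, the classic memcached per-item cap, and it matches the original report: the 0.96 MB image fits under it while the 1.1 MB one does not. A quick check:

```python
MIB = 1024 * 1024        # 1,048,576: the limit in dalli's warning

ok_size = 960_000        # the ~0.96 MB image that succeeded
bad_size = 1_100_000     # the ~1.1 MB image that failed

print(ok_size <= MIB)    # fits under the memcached cap
print(bad_size <= MIB)   # does not fit
```
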

Yes @avsej, after the weekend I’m now 100% sure that the Lua memc library is using the legacy methods.

@owen_hughes I tried Node.js and it works, but we’re not 100% familiar with Node.js itself, and the other languages mentioned in the SDK docs would create blocking I/O and therefore more issues. Simple nginx + Lua would be the best and simplest way to handle a lot of traffic. I found this: http://labs.couchbase.com/couchbase-nginx-module but it has very little information about how to set and get binary files. I know I should use the body for $couchbase_val, but it’s not accepting it. Any idea how I can reach the repo holder?
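For what it’s worth, the module’s example-style usage looks roughly like the following nginx config sketch. The directive names are taken from the module’s examples and are not verified against this build, and note that the value arrives as a query argument rather than a request body, which is exactly the part that makes binary files awkward:

```nginx
# Sketch after the couchbase-nginx-module examples; directives unverified.
location /cache {
    set $couchbase_key $arg_key;   # document key, here from ?key=...
    set $couchbase_cmd $arg_cmd;   # operation, e.g. get or set
    set $couchbase_val $arg_val;   # value for set: a query arg, not a binary body
    couchbase_pass localhost:8091 bucket=default;
}
```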

The repo holder is Sergey, who answered you above. Does this nginx module use moxi too? If so, I think you will hit the same issue.

@avsej can you advise on whether your nginx module will help?

Hi there,

I tried the client-side moxi, and with moxi-cluster.cfg I’m now able to set files larger than 1 MB. Is there a way to configure the built-in moxi to set the parameter url=http://HOSTNAME:8091/pools/default/bucketsStreaming/default ?

Best,
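For reference, the working client-side moxi setup described above presumably looks something like this moxi-cluster.cfg; HOSTNAME is a placeholder, and the port_listen key is an assumption based on moxi’s -Z option conventions, so check it against your moxi version:

```
# moxi-cluster.cfg for a client-side moxi (keys unverified)
url = http://HOSTNAME:8091/pools/default/bucketsStreaming/default
port_listen = 11211
```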

This is a regression from 2.5.x… I’m currently working on finding the root cause and building a patch.

I’ve uploaded a patch for review and it’ll be fixed in the next release. I don’t know if we’ll issue a patch for existing releases. Please connect through your support channels if you need one.