Hard Out of Memory error
I am experiencing this error:
Hard Out Of Memory Error. Bucket "***" on node *.*.*.* is full. All memory allocated to this bucket is used for metadata.
The bucket in question is 2.1G across 3 servers. I had run into the error previously with the bucket set to 1.5G across the cluster. As a stopgap, I have increased the size once again... this time to 2.4G, but obviously I need to figure out what is going on. I suspect the last part about metadata is inaccurate, since the vbucket reporting shows only about 17M of metadata and 6M of user data per instance.
Memory usage for the bucket is 1.75G, with a 1.88G high water mark. However, disk usage for the bucket only shows 233MB. For my other bucket (default), the disk usage exceeds the RAM usage, as I would expect, since old data is persisted to disk and ejected from RAM on an LRU basis. Persistence is enabled for this bucket too, but clearly something else is causing the differing behavior between it and my default bucket. My suspicion is that something is preventing this bucket from flushing out to disk, effectively capping it at the amount of RAM I have.
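For what it's worth, a quick back-of-the-envelope check (plain Python, using the figures reported above) shows that the reported metadata and user data account for only a tiny fraction of the bucket's memory usage, which is why I doubt the "all memory is used for metadata" claim:

```python
# Sanity check: do the reported per-node metadata and user data
# figures come anywhere near the bucket's total memory usage?
# All numbers are taken from the vbucket reporting quoted above.

NODES = 3
META_PER_NODE = 17 * 1024**2   # ~17M metadata per instance
DATA_PER_NODE = 6 * 1024**2    # ~6M user data per instance
MEM_USED = 1.75 * 1024**3      # 1.75G memory usage for the bucket

accounted = NODES * (META_PER_NODE + DATA_PER_NODE)
fraction = accounted / MEM_USED

print(f"accounted for: {accounted / 1024**2:.0f}M "
      f"({fraction:.1%} of memory usage)")
# roughly 69M, i.e. under 4% of the 1.75G in use
```

So well over 1.6G of the bucket's memory usage is unexplained by the metadata and data figures the stats report.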
Does anyone have suggestions on where I might find settings related to this, or other diagnostics I could use to get more information about the problem?