Translating Kubernetes Memory Limits to Couchbase

When you set a memory limit on a pod, Kubernetes passes the limit to the container runtime as the --memory flag on the docker run command: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run
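
For reference, this is roughly the kind of limit we're setting (pod name, image tag, and sizes below are just illustrative placeholders, not our real config):

# A 4Gi limit here is what the kubelet hands to the container runtime,
# equivalent to `docker run --memory=4g` on Docker-based nodes.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cb-limit-demo
spec:
  containers:
  - name: couchbase
    image: couchbase/server:5.5.2
    resources:
      requests:
        memory: "4Gi"
      limits:
        memory: "4Gi"
EOF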

Does Couchbase Server take this into account? Experimentally, it appears not to: the pod gets killed when it overruns the limit. Are you using limits with the Operator?

top shows the full memory of the underlying VM, which AKS doesn’t even let us fully utilize, and which is much higher than the limit we’re trying to set.

Hey Dylan,

Yes, Couchbase takes your memory limits into account. top is not a reliable way to verify this; instead, try:
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
In container environments, Couchbase will use this value to set its server quota. Note that this only restricts the quotas for the data held by your services (data, index, …), which means it’s possible for system processes running in the container to push you over the pod limit, especially if your Couchbase quotas are set very close to your pod limits. I suggest increasing your pod limits or adding more memory headroom here.
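
If you want to sanity-check this yourself, one way is to compare the cgroup limit the container sees against the quotas Couchbase has actually configured, for example (pod name and admin credentials below are placeholders for your own):

# What the container is actually allowed to use, in bytes:
kubectl exec cb-example-0000 -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes

# What Couchbase has configured as its data and index quotas, in MB
# (jq runs locally, outside the pod):
kubectl exec cb-example-0000 -- curl -s -u Administrator:password \
  http://127.0.0.1:8091/pools/default | jq '{memoryQuota, indexMemoryQuota}'

The sum of those quotas should sit comfortably below the pod limit so the cluster manager and other system processes have room to run.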

If you’re still having issues, we’d need more insight into your limits and quotas, along with an idea of cluster load, to better understand your use case. The sizing guidelines may be of some help here: https://docs.couchbase.com/server/5.5/install/sizing-general.html

Thanks for trying out the operator, and hope this helps!

Thank you for your response. Reading between the lines, it sounds like there isn’t a way to definitively compute a reliable limit value?