Kafka Connect Couchbase connector: 4.1.6
Total pods: 5
Tasks: 2 per pod
Scale policy: max 5, min 3
The issue is that a CBKC pod gets scaled down when it is not consuming enough resources. When that pod is brought down, memory usage on the remaining active pods suddenly shoots up (presumably because the tasks from the removed worker are rebalanced onto the remaining pods). The active pods are then shut down with OOM errors, and we have to restart them.
When we set the same value for min and max, we do not face this error, but is that the recommendation for CBKC?
Currently we set the min and max values as per the client's policy.
Setting the min too small is what is causing the OOM, so don't do that. If the client is bent on having different values for min and max, then set min=5 and max=6.
We have tried that: we set min to 5 and max to 7 and still faced the same issue. Whenever min and max have different values and the pods scale up to max, we hit the same problem. But when min and max are the same, even at 5 pods, we don't face that issue.
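For reference, a minimal sketch of the kind of HPA spec we end up with when min and max are pinned to the same value (the configuration that has avoided the OOMs for us). The resource names, namespace, and metric target here are placeholders, not our actual settings:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cbkc-connect        # hypothetical HPA name
  namespace: kafka          # hypothetical namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cbkc-connect      # the Kafka Connect worker deployment
  minReplicas: 5            # pinning min == max disables downscaling,
  maxReplicas: 5            # so no rebalance-driven memory spike on the surviving pods
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # placeholder; effectively irrelevant once min == max
```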