Too many open files - Couchbase

Hi,
Why are we getting this error while executing a N1QL query in our CB cluster on K8s?

File "/opt/conda/lib/python3.7/site-packages/couchbase/n1ql.py", line 525, in __iter__
for row in self._process_payload(raw_rows):
File "/opt/conda/lib/python3.7/site-packages/couchbase/n1ql.py", line 480, in _process_payload
self._handle_meta(self.raw.value)
File "/opt/conda/lib/python3.7/site-packages/couchbase/n1ql.py", line 473, in _handle_meta
raise N1QLError.pyexc('N1QL Execution failed', err)
couchbase.n1ql.N1QLError: <N1QL Execution failed, OBJ={'code': 5000, 'msg': 'open /opt/couchbase/var/lib/couchbase/tmp/scan-results853086564958: too many open files - cause: open /opt/couchbase/var/lib/couchbase/tmp/scan-results853086564958: too many open files'}>

I have noticed this on two of my cluster nodes (where the scan-results files are generated under /opt/couchbase/var/lib/couchbase/tmp/), and the config below is what I see on those 2 of the 4 nodes. I haven't touched it since cluster setup, so I wonder why this ulimit differs across nodes; any pointers? The other 2 nodes show open files = 1048576, while the 2 problem nodes show 1024.

I have tried to hard-set it to align that number with the other 2 nodes, but the change is not taking effect.
I followed: https://gist.github.com/luckydev/b2a6ebe793aeacf50ff15331fb3b519d

root@adc-tccluster-0002:/# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1030122
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1030122
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
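One thing worth checking: `ulimit -a` in an exec'd shell only reports that shell's own limits, not the limits the Couchbase server process inherited when the container started. A minimal sketch for inspecting the running process directly (assuming the Erlang VM process is named `beam.smp`, as it normally is for Couchbase Server; adjust the `pgrep` pattern if yours differs):

```shell
# Print the open-files limit the running Couchbase process actually has.
# pgrep -o picks the oldest matching PID; if no beam.smp process is found,
# fall back to /proc/self so the command still shows something meaningful.
CB_PID=$(pgrep -o beam.smp)
grep 'Max open files' "/proc/${CB_PID:-self}/limits"
```

Also note that in containers, per-process limits are inherited from the container runtime's defaults rather than from /etc/security/limits.conf (which only applies to PAM login sessions), so editing that file inside the pod, as the gist suggests, typically won't change anything; the runtime or node-level configuration usually has to be adjusted instead.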