HttpStreamingConfigListener throwing > 900 exceptions per second

Hello,

I'm getting this exception thrown more than 900 times/sec, creating tons of logs:

[otr_dev9] 2024-02-08 11:14:52,988 ERROR 88 Ot.Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener  HTTP Streaming error. Response status code does not indicate success: 404 (Object Not Found).    at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
   at Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener.<<StartBackgroundTask>b__18_0>d.MoveNext()
System.Net.Http.HttpRequestException: Response status code does not indicate success: 404 (Object Not Found).
   at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
   at Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener.<<StartBackgroundTask>b__18_0>d.MoveNext()

Source code:

The EnsureSuccessStatusCode() call fails, but it does not reveal what caused the HTTP request failure. (By the way, it would be best to log the content of the response; see: Why we should not use EnsureSuccessStatusCode()? · #TIL (ketanjawahire.github.io))

404 is Object Not Found. Probably the specified bucket does not exist.

       _streamingUriPath = "/pools/default/bs/" + _configSubscriber.Name;

The SDK will keep retrying until timeout - in hopes that the bucket will be created.
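To illustrate the point about losing the response body: a minimal C# sketch (not the SDK's actual code; names and structure are illustrative) of reading the content before failing, so a 404 like this one is diagnosable from the log alone:

```csharp
// Sketch only: surface the response body on failure instead of calling
// EnsureSuccessStatusCode(), which throws without including the body.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ConfigFetchSketch
{
    static async Task<string> FetchConfigAsync(HttpClient client, Uri streamingUri)
    {
        using var response = await client.GetAsync(streamingUri);
        var body = await response.Content.ReadAsStringAsync();
        if (!response.IsSuccessStatusCode)
        {
            // Include status code, reason phrase, and body (e.g.
            // "Requested resource not found.") in the exception message.
            throw new HttpRequestException(
                $"Config request to {streamingUri} failed: " +
                $"{(int)response.StatusCode} {response.ReasonPhrase} - {body}");
        }
        return body;
    }
}
```

For a long-lived streaming endpoint the real listener reads the response incrementally rather than buffering it, but the error-handling idea is the same.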

The link should be: couchbase-net-client/src/Couchbase/Core/Configuration/Server/Streaming/HttpStreamingConfigListener.cs at master · couchbase/couchbase-net-client · GitHub

Hi @julien.amaru -

Thanks for posting this! I agree that we should not be calling EnsureSuccessStatusCode here. I created a Jira ticket for a fix, which should land in the next release or so. I also identified and curated a list of other services that call EnsureSuccessStatusCode so that they can be improved as well.

Jeff

1 Like

Hi mreiche,
Thanks for your answer.
Though, I really doubt that the specified bucket does not exist.
If we are talking about the bucket specified during the Couchbase connection, it was created well before the connection attempt. Moreover, I have different instances connected to the same bucket, and not all of them have the same issue at the same time.

Lastly, if I do the experiment of connecting to a nonexistent bucket, I get a different error:

2024-02-13 10:22:07,146 INFO 10 Ot.Couchbase.Core.ClusterContext  Bootstrapping: cannot bootstrap bucket ot_cibml_ppr2.
Couchbase.AuthenticationFailureException: Either the bucket is not present, the user does not have the right privileges to access it, or the bucket is hibernated: Eaccess

You can use SDK Doctor to diagnose SDK connection issues. SDK Doctor | Couchbase Docs

You can also try curl -v -u user:password on that URL to see what the server returns

The URL in my use case has this format: “http://cmxd4o04.dev.cm.par.emea.cib:8091/pools/default/bs/CLUSTER”

When I try curl -v -u user:password on that URL I get the error:

Requested resource not found.

The last part of the path needs to be the bucket name (not CLUSTER, unless the bucket name is CLUSTER). It would be helpful to post your code, including the values of the arguments. It is also helpful if you show the complete output, which should look like this (for a bucket that does not exist):

% curl -v -u Administrator:password http://localhost:8091/pools/default/bs/hafusx
* Trying 127.0.0.1:8091...
* Connected to localhost (127.0.0.1) port 8091 (#0)
* Server auth using Basic with user 'Administrator'
> GET /pools/default/bs/hafusx HTTP/1.1
> Host: localhost:8091
> Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==
> User-Agent: curl/7.87.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Object Not Found
< Cache-Control: no-cache,no-store,must-revalidate
< Content-Length: 31
< Content-Type: text/plain
< Date: Tue, 13 Feb 2024 17:00:03 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Pragma: no-cache
< Server: Couchbase Server
< X-Content-Type-Options: nosniff
< X-Frame-Options: DENY
< X-Permitted-Cross-Domain-Policies: none
< X-XSS-Protection: 1; mode=block
<
Requested resource not found.
* Connection #0 to host localhost left intact

For a bucket that does exist, and that the user has access to, the output should be:
% curl -v -u Administrator:password http://localhost:8091/pools/default/bs/hafus
* Trying 127.0.0.1:8091...
* Connected to localhost (127.0.0.1) port 8091 (#0)
* Server auth using Basic with user 'Administrator'
> GET /pools/default/bs/hafus HTTP/1.1
> Host: localhost:8091
> Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==
> User-Agent: curl/7.87.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Cache-Control: no-cache,no-store,must-revalidate
< Content-Type: application/json; charset=utf-8
< Date: Tue, 13 Feb 2024 17:03:07 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Pragma: no-cache
< Server: Couchbase Server
< Transfer-Encoding: chunked
< X-Content-Type-Options: nosniff
< X-Frame-Options: DENY
< X-Permitted-Cross-Domain-Policies: none
< X-XSS-Protection: 1; mode=block
<
{"rev":85,"revEpoch":1,"name":"hafus","nodeLocator":"vbucket","bucketType":"membase","storageBackend":"couchstore","uuid":"61487e1d209ac8bf95a4fc21785670d2","uri":"/pools/default/buckets/hafus?bucket_uuid=61487e1d209ac8bf95a4fc21785670d2","streamingUri":"/pools/default/bucketsStreaming/hafus?bucket_uuid=61487e1d209ac8bf95a4fc21785670d2","bucketCapabilitiesVer":"","bucketCapabilities":["collections","durableWrite","tombstonedUserXAttrs","couchapi","subdoc.ReplaceBodyWithXattr","subdoc.DocumentMacroSupport","subdoc.ReviveDocument","dcp","cbhello","touch","cccp","xdcrCheckpointing","nodesExt","xattr"],"collectionsManifestUid":"0","ddocs":{"uri":"/pools/default/buckets/hafus/ddocs"},"vBucketServerMap":{"hashAlgorithm":"CRC","numReplicas":0,"serverList":["$HOST:11210"],"vBucketMap":[[0],[0],[0],[0],[0],...},"nodes":[{"couchApiBase":"http://$HOST:8092/hafus%2B61487e1d209ac8bf95a4fc21785670d2","hostname":"$HOST:8091","ports":{"direct":11210}}],"nodesExt":[{"services":{"capi":8092,"kv":11210,"mgmt":8091,"projector":9999},"thisNode":true}],"clusterCapabilitiesVer":[1,0],"clusterCapabilities":{"n1ql":["enhancedPreparedStatements"]}}

^C

Indeed, when I try the path with my real bucket name I get a valid response:
{"rev":368384,"revEpoch":1,"name":"ot_cibml_ppr","nodeLocator":"vbucket","bucketType":"membase","storageBackend":"couchstore","uuid":"14767de6b66f50268203e0796b9bbdba","uri":"/pools/default/buckets/ot_cibml_ppr?bucket_uuid=14767de6b66f50268203e0796b9bbdba","streamingUri":"/pools/default/bucketsStreaming/ot_cibml_ppr?bucket_uuid=14767de6b66f50268203e0796b9bbdba","bucketCapabilitiesVer":"","bucketCapabilities":["collections","durableWrite","tombstonedUserXAttrs","couchapi","subdoc.ReplaceBodyWithXattr","subdoc.DocumentMacroSupport","subdoc.ReviveDocument","dcp","cbhello","touch","cccp","xdcrCheckpointing","nodesExt","xattr"],"collectionsManifestUid":"6b","ddocs":{"uri":"/pools/default/buckets/ot_cibml_ppr/ddocs"},"vBucketServerMap":{"hashAlgorithm":"CRC","numReplicas":0,"serverList":["cmxd4o01.dev.cm.par.emea.cib:11210","cmxd4o04.dev.cm.par.emea.cib:11210","cmxd4o05.dev.cm.par.emea.cib:11210","cmxp4o13.prd.cm.par.emea.cib:11210"],"vBucketMap":[[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],
[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[3],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[1],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[3],[3],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],
[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[2],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2],[2]]},"nodes":[{"couchApiBase":"http://cmxd4o01.dev.cm.par.emea.cib:8092/ot_cibml_ppr%2B14767de6b66f50268203e0796b9bbdba","hostname":"cmxd4o01.dev.cm.par.emea.cib:8091","ports":{"direct":11210}},{"couchApiBase":"http://cmxd4o04.dev.cm.par.emea.cib:8092/ot_cibml_ppr%2B14767de6b66f50268203e0796b9bbdba","hostname":"cmxd4o04.dev.cm.par.emea.cib:8091","ports":{"direct":11210}},{"couchApiBase":"http://cmxd4o05.dev.cm.par.emea.cib:8092/ot_cibml_ppr%2B14767de6b66f50268203e0796b9bbdba","hostname":"cmxd4o05.dev.cm.par.emea.cib:8091","ports":{"direct":11210}},{"couchApiBase":"http://cmxp4o13.prd.cm.par.emea.cib:8092/ot_cibml_ppr%2B14767de6b66f50268203e0796b9bbdba","hostname":"cmxp4o13.prd.cm.par.emea.cib:8091","ports":{"direct":11210}}],"nodesExt":[{"services":{"capi":8092,"capiSSL":18092,"eventingAdminPort":8096,"eventingDebug":9140,"eventingSSL":18096,"fts":8094,"ftsGRPC":9130,"ftsGRPCSSL":19130,"ftsSSL":18094,"indexAdmin":9100,"indexHttp":9102,"i
ndexHttps":19102,"indexScan":9101,"indexStreamCatchup":9104,"indexStreamInit":9103,"indexStreamMaint":9105,"kv":11210,"kvSSL":11207,"mgmt":8091,"mgmtSSL":18091,"n1ql":8093,"n1qlSSL":18093,"projector":9999},"hostname":"cmxd4o01.dev.cm.par.emea.cib"},{"services":{"capi":8092,"capiSSL":18092,"eventingAdminPort":8096,"eventingDebug":9140,"eventingSSL":18096,"fts":8094,"ftsGRPC":9130,"ftsGRPCSSL":19130,"ftsSSL":18094,"indexAdmin":9100,"indexHttp":9102,"indexHttps":19102,"indexScan":9101,"indexStreamCatchup":9104,"indexStreamInit":9103,"indexStreamMaint":9105,"kv":11210,"kvSSL":11207,"mgmt":8091,"mgmtSSL":18091,"n1ql":8093,"n1qlSSL":18093,"projector":9999},"thisNode":true,"hostname":"cmxd4o04.dev.cm.par.emea.cib"},{"services":{"capi":8092,"capiSSL":18092,"eventingAdminPort":8096,"eventingDebug":9140,"eventingSSL":18096,"fts":8094,"ftsGRPC":9130,"ftsGRPCSSL":19130,"ftsSSL":18094,"indexAdmin":9100,"indexHttp":9102,"indexHttps":19102,"indexScan":9101,"indexStreamCatchup":9104,"indexStreamInit":9103,"indexStreamMaint":9105,"kv":11210,"kvSSL":11207,"mgmt":8091,"mgmtSSL":18091,"n1ql":8093,"n1qlSSL":18093,"projector":9999},"hostname":"cmxd4o05.dev.cm.par.emea.cib"},{"services":{"capi":8092,"capiSSL":18092,"eventingAdminPort":8096,"eventingDebug":9140,"eventingSSL":18096,"fts":8094,"ftsGRPC":9130,"ftsGRPCSSL":19130,"ftsSSL":18094,"indexAdmin":9100,"indexHttp":9102,"indexHttps":19102,"indexScan":9101,"indexStreamCatchup":9104,"indexStreamInit":9103,"indexStreamMaint":9105,"kv":11210,"kvSSL":11207,"mgmt":8091,"mgmtSSL":18091,"n1ql":8093,"n1qlSSL":18093,"projector":9999},"hostname":"cmxp4o13.prd.cm.par.emea.cib"}],"clusterCapabilitiesVer":[1,0],"clusterCapabilities":{"n1ql":["enhancedPreparedStatements"]}}

This “CLUSTER” last part seems to come from a hardcoded string in ClusterContext.cs

Hi, I’m a colleague of Julien

I’ve analysed the code; here is what I understood and discovered:

once connected, ConfigHandler polls the config from the server

if during polling it catches a DocumentNotFoundException, it starts the HttpStreamingConfigListener

this listener continuously calls, in a background task, the URL http://nodeaddress/pools/default/bs/CLUSTER on the cluster nodes

this request always fails with a 404

So the root question is: why does it get a DocumentNotFoundException?
Is it a false positive caused by a network error?
Is it expected?

BTW, I found a bug in HttpStreamingConfigListener:
it gets a random node with var nodes = _configSubscriber?.ClusterNodes.ToList().Shuffle();
but if the node has no management URI, it throws a NullReferenceException
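A defensive fix for that bug might look like the following C# sketch (the Node type and property names here are illustrative, not the SDK's actual types): filter out nodes without a management URI before picking one at random, instead of shuffling and dereferencing blindly.

```csharp
// Sketch only - illustrative types, not the SDK's actual ClusterNode.
using System;
using System.Collections.Generic;
using System.Linq;

class Node
{
    public Uri? ManagementUri { get; set; }
}

static class NodeSelection
{
    static readonly Random Rng = new Random();

    // Returns a random node that actually has a management URI,
    // or null if none does - avoiding the NullReferenceException.
    public static Node? PickManagementNode(IEnumerable<Node> nodes)
    {
        var candidates = nodes.Where(n => n?.ManagementUri != null).ToList();
        return candidates.Count == 0
            ? null
            : candidates[Rng.Next(candidates.Count)];
    }
}
```

The caller then has to handle the null case explicitly (e.g. fall back to another bootstrap mechanism) rather than crashing.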

Ok - it sounds like it tries to go to HTTP streaming to get the cluster config, but the node doesn’t have it. Here’s what the RFC says about falling back to the HTTP streaming API: sdk-rfcs/rfc/0048-sdk3-bootstrapping.md at master · couchbaselabs/sdk-rfcs · GitHub. I’m investigating what could be happening in your case. I’ve opened Loading...

Could you run SDK Doctor against your cluster and post the output on the issue?

I opened Loading... for the null reference exception.

1 Like

We will do it tomorrow

Also, I didn’t mention it, but we do not have memcached buckets

Looking at other Couchbase SDKs - they only fall back to HTTP for bucket configurations, not the cluster configuration.

It would be helpful to post your code including the values of the arguments. That might help determine why it is not succeeding to get the configuration from a kv node.

@mdegroux /@julien.amaru -

Which SDK version are you using?

We are currently using CouchbaseNetClient version 3.4.11

1 Like

As requested, here is the output of the SDK doctor on our cluster :
[SDK Doctor ASCII art banner]

Note: Diagnostics can only provide accurate results when your cluster
is in a stable state. Active rebalancing and other cluster configuration
changes can cause the output of the doctor to be inconsistent or in the
worst cases, completely incorrect.

10:43:59.713 INFO :arrow_forward: Parsing connection string http://Gaasdevmarcbps01.cm.par.emea.cib:8091/ot_cibml_ppr
10:43:59.783 WARN :arrow_forward: Connection string is using the deprecated http:// scheme. Use the couchbase:// scheme instead!
10:43:59.785 INFO :arrow_forward: Connection string identifies the following CCCP endpoints:
10:43:59.787 INFO :arrow_forward: 1. Gaasdevmarcbps01.cm.par.emea.cib:11210
10:43:59.788 INFO :arrow_forward: Connection string identifies the following HTTP endpoints:
10:43:59.789 INFO :arrow_forward: 1. Gaasdevmarcbps01.cm.par.emea.cib:8091
10:43:59.789 INFO :arrow_forward: Connection string specifies bucket ot_cibml_ppr
10:43:59.790 WARN :arrow_forward: Your connection string specifies only a single host. You should consider adding additional static nodes from your cluster to this list to improve your applications fault-tolerance
10:43:59.791 INFO :arrow_forward: Performing DNS lookup for host Gaasdevmarcbps01.cm.par.emea.cib
10:43:59.858 INFO :arrow_forward: Bootstrap host Gaasdevmarcbps01.cm.par.emea.cib refers to a server with the address 10.118.180.180
10:43:59.858 INFO :arrow_forward: Attempting to connect to cluster via CCCP
10:43:59.860 INFO :arrow_forward: Attempting to fetch config via cccp from Gaasdevmarcbps01.cm.par.emea.cib:11210
10:43:59.991 WARN :arrow_forward: Bootstrap host Gaasdevmarcbps01.cm.par.emea.cib is not using the canonical node hostname of server1.emea.cib. This is not neccessarily an error, but has been known to result in strange and challenging to diagnose errors when DNS entries are reconfigured.
10:43:59.991 INFO :arrow_forward: Selected the following network type: default
10:43:59.993 INFO :arrow_forward: Identified the following nodes:
10:43:59.996 INFO :arrow_forward: [0] server1.emea.cib
10:43:59.996 INFO :arrow_forward: n1ql: 8093, n1qlSSL: 18093, fts: 8094
10:43:59.997 INFO :arrow_forward: indexScan: 9101, kv: 11210, indexStreamCatchup: 9104
10:43:59.998 INFO :arrow_forward: indexStreamInit: 9103, indexStreamMaint: 9105, mgmtSSL: 18091
10:43:59.998 INFO :arrow_forward: capi: 8092, ftsGRPCSSL: 19130, indexAdmin: 9100
10:43:59.998 INFO :arrow_forward: ftsSSL: 18094, indexHttp: 9102, indexHttps: 19102
10:43:59.999 INFO :arrow_forward: mgmt: 8091, eventingAdminPort: 8096, eventingSSL: 18096
10:44:00.000 INFO :arrow_forward: ftsGRPC: 9130, projector: 9999, capiSSL: 18092
10:44:00.001 INFO :arrow_forward: eventingDebug: 9140, kvSSL: 11207
10:44:00.001 INFO :arrow_forward: [1] server4.emea.cib
10:44:00.002 INFO :arrow_forward: capiSSL: 18092, indexAdmin: 9100, indexScan: 9101
10:44:00.008 INFO :arrow_forward: indexStreamCatchup: 9104, mgmt: 8091, n1ql: 8093
10:44:00.009 INFO :arrow_forward: fts: 8094, ftsGRPCSSL: 19130, ftsSSL: 18094
10:44:00.009 INFO :arrow_forward: indexHttp: 9102, indexHttps: 19102, kvSSL: 11207
10:44:00.010 INFO :arrow_forward: n1qlSSL: 18093, kv: 11210, mgmtSSL: 18091
10:44:00.010 INFO :arrow_forward: capi: 8092, eventingAdminPort: 8096, eventingSSL: 18096
10:44:00.010 INFO :arrow_forward: ftsGRPC: 9130, indexStreamInit: 9103, indexStreamMaint: 9105
10:44:00.010 INFO :arrow_forward: projector: 9999, eventingDebug: 9140
10:44:00.011 INFO :arrow_forward: [2] server5.emea.cib
10:44:00.011 INFO :arrow_forward: kvSSL: 11207, mgmt: 8091, n1ql: 8093
10:44:00.012 INFO :arrow_forward: projector: 9999, fts: 8094, ftsGRPC: 9130
10:44:00.012 INFO :arrow_forward: indexStreamInit: 9103, indexStreamCatchup: 9104, indexStreamMaint: 9105
10:44:00.012 INFO :arrow_forward: mgmtSSL: 18091, ftsGRPCSSL: 19130, ftsSSL: 18094
10:44:00.013 INFO :arrow_forward: indexHttp: 9102, indexHttps: 19102, indexScan: 9101
10:44:00.014 INFO :arrow_forward: eventingAdminPort: 8096, eventingDebug: 9140, eventingSSL: 18096
10:44:00.014 INFO :arrow_forward: kv: 11210, n1qlSSL: 18093, capi: 8092
10:44:00.014 INFO :arrow_forward: capiSSL: 18092, indexAdmin: 9100
10:44:00.014 INFO :arrow_forward: [3] server13.emea.cib
10:44:00.014 INFO :arrow_forward: projector: 9999, eventingSSL: 18096, indexScan: 9101
10:44:00.015 INFO :arrow_forward: mgmt: 8091, n1qlSSL: 18093, ftsSSL: 18094
10:44:00.016 INFO :arrow_forward: indexHttps: 19102, indexStreamInit: 9103, kvSSL: 11207
10:44:00.016 INFO :arrow_forward: capi: 8092, capiSSL: 18092, fts: 8094
10:44:00.017 INFO :arrow_forward: ftsGRPCSSL: 19130, n1ql: 8093, eventingAdminPort: 8096
10:44:00.017 INFO :arrow_forward: eventingDebug: 9140, ftsGRPC: 9130, indexStreamCatchup: 9104
10:44:00.017 INFO :arrow_forward: mgmtSSL: 18091, indexAdmin: 9100, indexHttp: 9102
10:44:00.018 INFO :arrow_forward: indexStreamMaint: 9105, kv: 11210
10:44:00.023 INFO :arrow_forward: Fetching config from http://server1.emea.cib:8091
10:44:00.115 INFO :arrow_forward: Received cluster configuration, nodes list:
[
{
“addressFamily”: “inet”,
“addressFamilyOnly”: false,
“clusterCompatibility”: 458753,
“clusterMembership”: “active”,
“configuredHostname”: “server1.emea.cib:8091”,
“couchApiBase”: “http://server1.emea.cib:8092/”,
“couchApiBaseHTTPS”: “https://server1.emea.cib:18092/”,
“cpuCount”: 48,
“externalListeners”: [
{
“afamily”: “inet”,
“nodeEncryption”: false
},
{
“afamily”: “inet6”,
“nodeEncryption”: false
}
],
“hostname”: “server1.emea.cib:8091”,
“interestingStats”: {
“cmd_get”: 0,
“couch_docs_actual_disk_size”: 11780069779,
“couch_docs_data_size”: 9124088306,
“couch_spatial_data_size”: 0,
“couch_spatial_disk_size”: 0,
“couch_views_actual_disk_size”: 6485,
“couch_views_data_size”: 6485,
“curr_items”: 5265580,
“curr_items_tot”: 5265580,
“ep_bg_fetched”: 7,
“get_hits”: 0,
“index_data_size”: 6447226219,
“index_disk_size”: 3370395976,
“mem_used”: 17234976352,
“ops”: 328.7,
“vb_active_num_non_resident”: 0,
“vb_replica_curr_items”: 0
},
“mcdMemoryAllocated”: 309160,
“mcdMemoryReserved”: 309160,
“memoryFree”: 369800642560,
“memoryTotal”: 405222776832,
“nodeEncryption”: false,
“nodeHash”: 103033814,
“nodeUUID”: “f2dd015b18e570cd19bacfce3b0d4e8d”,
“os”: “x86_64-pc-linux-gnu”,
“otpNode”: “ns_1@server1.emea.cib”,
“ports”: {
“direct”: 11210,
“distTCP”: 21100,
“distTLS”: 21150,
“httpsCAPI”: 18092,
“httpsMgmt”: 18091
},
“recoveryType”: “none”,
“serverGroup”: “Group 1”,
“services”: [
“eventing”,
“fts”,
“index”,
“kv”,
“n1ql”
],
“status”: “healthy”,
“systemStats”: {
“allocstall”: 0,
“cpu_cores_available”: 48,
“cpu_stolen_rate”: 0,
“cpu_utilization_rate”: 5.462492474198275,
“mem_free”: 369800642560,
“mem_limit”: 405222776832,
“mem_total”: 405222776832,
“swap_total”: 34359734272,
“swap_used”: 0
},
“thisNode”: true,
“uptime”: “354649”,
“version”: “7.1.6-3955-enterprise”
},
{
“addressFamily”: “inet”,
“addressFamilyOnly”: false,
“clusterCompatibility”: 458753,
“clusterMembership”: “active”,
“configuredHostname”: “server4.emea.cib:8091”,
“couchApiBase”: “http://server4.emea.cib:8092/”,
“couchApiBaseHTTPS”: “https://server4.emea.cib:18092/”,
“cpuCount”: 48,
“externalListeners”: [
{
“afamily”: “inet”,
“nodeEncryption”: false
},
{
“afamily”: “inet6”,
“nodeEncryption”: false
}
],
“hostname”: “server4.emea.cib:8091”,
“interestingStats”: {
“cmd_get”: 0,
“couch_docs_actual_disk_size”: 10528856463,
“couch_docs_data_size”: 9131357479,
“couch_spatial_data_size”: 0,
“couch_spatial_disk_size”: 0,
“couch_views_actual_disk_size”: 6488,
“couch_views_data_size”: 6488,
“curr_items”: 5263433,
“curr_items_tot”: 5263433,
“ep_bg_fetched”: 4,
“get_hits”: 0,
“index_data_size”: 6058328557,
“index_disk_size”: 3201015514,
“mem_used”: 17276462656,
“ops”: 305.2,
“vb_active_num_non_resident”: 0,
“vb_replica_curr_items”: 0
},
“mcdMemoryAllocated”: 309160,
“mcdMemoryReserved”: 309160,
“memoryFree”: 370629517312,
“memoryTotal”: 405222772736,
“nodeEncryption”: false,
“nodeHash”: 50091886,
“nodeUUID”: “0ce277239ad9b5f3877cf58141c46673”,
“os”: “x86_64-pc-linux-gnu”,
“otpNode”: “ns_1@server4.emea.cib”,
“ports”: {
“direct”: 11210,
“distTCP”: 21100,
“distTLS”: 21150,
“httpsCAPI”: 18092,
“httpsMgmt”: 18091
},
“recoveryType”: “none”,
“serverGroup”: “Group 1”,
“services”: [
“eventing”,
“fts”,
“index”,
“kv”,
“n1ql”
],
“status”: “healthy”,
“systemStats”: {
“allocstall”: 0,
“cpu_cores_available”: 48,
“cpu_stolen_rate”: 0,
“cpu_utilization_rate”: 5.180181666655667,
“mem_free”: 370629517312,
“mem_limit”: 405222772736,
“mem_total”: 405222772736,
“swap_total”: 34359734272,
“swap_used”: 0
},
“uptime”: “354646”,
“version”: “7.1.6-3955-enterprise”
},
{
“addressFamily”: “inet”,
“addressFamilyOnly”: false,
“clusterCompatibility”: 458753,
“clusterMembership”: “active”,
“configuredHostname”: “server5.emea.cib:8091”,
“couchApiBase”: “http://server5.emea.cib:8092/”,
“couchApiBaseHTTPS”: “https://server5.emea.cib:18092/”,
“cpuCount”: 64,
“externalListeners”: [
{
“afamily”: “inet”,
“nodeEncryption”: false
},
{
“afamily”: “inet6”,
“nodeEncryption”: false
}
],
“hostname”: “server5.emea.cib:8091”,
“interestingStats”: {
“cmd_get”: 0,
“couch_docs_actual_disk_size”: 11300502137,
“couch_docs_data_size”: 9126704224,
“couch_spatial_data_size”: 0,
“couch_spatial_disk_size”: 0,
“couch_views_actual_disk_size”: 6486,
“couch_views_data_size”: 6486,
“curr_items”: 5263990,
“curr_items_tot”: 5263990,
“ep_bg_fetched”: 15,
“get_hits”: 0,
“index_data_size”: 1232482971,
“index_disk_size”: 799850404,
“mem_used”: 17260882864,
“ops”: 301.8,
“vb_active_num_non_resident”: 0,
“vb_replica_curr_items”: 0
},
“mcdMemoryAllocated”: 309331,
“mcdMemoryReserved”: 309331,
“memoryFree”: 376734425088,
“memoryTotal”: 405446537216,
“nodeEncryption”: false,
“nodeHash”: 109024193,
“nodeUUID”: “6b2ccee2c6e3b8059c39c90eef580e1a”,
“os”: “x86_64-pc-linux-gnu”,
“otpNode”: “ns_1@server5.emea.cib”,
“ports”: {
“direct”: 11210,
“distTCP”: 21100,
“distTLS”: 21150,
“httpsCAPI”: 18092,
“httpsMgmt”: 18091
},
“recoveryType”: “none”,
“serverGroup”: “Group 1”,
“services”: [
“eventing”,
“fts”,
“index”,
“kv”,
“n1ql”
],
“status”: “healthy”,
“systemStats”: {
“allocstall”: 0,
“cpu_cores_available”: 64,
“cpu_stolen_rate”: 0,
“cpu_utilization_rate”: 2.622329423566189,
“mem_free”: 376734425088,
“mem_limit”: 405446537216,
“mem_total”: 405446537216,
“swap_total”: 34359734272,
“swap_used”: 0
},
“uptime”: “354649”,
“version”: “7.1.6-3955-enterprise”
},
{
“addressFamily”: “inet”,
“addressFamilyOnly”: false,
“clusterCompatibility”: 458753,
“clusterMembership”: “active”,
“configuredHostname”: “server13.emea.cib:8091”,
“couchApiBase”: “http://server13.emea.cib:8092/”,
“couchApiBaseHTTPS”: “https://server13.emea.cib:18092/”,
“cpuCount”: 32,
“externalListeners”: [
{
“afamily”: “inet”,
“nodeEncryption”: false
},
{
“afamily”: “inet6”,
“nodeEncryption”: false
}
],
“hostname”: “server13.emea.cib:8091”,
“interestingStats”: {
“cmd_get”: 0,
“couch_docs_actual_disk_size”: 11910970280,
“couch_docs_data_size”: 9135847809,
“couch_spatial_data_size”: 0,
“couch_spatial_disk_size”: 0,
“couch_views_actual_disk_size”: 6497,
“couch_views_data_size”: 6497,
“curr_items”: 5264697,
“curr_items_tot”: 5264697,
“ep_bg_fetched”: 1,
“get_hits”: 0,
“index_data_size”: 59036327,
“index_disk_size”: 194351439,
“mem_used”: 17222163664,
“ops”: 289.8,
“vb_active_num_non_resident”: 0,
“vb_replica_curr_items”: 0
},
“mcdMemoryAllocated”: 412548,
“mcdMemoryReserved”: 412548,
“memoryFree”: 504127565824,
“memoryTotal”: 540736208896,
“nodeEncryption”: false,
“nodeHash”: 104049149,
“nodeUUID”: “f5257c71ca518efa53f10e4cc70518c2”,
“os”: “x86_64-pc-linux-gnu”,
“otpNode”: “ns_1@server13.emea.cib”,
“ports”: {
“direct”: 11210,
“distTCP”: 21100,
“distTLS”: 21150,
“httpsCAPI”: 18092,
“httpsMgmt”: 18091
},
“recoveryType”: “none”,
“serverGroup”: “Group 1”,
“services”: [
“eventing”,
“fts”,
“index”,
“kv”,
“n1ql”
],
“status”: “healthy”,
“systemStats”: {
“allocstall”: 0,
“cpu_cores_available”: 32,
“cpu_stolen_rate”: 0,
“cpu_utilization_rate”: 3.333101091183118,
“mem_free”: 504127565824,
“mem_limit”: 540736208896,
“mem_total”: 540736208896,
“swap_total”: 34359734272,
“swap_used”: 0
},
“uptime”: “354650”,
“version”: “7.1.6-3955-enterprise”
}
]
10:44:00.207 INFO :arrow_forward: Successfully connected to Key Value service at server1.emea.cib:11210
10:44:00.292 INFO :arrow_forward: Successfully connected to Management service at server1.emea.cib:8091
10:44:00.358 INFO :arrow_forward: Successfully connected to Views service at server1.emea.cib:8092
10:44:00.416 INFO :arrow_forward: Successfully connected to Query service at server1.emea.cib:8093
10:44:00.469 INFO :arrow_forward: Successfully connected to Search service at server1.emea.cib:8094
10:44:00.469 WARN :arrow_forward: Could not test Analytics service on server1.emea.cib as it was not in the config
10:44:00.841 INFO :arrow_forward: Successfully connected to Key Value service at server4.emea.cib:11210
10:44:00.933 INFO :arrow_forward: Successfully connected to Management service at server4.emea.cib:8091
10:44:00.997 INFO :arrow_forward: Successfully connected to Views service at server4.emea.cib:8092
10:44:01.050 INFO :arrow_forward: Successfully connected to Query service at server4.emea.cib:8093
10:44:01.104 INFO :arrow_forward: Successfully connected to Search service at server4.emea.cib:8094
10:44:01.105 WARN :arrow_forward: Could not test Analytics service on server4.emea.cib as it was not in the config
10:44:01.242 INFO :arrow_forward: Successfully connected to Key Value service at server5.emea.cib:11210
10:44:01.549 INFO :arrow_forward: Successfully connected to Management service at server5.emea.cib:8091
10:44:01.624 INFO :arrow_forward: Successfully connected to Views service at server5.emea.cib:8092
10:44:01.683 INFO :arrow_forward: Successfully connected to Query service at server5.emea.cib:8093
10:44:01.734 INFO Successfully connected to Search service at server5.emea.cib:8094
10:44:01.734 WARN Could not test Analytics service on server5.emea.cib as it was not in the config
10:44:01.869 INFO Successfully connected to Key Value service at server13.emea.cib:11210
10:44:01.947 INFO Successfully connected to Management service at server13.emea.cib:8091
10:44:02.002 INFO Successfully connected to Views service at server13.emea.cib:8092
10:44:02.056 INFO Successfully connected to Query service at server13.emea.cib:8093
10:44:02.108 INFO Successfully connected to Search service at server13.emea.cib:8094
10:44:02.109 WARN Could not test Analytics service on server13.emea.cib as it was not in the config
10:44:02.487 INFO Memd Nop Pinged server1.emea.cib:11210 10 times, 0 errors, 23ms min, 30ms max, 26ms mean
10:44:02.487 WARN Memcached service on server1.emea.cib:11210 on average took longer than 10ms (was: 26ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
10:44:02.490 WARN Memcached service on server1.emea.cib:11210 maximally took longer than 20ms (was: 30ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
10:44:02.893 INFO Memd Nop Pinged server4.emea.cib:11210 10 times, 0 errors, 26ms min, 39ms max, 28ms mean
10:44:02.893 WARN Memcached service on server4.emea.cib:11210 on average took longer than 10ms (was: 28ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
10:44:02.895 WARN Memcached service on server4.emea.cib:11210 maximally took longer than 20ms (was: 39ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
10:44:03.270 INFO Memd Nop Pinged server5.emea.cib:11210 10 times, 0 errors, 23ms min, 27ms max, 25ms mean
10:44:03.270 WARN Memcached service on server5.emea.cib:11210 on average took longer than 10ms (was: 25ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
10:44:03.271 WARN Memcached service on server5.emea.cib:11210 maximally took longer than 20ms (was: 27ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
10:44:03.645 INFO Memd Nop Pinged server13.emea.cib:11210 10 times, 0 errors, 25ms min, 31ms max, 26ms mean
10:44:03.645 WARN Memcached service on server13.emea.cib:11210 on average took longer than 10ms (was: 26ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
10:44:03.647 WARN Memcached service on server13.emea.cib:11210 maximally took longer than 20ms (was: 31ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
10:44:03.650 INFO Diagnostics completed

Summary:
[WARN] Connection string is using the deprecated http:// scheme. Use the couchbase:// scheme instead!
[WARN] Your connection string specifies only a single host. You should consider adding additional static nodes from your cluster to this list to improve your applications fault-tolerance
[WARN] Bootstrap host Gaasdevmarcbps01.cm.par.emea.cib is not using the canonical node hostname of server1.emea.cib. This is not neccessarily an error, but has been known to result in strange and challenging to diagnose errors when DNS entries are reconfigured.
[WARN] Could not test Analytics service on server1.emea.cib as it was not in the config
[WARN] Could not test Analytics service on server4.emea.cib as it was not in the config
[WARN] Could not test Analytics service on server5.emea.cib as it was not in the config
[WARN] Could not test Analytics service on server13.emea.cib as it was not in the config
[WARN] Memcached service on server1.emea.cib:11210 on average took longer than 10ms (was: 26ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
[WARN] Memcached service on server1.emea.cib:11210 maximally took longer than 20ms (was: 30ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
[WARN] Memcached service on server4.emea.cib:11210 on average took longer than 10ms (was: 28ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
[WARN] Memcached service on server4.emea.cib:11210 maximally took longer than 20ms (was: 39ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
[WARN] Memcached service on server5.emea.cib:11210 on average took longer than 10ms (was: 25ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
[WARN] Memcached service on server5.emea.cib:11210 maximally took longer than 20ms (was: 27ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
[WARN] Memcached service on server13.emea.cib:11210 on average took longer than 10ms (was: 26ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.
[WARN] Memcached service on server13.emea.cib:11210 maximally took longer than 20ms (was: 31ms) to reply. This is usually due to network-related issues, and could significantly affect application performance.

Found multiple issues, see listing above.

Can you show the code and parameter values that your application uses to connect to the cluster? The connection string is for connecting to the KV service on port 11210 (11207 for SSL). It is not necessary to provide a port number to connect to the defaults, as the port is determined by the scheme: couchbase (or http) vs couchbases (or https). If you provide 8091 (as you did for sdk-doctor), the KV connection to 8091 (the management port) will fail, and the SDK will fall back to HTTP streaming, which fails with the issue you have identified.

Here is an example:

Uri uri = new Uri("http://server.emea.cib:8091");
ClusterOptions clusterOptions = new ClusterOptions
{
    UserName = "UserName",
    Password = "Password",
    TracingOptions = new TracingOptions { Enabled = false },
    EnableOperationDurationTracing = false,
    OrphanTracingOptions = new OrphanOptions { Enabled = false },
    EnableTls = false,
    ForceIpAsTargetHost = false,
    QueryTimeout = TimeSpan.FromSeconds(1200),
    HttpConnectionLifetime = TimeSpan.FromSeconds(1200),
    KvDurabilityTimeout = TimeSpan.FromSeconds(1200),
};
clusterOptions.WithRetryStrategy(new BestEffortRetryStrategy());

CurrentCluster = await Cluster.ConnectAsync(uri.Host, clusterOptions).ConfigureAwait(false);
await CurrentCluster.WaitUntilReadyAsync(TimeSpan.FromSeconds(10)).ConfigureAwait(false);

Bucket = await CurrentCluster.BucketAsync(_configuration.BucketName).ConfigureAwait(false);
Scope = await Bucket.ScopeAsync("ScopeName").ConfigureAwait(false);
Collection = await Scope.CollectionAsync("CollectionName").ConfigureAwait(false);

So if I understand correctly, we should rather do this?

Uri uri = new Uri("http://server.emea.cib");

Yes, that should work.
couchbase://server.emea.cib is preferred, because the SDKs have deprecated using http.
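To make the recommendation concrete, here is a minimal connection sketch using the preferred scheme. Hostname and credentials are placeholders, and the API surface shown (`ClusterOptions`, `Cluster.ConnectAsync`) is the Couchbase .NET SDK 3.x one used elsewhere in this thread:

```csharp
// Preferred: couchbase:// scheme, no explicit port. The SDK then bootstraps
// over the KV port (11210, or 11207 for couchbases://) instead of trying
// 8091 and falling back to HTTP config streaming.
var options = new ClusterOptions
{
    UserName = "UserName", // placeholder
    Password = "Password", // placeholder
};

var cluster = await Cluster.ConnectAsync("couchbase://server.emea.cib", options)
    .ConfigureAwait(false);
await cluster.WaitUntilReadyAsync(TimeSpan.FromSeconds(10)).ConfigureAwait(false);
```

The key point is that the scheme, not an explicit port, selects the bootstrap path; passing `:8091` is what pushes the SDK onto the HTTP streaming fallback.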

Hello,
My colleague is off, so I am resuming his work.

To sum up, we have a long-lived connection to Couchbase.
We can keep it open for days without any issue, but sometimes something happens and the HttpStreamingConfigListener starts sending requests to the URL http://serverxxx:8091/pools/default/bs/CLUSTER
Obviously, this returns a 404, because CLUSTER is not a bucket.

After some investigation, I found correlations between the server logs and the client logs.
I am contacting our infrastructure teams to find out whether there was a service restart or a network failure.

In any case, I think there is a bug: if the server is in an “unstable” state, it makes the SDK send bad requests.
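On the logging side of this bug: as discussed at the top of the thread, `EnsureSuccessStatusCode()` discards the response body, which is exactly the information needed to diagnose a 404 like this one. A minimal sketch of what the listener loop could do instead (the `httpClient`, `streamingUri`, and `_logger` names here are hypothetical, not the actual SDK internals):

```csharp
// Hypothetical alternative to EnsureSuccessStatusCode() inside the
// streaming listener loop: log the status code and response body before
// giving up, so the failure cause is visible in the client log.
HttpResponseMessage response = await httpClient
    .GetAsync(streamingUri, HttpCompletionOption.ResponseHeadersRead, cancellationToken)
    .ConfigureAwait(false);

if (!response.IsSuccessStatusCode)
{
    string body = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    _logger.LogError("HTTP streaming failed for {Uri}: {Status} {Body}",
        streamingUri, (int)response.StatusCode, body);
    // Back off before retrying, rather than re-throwing 900+ times per second.
    await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken).ConfigureAwait(false);
    return;
}
```

This would both surface the server's error body and rate-limit the log flood described in the original post.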

Server log:

9 Mar, 2024 8:10:03 AM ns_memcached 000          ns_1@server1                       Control connection to memcached on 'ns_1@server1' disconnected. Check logs for details.
9 Mar, 2024 8:10:13 AM compaction_daemon 000     ns_1@server1                       Compactor for database `CurveBuilder_d` (pid [{type,database},
{important,true},
{name,<<"CurveBuilder_d">>},
{fa,
{#Fun<compaction_daemon.4.82522328>,
[<<"CurveBuilder_d">>,
{config,
{30,undefined},
{30,undefined},
undefined,false,false,
{daemon_config,30,131072,
20971520}},
false,
{[{type,bucket}]}]}}]) terminated unexpectedly: {{bad_return_value,
{stop,
{error,
couldnt_connect_to_memcached}}},
{gen_server,
call,
[{'ns_memcached-CurveBuilder_d',
'ns_1@server1'},
{raw_stats,
<<"diskinfo">>,
#Fun<compaction_daemon.18.82522328>,
{<<"0">>,
<<"0">>}},
300000]}}
9 Mar, 2024 8:10:13 AM compaction_daemon 000     ns_1@server1                       Compactor for database `CurveBuilder_ppr` (pid [{type,database},
{important,true},
{name,<<"CurveBuilder_ppr">>},
{fa,
{#Fun<compaction_daemon.4.82522328>,
[<<"CurveBuilder_ppr">>,
{config,
{30,undefined},
{30,undefined},
undefined,false,false,
{daemon_config,30,131072,
20971520}},
false,
{[{type,bucket}]}]}}]) terminated unexpectedly: {{bad_return_value,
{stop,
{error,
couldnt_connect_to_memcached}}},
{gen_server,
call,
[{'ns_memcached-CurveBuilder_ppr',
'ns_1@server1'},
{raw_stats,
<<"diskinfo">>,
#Fun<compaction_daemon.18.82522328>,
{<<"0">>,
<<"0">>}},
300000]}}
9 Mar, 2024 8:10:14 AM ns_node_disco 005         ns_1@server2                       Node 'ns_1@server2' saw that node 'ns_1@server1' went down. Details: [{nodedown_reason,
                                                                                                                 connection_closed}]
9 Mar, 2024 8:10:14 AM ns_node_disco 005         ns_1@server3                       Node 'ns_1@server3' saw that node 'ns_1@server1' went down. Details: [{nodedown_reason,
                                                                                                                 connection_closed}]
9 Mar, 2024 8:10:14 AM ns_node_disco 005         ns_1@server4                       Node 'ns_1@server4' saw that node 'ns_1@server1' went down. Details: [{nodedown_reason,
                                                                                                                 connection_closed}]
9 Mar, 2024 8:10:17 AM ns_memcached 000          ns_1@server4                       Control connection to memcached on 'ns_1@server4' disconnected. Check logs for details.
9 Mar, 2024 8:10:23 AM mb_master 000             ns_1@server4                       Haven't heard from a higher priority node or a master, so I'm taking over.
9 Mar, 2024 8:10:23 AM auto_failover 000         ns_1@server4                       Enabled auto-failover with timeout 120 and max count 1
9 Mar, 2024 8:10:28 AM ns_node_disco 005         ns_1@server2                       Node 'ns_1@server2' saw that node 'ns_1@server4' went down. Details: [{nodedown_reason,
                                                                                                                 connection_closed}]
9 Mar, 2024 8:10:28 AM ns_node_disco 005         ns_1@server3                       Node 'ns_1@server3' saw that node 'ns_1@server4' went down. Details: [{nodedown_reason,
                                                                                                                 connection_closed}]
9 Mar, 2024 8:10:31 AM ns_memcached 000          ns_1@server2                       Control connection to memcached on 'ns_1@server2' disconnected. Check logs for details.
9 Mar, 2024 8:10:35 AM mb_master 000             ns_1@server2                       Haven't heard from a higher priority node or a master, so I'm taking over.
9 Mar, 2024 8:10:35 AM auto_failover 000         ns_1@server2                       Enabled auto-failover with timeout 120 and max count 1
9 Mar, 2024 8:10:40 AM ns_node_disco 005         ns_1@server3                       Node 'ns_1@server3' saw that node 'ns_1@server2' went down. Details: [{nodedown_reason,
                                                                                                                 connection_closed}]
9 Mar, 2024 8:10:43 AM ns_memcached 000          ns_1@server3                       Control connection to memcached on 'ns_1@server3' disconnected. Check logs for details.
9 Mar, 2024 8:10:50 AM mb_master 000             ns_1@server3                       Haven't heard from a higher priority node or a master, so I'm taking over.
9 Mar, 2024 8:10:50 AM auto_failover 000         ns_1@server3                       Enabled auto-failover with timeout 120 and max count 1
9 Mar, 2024 8:12:59 AM ns_cookie_manager 002     ns_1@server1                       Node 'ns_1@server1' synchronized otp cookie {sanitized,
                                                                  <<"o242cW967P+IabS8YfM/nwNorzUXGvXZsXC9/ag2Ldk=">>} from cluster
9 Mar, 2024 8:12:59 AM menelaus_web_sup 001      ns_1@server1                       Couchbase Server has started on web port 8091 on node 'ns_1@server1'. Version: "7.1.6-3955-enterprise".
9 Mar, 2024 8:12:59 AM compaction_daemon 000     ns_1@server1                       Compactor for view `mapreduce_view/ps-output/_design/dev_design-doc/main` (pid [{type,
                                                                                 view},
                                                                                {important,
                                                                                 true},
                                                                                {name,
                                                                                 <<"mapreduce_view/ps-output/_design/dev_design-doc/main">>},
                                                                                {fa,
                                                                                 {#Fun<compaction_daemon.23.82522328>,
                                                                                  [<<"ps-output">>,
                                                                                   <<"_design/dev_design-doc">>,
9 Mar, 2024 8:13:00 AM ns_cookie_manager 002     ns_1@server4                       Node 'ns_1@server4' synchronized otp cookie {sanitized,
                                                                  <<"o242cW967P+IabS8YfM/nwNorzUXGvXZsXC9/ag2Ldk=">>} from cluster
9 Mar, 2024 8:13:00 AM ns_node_disco 004         ns_1@server1                       Node 'ns_1@server1' saw that node 'ns_1@server4' came up. Tags: []
9 Mar, 2024 8:13:00 AM ns_node_disco 004         ns_1@server4                       Node 'ns_1@server4' saw that node 'ns_1@server1' came up. Tags: []
9 Mar, 2024 8:13:00 AM menelaus_web_sup 001      ns_1@server4                       Couchbase Server has started on web port 8091 on node 'ns_1@server4'. Version: "7.1.6-3955-enterprise".
9 Mar, 2024 8:13:00 AM compaction_daemon 000     ns_1@server4                       Compactor for view `mapreduce_view/ps-output/_design/dev_design-doc/main` (pid [{type,
view},
{important,
true},
{name,
<<"mapreduce_view/ps-output/_design/dev_design-doc/main">>},
{fa,
{#Fun<compaction_daemon.23.82522328>,
[<<"ps-output">>,
<<"_design/dev_design-doc">>,
mapreduce_view,
main,
{config,
{30,
undefined},
{30,
undefined},
undefined,
false,
false,
{daemon_config,
30,
131072,
20971520}},
false,
{[{type,
bucket}]}]}}]) terminated unexpectedly: {{badmatch,
{error,
{error,
{dcp_socket_connect_failed,
econnrefused}}}},
[{compaction_daemon,
get_group_data_info,
4,
[{file,
"src/compaction_daemon.erl"},
{line,
1151}]},
{compaction_daemon,
view_needs_compaction,
5,
[{file,
"src/compaction_daemon.erl"},
{line,
1099}]},
{compaction_daemon,
'-spawn_view_index_compactor/7-fun-0-',
8,
[{file,
"src/compaction_daemon.erl"},
{line,
947}]},
{proc_lib,
init_p,
3,
[{file,
"proc_lib.erl"},
{line,
211}]}]}
9 Mar, 2024 8:13:01 AM ns_cookie_manager 002     ns_1@server2                       Node 'ns_1@server2' synchronized otp cookie {sanitized,
                                                                  <<"o242cW967P+IabS8YfM/nwNorzUXGvXZsXC9/ag2Ldk=">>} from cluster
9 Mar, 2024 8:13:01 AM ns_node_disco 004         ns_1@server2                       Node 'ns_1@server2' saw that node 'ns_1@server1' came up. Tags: []
9 Mar, 2024 8:13:01 AM ns_node_disco 004         ns_1@server1                       Node 'ns_1@server1' saw that node 'ns_1@server2' came up. Tags: []
9 Mar, 2024 8:13:01 AM ns_node_disco 004         ns_1@server2                       Node 'ns_1@server2' saw that node 'ns_1@server4' came up. Tags: []
9 Mar, 2024 8:13:01 AM ns_node_disco 004         ns_1@server4                       Node 'ns_1@server4' saw that node 'ns_1@server2' came up. Tags: []
9 Mar, 2024 8:13:01 AM menelaus_web_sup 001      ns_1@server2                       Couchbase Server has started on web port 8091 on node 'ns_1@server2'. Version: "7.1.6-3955-enterprise".
9 Mar, 2024 8:13:01 AM ns_cookie_manager 002     ns_1@server3                       Node 'ns_1@server3' synchronized otp cookie {sanitized,
                                                                  <<"o242cW967P+IabS8YfM/nwNorzUXGvXZsXC9/ag2Ldk=">>} from cluster
9 Mar, 2024 8:13:01 AM ns_node_disco 004         ns_1@server3                       Node 'ns_1@server3' saw that node 'ns_1@server1' came up. Tags: []
9 Mar, 2024 8:13:01 AM ns_node_disco 004         ns_1@server1                       Node 'ns_1@server1' saw that node 'ns_1@server3' came up. Tags: []
9 Mar, 2024 8:13:01 AM ns_node_disco 004         ns_1@server3                       Node 'ns_1@server3' saw that node 'ns_1@server4' came up. Tags: []
9 Mar, 2024 8:13:01 AM ns_node_disco 004         ns_1@server4                       Node 'ns_1@server4' saw that node 'ns_1@server3' came up. Tags: []
9 Mar, 2024 8:13:01 AM ns_node_disco 004         ns_1@server2                       Node 'ns_1@server2' saw that node 'ns_1@server3' came up. Tags: []
9 Mar, 2024 8:13:01 AM ns_node_disco 004         ns_1@server3                       Node 'ns_1@server3' saw that node 'ns_1@server2' came up. Tags: []
9 Mar, 2024 8:13:01 AM menelaus_web_sup 001      ns_1@server3                       Couchbase Server has started on web port 8091 on node 'ns_1@server3'. Version: "7.1.6-3955-enterprise".
9 Mar, 2024 8:13:01 AM ns_memcached 000          ns_1@server1                       Bucket "CurveBuilder_d" loaded on node 'ns_1@server1' in 0 seconds.
9 Mar, 2024 8:13:01 AM ns_memcached 000          ns_1@server1                       Bucket "CurveBuilder_ppr" loaded on node 'ns_1@server1' in 0 seconds.
9 Mar, 2024 8:13:01 AM compaction_daemon 000     ns_1@server3                       Compactor for view `mapreduce_view/ps-output/_design/dev_design-doc/main` (pid [{type,
view},
{important,
true},
{name,
<<"mapreduce_view/ps-output/_design/dev_design-doc/main">>},
{fa,
{#Fun<compaction_daemon.23.82522328>,
[<<"ps-output">>,
<<"_design/dev_design-doc">>,
mapreduce_view,
main,
{config,
{30,
undefined},
{30,
undefined},
undefined,
false,
false,
{daemon_config,
30,
131072,
20971520}},
false,
{[{type,
bucket}]}]}}]) terminated unexpectedly: {{badmatch,
{error,
{error,
{dcp_socket_connect_failed,
econnrefused}}}},
[{compaction_daemon,
get_group_data_info,
4,
[{file,
"src/compaction_daemon.erl"},
{line,
1151}]},
{compaction_daemon,
view_needs_compaction,
5,
[{file,
"src/compaction_daemon.erl"},
{line,
1099}]},
{compaction_daemon,
'-spawn_view_index_compactor/7-fun-0-',
8,
[{file,
"src/compaction_daemon.erl"},
{line,
947}]},
{proc_lib,
init_p,
3,
[{file,
"proc_lib.erl"},
{line,
211}]}]}
9 Mar, 2024 8:13:01 AM ns_memcached 000          ns_1@server1                       Bucket "ot_cibml" loaded on node 'ns_1@server1' in 0 seconds.
9 Mar, 2024 8:13:02 AM ns_memcached 000          ns_1@server4                       Bucket "CurveBuilder_d" loaded on node 'ns_1@server4' in 0 seconds.
9 Mar, 2024 8:13:02 AM ns_memcached 000          ns_1@server4                       Bucket "CurveBuilder_ppr" loaded on node 'ns_1@server4' in 0 seconds.
9 Mar, 2024 8:13:02 AM ns_memcached 000          ns_1@server4                       Bucket "ot_cibml" loaded on node 'ns_1@server4' in 0 seconds.
9 Mar, 2024 8:13:02 AM ns_memcached 000          ns_1@server2                       Bucket "CurveBuilder_d" loaded on node 'ns_1@server2' in 0 seconds.
9 Mar, 2024 8:13:02 AM ns_memcached 000          ns_1@server2                       Bucket "CurveBuilder_ppr" loaded on node 'ns_1@server2' in 0 seconds.
9 Mar, 2024 8:13:02 AM ns_memcached 000          ns_1@server3                       Bucket "CurveBuilder_d" loaded on node 'ns_1@server3' in 0 seconds.
9 Mar, 2024 8:13:03 AM ns_memcached 000          ns_1@server2                       Bucket "ot_cibml" loaded on node 'ns_1@server2' in 0 seconds.
9 Mar, 2024 8:13:03 AM ns_memcached 000          ns_1@server3                       Bucket "ot_cibml" loaded on node 'ns_1@server3' in 0 seconds.
9 Mar, 2024 8:13:03 AM ns_memcached 000          ns_1@server3                       Bucket "CurveBuilder_ppr" loaded on node 'ns_1@server3' in 1 seconds.
9 Mar, 2024 8:13:05 AM ns_memcached 000          ns_1@server1                       Bucket "ot_cibml_ppr" loaded on node 'ns_1@server1' in 4 seconds.
9 Mar, 2024 8:13:05 AM ns_memcached 000          ns_1@server4                       Bucket "ot_cibml_ppr" loaded on node 'ns_1@server4' in 4 seconds.
9 Mar, 2024 8:13:06 AM ns_memcached 000          ns_1@server2                       Bucket "ot_cibml_ppr" loaded on node 'ns_1@server2' in 3 seconds.
9 Mar, 2024 8:13:06 AM ns_memcached 000          ns_1@server3                       Bucket "ot_cibml_ppr" loaded on node 'ns_1@server3' in 4 seconds.
9 Mar, 2024 8:13:09 AM mb_master 000             ns_1@server1                       Haven't heard from a higher priority node or a master, so I'm taking over.
9 Mar, 2024 8:13:09 AM auto_failover 000         ns_1@server1                       Enabled auto-failover with timeout 120 and max count 1
9 Mar, 2024 8:13:54 AM ns_orchestrator 000       ns_1@server1                       Starting rebalance, KeepNodes = ['ns_1@server1',
                                 'ns_1@server4',
                                 'ns_1@server2',
                                 'ns_1@server3'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = bb702ca885fdf3ee3a804d82ab2f01c7
9 Mar, 2024 8:13:54 AM ns_orchestrator 000       ns_1@server1                       Not rebalancing because rebalance is already in progress.~n
9 Mar, 2024 8:13:55 AM ns_rebalancer 000         ns_1@server1                       Started rebalancing bucket CurveBuilder_d
9 Mar, 2024 8:13:55 AM ns_vbucket_mover 000      ns_1@server1                       Bucket "CurveBuilder_d" rebalance appears to be swap rebalance
9 Mar, 2024 8:13:55 AM ns_rebalancer 000         ns_1@server1                       Started rebalancing bucket CurveBuilder_ppr
9 Mar, 2024 8:13:55 AM ns_vbucket_mover 000      ns_1@server1                       Bucket "CurveBuilder_ppr" rebalance appears to be swap rebalance
9 Mar, 2024 8:13:56 AM ns_rebalancer 000         ns_1@server1                       Started rebalancing bucket ot_cibml
9 Mar, 2024 8:13:56 AM ns_vbucket_mover 000      ns_1@server1                       Bucket "ot_cibml" rebalance appears to be swap rebalance
9 Mar, 2024 8:13:56 AM ns_rebalancer 000         ns_1@server1                       Started rebalancing bucket ot_cibml_ppr
9 Mar, 2024 8:13:56 AM ns_vbucket_mover 000      ns_1@server1                       Bucket "ot_cibml_ppr" rebalance appears to be swap rebalance
Rebalance Operation Id = bb702ca885fdf3ee3a804d82ab2f01c7

And the client log; these are just the latest events before the 404 (because of forum message limits):


Mar 9, 2024 @ 08:12:33.462,"[otr_dev31] 2024-03-09 08:12:33,462 ERROR 61 Ot.Couchbase.Core.IO.Connections.DefaultConnectionPoolScaleController  Unhandled error in DefaultConnectionPoolScaleController The operation is not allowed on non-connected sockets.    at System.Net.Sockets.NetworkStream.InitNetworkStream(Socket socket, FileAccess Access)
at System.Net.Sockets.NetworkStream..ctor(Socket socket, Boolean ownsSocket)
   at Couchbase.Core.IO.Connections.MultiplexingConnection..ctor(Socket socket, ILogger`1 logger)
   at Couchbase.Core.IO.Connections.ConnectionFactory.<CreateAndConnectAsync>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.IO.Connections.ConnectionPoolBase.<CreateConnectionAsync>d__23.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.IO.Connections.Channels.ChannelConnectionPool.<>c__DisplayClass30_0.<<AddConnectionsAsync>g__StartConnection|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.IO.Connections.Channels.ChannelConnectionPool.<ScaleAsync>d__26.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.IO.Connections.DefaultConnectionPoolScaleController.<RunScalingLogic>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.IO.Connections.DefaultConnectionPoolScaleController.<MonitorAsync>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.IO.Connections.DefaultConnectionPoolScaleController.<MonitorAsync>d__19.MoveNext()
System.IO.IOException: The operation is not allowed on non-connected sockets.
   at System.Net.Sockets.NetworkStream.InitNetworkStream(Socket socket, FileAccess Access)
   at System.Net.Sockets.NetworkStream..ctor(Socket socket, Boolean ownsSocket)
   at Couchbase.Core.IO.Connections.MultiplexingConnection..ctor(Socket socket, ILogger`1 logger)
   at Couchbase.Core.IO.Connections.ConnectionFactory.<CreateAndConnectAsync>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.IO.Connections.ConnectionPoolBase.<CreateConnectionAsync>d__23.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.IO.Connections.Channels.ChannelConnectionPool.<>c__DisplayClass30_0.<<AddConnectionsAsync>g__StartConnection|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.IO.Connections.Channels.ChannelConnectionPool.<ScaleAsync>d__26.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.IO.Connections.DefaultConnectionPoolScaleController.<RunScalingLogic>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.IO.Connections.DefaultConnectionPoolScaleController.<MonitorAsync>d__19.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.IO.Connections.DefaultConnectionPoolScaleController.<MonitorAsync>d__19.MoveNext()"
Mar 9, 2024 @ 08:12:35.530,"[otr_dev31] 2024-03-09 08:12:35,530 WARN 10 Ot.Couchbase.Core.Configuration.Server.ConfigHandler  Issue getting Cluster Map on server server3:11210!
Couchbase.Core.Exceptions.UnambiguousTimeoutException: The GetClusterConfig operation 573237/ timed out after 00:00:02.5065732. It was retried 0 times using Couchbase.Core.Retry.BestEffortRetryStrategy. The KvTimeout is 00:00:02.5000000. ---> System.OperationCanceledException: The operation was canceled.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Threading.Tasks.Sources.ManualResetValueTaskSourceCore`1.GetResult(Int16 token)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
   --- End of inner exception stack trace ---
   at Couchbase.Utils.ThrowHelper.ThrowTimeoutException(IOperation operation, Exception innerException, TypedRedactor redactor, IErrorContext context)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.ClusterNode.<GetClusterMap>d__128.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.Configuration.Server.ConfigHandler.<PollAsync>d__13.MoveNext()
-----------------------Context Info---------------------------
{""dispatchedFrom"":null,""dispatchedTo"":null,""documentKey"":"""",""clientContextId"":""573237"",""cas"":0,""status"":""operationTimeout"",""bucketName"":""ot_cibml_ppr"",""collectionName"":null,""scopeName"":null,""message"":null,""opCode"":""getClusterConfig"",""retryReasons"":[]}"
Mar 9, 2024 @ 08:12:38.039,"[otr_dev31] 2024-03-09 08:12:38,039 WARN 78 Ot.Couchbase.Core.Configuration.Server.ConfigHandler  Issue getting Cluster Map on server server1:11210!
Couchbase.Core.Exceptions.UnambiguousTimeoutException: The GetClusterConfig operation 573239/ timed out after 00:00:02.5083640. It was retried 0 times using Couchbase.Core.Retry.BestEffortRetryStrategy. The KvTimeout is 00:00:02.5000000. ---> System.OperationCanceledException: The operation was canceled.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Threading.Tasks.Sources.ManualResetValueTaskSourceCore`1.GetResult(Int16 token)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
   --- End of inner exception stack trace ---
   at Couchbase.Utils.ThrowHelper.ThrowTimeoutException(IOperation operation, Exception innerException, TypedRedactor redactor, IErrorContext context)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.ClusterNode.<GetClusterMap>d__128.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.Configuration.Server.ConfigHandler.<PollAsync>d__13.MoveNext()
-----------------------Context Info---------------------------
{""dispatchedFrom"":null,""dispatchedTo"":null,""documentKey"":"""",""clientContextId"":""573239"",""cas"":0,""status"":""operationTimeout"",""bucketName"":""ot_cibml_ppr"",""collectionName"":null,""scopeName"":null,""message"":null,""opCode"":""getClusterConfig"",""retryReasons"":[]}"
Mar 9, 2024 @ 08:12:40.542,"[otr_dev31] 2024-03-09 08:12:40,542 WARN 78 Ot.Couchbase.Core.Configuration.Server.ConfigHandler  Issue getting Cluster Map on server server4:11210!
Couchbase.Core.Exceptions.UnambiguousTimeoutException: The GetClusterConfig operation 573241/ timed out after 00:00:02.5020291. It was retried 0 times using Couchbase.Core.Retry.BestEffortRetryStrategy. The KvTimeout is 00:00:02.5000000. ---> System.OperationCanceledException: The operation was canceled.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Threading.Tasks.Sources.ManualResetValueTaskSourceCore`1.GetResult(Int16 token)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
   --- End of inner exception stack trace ---
   at Couchbase.Utils.ThrowHelper.ThrowTimeoutException(IOperation operation, Exception innerException, TypedRedactor redactor, IErrorContext context)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.ClusterNode.<GetClusterMap>d__128.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.Configuration.Server.ConfigHandler.<PollAsync>d__13.MoveNext()
-----------------------Context Info---------------------------
{"dispatchedFrom":null,"dispatchedTo":null,"documentKey":"","clientContextId":"573241","cas":0,"status":"operationTimeout","bucketName":"ot_cibml_ppr","collectionName":null,"scopeName":null,"message":null,"opCode":"getClusterConfig","retryReasons":[]}
[otr_dev31] 2024-03-09 08:12:45,559 WARN 46 Ot.Couchbase.Core.Configuration.Server.ConfigHandler  Issue getting Cluster Map on server server2:11210!
Couchbase.Core.Exceptions.UnambiguousTimeoutException: The GetClusterConfig operation 573243/ timed out after 00:00:02.5092405. It was retried 0 times using Couchbase.Core.Retry.BestEffortRetryStrategy. The KvTimeout is 00:00:02.5000000. ---> System.OperationCanceledException: The operation was canceled.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Threading.Tasks.Sources.ManualResetValueTaskSourceCore`1.GetResult(Int16 token)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
   --- End of inner exception stack trace ---
   at Couchbase.Utils.ThrowHelper.ThrowTimeoutException(IOperation operation, Exception innerException, TypedRedactor redactor, IErrorContext context)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.ClusterNode.<GetClusterMap>d__128.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.Configuration.Server.ConfigHandler.<PollAsync>d__13.MoveNext()
-----------------------Context Info---------------------------
{"dispatchedFrom":null,"dispatchedTo":null,"documentKey":"","clientContextId":"573243","cas":0,"status":"operationTimeout","bucketName":"ot_cibml_ppr","collectionName":null,"scopeName":null,"message":null,"opCode":"getClusterConfig","retryReasons":[]}
[otr_dev31] 2024-03-09 08:12:48,064 WARN 78 Ot.Couchbase.Core.Configuration.Server.ConfigHandler  Issue getting Cluster Map on server server3:11210!
Couchbase.Core.Exceptions.UnambiguousTimeoutException: The GetClusterConfig operation 573245/ timed out after 00:00:02.5041577. It was retried 0 times using Couchbase.Core.Retry.BestEffortRetryStrategy. The KvTimeout is 00:00:02.5000000. ---> System.OperationCanceledException: The operation was canceled.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Threading.Tasks.Sources.ManualResetValueTaskSourceCore`1.GetResult(Int16 token)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
   --- End of inner exception stack trace ---
   at Couchbase.Utils.ThrowHelper.ThrowTimeoutException(IOperation operation, Exception innerException, TypedRedactor redactor, IErrorContext context)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.ClusterNode.<GetClusterMap>d__128.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.Configuration.Server.ConfigHandler.<PollAsync>d__13.MoveNext()
-----------------------Context Info---------------------------
{"dispatchedFrom":null,"dispatchedTo":null,"documentKey":"","clientContextId":"573245","cas":0,"status":"operationTimeout","bucketName":"ot_cibml_ppr","collectionName":null,"scopeName":null,"message":null,"opCode":"getClusterConfig","retryReasons":[]}
[otr_dev31] 2024-03-09 08:12:50,567 WARN 78 Ot.Couchbase.Core.Configuration.Server.ConfigHandler  Issue getting Cluster Map on server server1:11210!
Couchbase.Core.Exceptions.UnambiguousTimeoutException: The GetClusterConfig operation 573247/ timed out after 00:00:02.5017035. It was retried 0 times using Couchbase.Core.Retry.BestEffortRetryStrategy. The KvTimeout is 00:00:02.5000000. ---> System.OperationCanceledException: The operation was canceled.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Threading.Tasks.Sources.ManualResetValueTaskSourceCore`1.GetResult(Int16 token)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
   --- End of inner exception stack trace ---
   at Couchbase.Utils.ThrowHelper.ThrowTimeoutException(IOperation operation, Exception innerException, TypedRedactor redactor, IErrorContext context)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.ClusterNode.<GetClusterMap>d__128.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.Configuration.Server.ConfigHandler.<PollAsync>d__13.MoveNext()
-----------------------Context Info---------------------------
{"dispatchedFrom":null,"dispatchedTo":null,"documentKey":"","clientContextId":"573247","cas":0,"status":"operationTimeout","bucketName":"ot_cibml_ppr","collectionName":null,"scopeName":null,"message":null,"opCode":"getClusterConfig","retryReasons":[]}
[otr_dev31] 2024-03-09 08:12:53,073 WARN 86 Ot.Couchbase.Core.Configuration.Server.ConfigHandler  Issue getting Cluster Map on server server4:11210!
Couchbase.Core.Exceptions.UnambiguousTimeoutException: The GetClusterConfig operation 573249/ timed out after 00:00:02.5044611. It was retried 0 times using Couchbase.Core.Retry.BestEffortRetryStrategy. The KvTimeout is 00:00:02.5000000. ---> System.OperationCanceledException: The operation was canceled.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Threading.Tasks.Sources.ManualResetValueTaskSourceCore`1.GetResult(Int16 token)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
   --- End of inner exception stack trace ---
   at Couchbase.Utils.ThrowHelper.ThrowTimeoutException(IOperation operation, Exception innerException, TypedRedactor redactor, IErrorContext context)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.ClusterNode.<GetClusterMap>d__128.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.Configuration.Server.ConfigHandler.<PollAsync>d__13.MoveNext()
-----------------------Context Info---------------------------
{"dispatchedFrom":null,"dispatchedTo":null,"documentKey":"","clientContextId":"573249","cas":0,"status":"operationTimeout","bucketName":"ot_cibml_ppr","collectionName":null,"scopeName":null,"message":null,"opCode":"getClusterConfig","retryReasons":[]}
[otr_dev31] 2024-03-09 08:12:58,095 WARN 3 Ot.Couchbase.Core.Configuration.Server.ConfigHandler  Issue getting Cluster Map on server server2:11210!
Couchbase.Core.Exceptions.UnambiguousTimeoutException: The GetClusterConfig operation 573251/ timed out after 00:00:02.5100230. It was retried 0 times using Couchbase.Core.Retry.BestEffortRetryStrategy. The KvTimeout is 00:00:02.5000000. ---> System.OperationCanceledException: The operation was canceled.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Threading.Tasks.Sources.ManualResetValueTaskSourceCore`1.GetResult(Int16 token)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
   --- End of inner exception stack trace ---
   at Couchbase.Utils.ThrowHelper.ThrowTimeoutException(IOperation operation, Exception innerException, TypedRedactor redactor, IErrorContext context)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.ClusterNode.<GetClusterMap>d__128.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.Configuration.Server.ConfigHandler.<PollAsync>d__13.MoveNext()
-----------------------Context Info---------------------------
{"dispatchedFrom":null,"dispatchedTo":null,"documentKey":"","clientContextId":"573251","cas":0,"status":"operationTimeout","bucketName":"ot_cibml_ppr","collectionName":null,"scopeName":null,"message":null,"opCode":"getClusterConfig","retryReasons":[]}
[otr_dev31] 2024-03-09 08:13:00,602 WARN 93 Ot.Couchbase.Core.Configuration.Server.ConfigHandler  Issue getting Cluster Map on server server3:11210!
Couchbase.Core.Exceptions.UnambiguousTimeoutException: The GetClusterConfig operation 573253/ timed out after 00:00:02.5058874. It was retried 0 times using Couchbase.Core.Retry.BestEffortRetryStrategy. The KvTimeout is 00:00:02.5000000. ---> System.OperationCanceledException: The operation was canceled.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Threading.Tasks.Sources.ManualResetValueTaskSourceCore`1.GetResult(Int16 token)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
   --- End of inner exception stack trace ---
   at Couchbase.Utils.ThrowHelper.ThrowTimeoutException(IOperation operation, Exception innerException, TypedRedactor redactor, IErrorContext context)
   at Couchbase.Core.ClusterNode.<ExecuteOp>d__136.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Couchbase.Core.ClusterNode.<GetClusterMap>d__128.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Couchbase.Core.Configuration.Server.ConfigHandler.<PollAsync>d__13.MoveNext()
-----------------------Context Info---------------------------
{"dispatchedFrom":null,"dispatchedTo":null,"documentKey":"","clientContextId":"573253","cas":0,"status":"operationTimeout","bucketName":"ot_cibml_ppr","collectionName":null,"scopeName":null,"message":null,"opCode":"getClusterConfig","retryReasons":[]}
[otr_dev31] 2024-03-09 08:13:02,408 INFO 39 Ot.Couchbase.Core.IO.Connections.DefaultConnectionPoolScaleController  Detected connection less than minimum, scaling up connection pool server4:11210
[otr_dev31] 2024-03-09 08:13:02,408 INFO 43 Ot.Couchbase.Core.IO.Connections.DefaultConnectionPoolScaleController  Detected connection less than minimum, scaling up connection pool server1:11210
[otr_dev31] 2024-03-09 08:13:02,451 INFO 53 Ot.Couchbase.Core.IO.Authentication.ScramShaMechanism  v=xTE2WakKaJGf/hh64+ljIy/ZrHk=
[otr_dev31] 2024-03-09 08:13:02,466 ERROR 39 Ot.Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener  HTTP Streaming error. Object reference not set to an instance of an object.    at Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener.<<StartBackgroundTask>b__18_0>d.MoveNext()
System.NullReferenceException: Object reference not set to an instance of an object.
at Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener.<<StartBackgroundTask>b__18_0>d.MoveNext()
[otr_dev31] 2024-03-09 08:13:02,482 ERROR 86 Ot.Couchbase.Core.ClusterNode  The Bucket [ot_cibml_ppr] could not be selected. Either it does not exist, is unavailable or the node itself does not have the Data service enabled.
[otr_dev31] 2024-03-09 08:13:02,482 INFO 86 Ot.Couchbase.Core.IO.Connections.MultiplexingConnection  Closing connection 8578023028544043582
[otr_dev31] 2024-03-09 08:13:02,482 INFO 51 Ot.Couchbase.Core.IO.Authentication.ScramShaMechanism  v=XIYVSYHY7M0K3/ZpdLdbfT6fI6A=
[otr_dev31] 2024-03-09 08:13:02,482 INFO 43 Ot.Couchbase.Core.IO.Authentication.ScramShaMechanism  v=xKSAHZRxHiauvJXtAcFljhmjjmo=
[otr_dev31] 2024-03-09 08:13:02,497 ERROR 51 Ot.Couchbase.Core.ClusterNode  The Bucket [ot_cibml_ppr] could not be selected. Either it does not exist, is unavailable or the node itself does not have the Data service enabled.
[otr_dev31] 2024-03-09 08:13:02,497 INFO 10 Ot.Couchbase.Core.IO.Authentication.ScramShaMechanism  v=DQPspULne8ScdZeAoUVTocl/5NA=
[otr_dev31] 2024-03-09 08:13:02,497 INFO 51 Ot.Couchbase.Core.IO.Connections.MultiplexingConnection  Closing connection 18067288236164032077
[otr_dev31] 2024-03-09 08:13:02,742 ERROR 87 Ot.Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener  HTTP Streaming error. Response status code does not indicate success: 404 (Object Not Found).    at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
at Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener.<<StartBackgroundTask>b__18_0>d.MoveNext()
System.Net.Http.HttpRequestException: Response status code does not indicate success: 404 (Object Not Found).
   at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
   at Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener.<<StartBackgroundTask>b__18_0>d.MoveNext()
[otr_dev31] 2024-03-09 08:13:02,758 ERROR 49 Ot.Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener  HTTP Streaming error. Response status code does not indicate success: 404 (Object Not Found).    at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
at Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener.<<StartBackgroundTask>b__18_0>d.MoveNext()
System.Net.Http.HttpRequestException: Response status code does not indicate success: 404 (Object Not Found).
   at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()
   at Couchbase.Core.Configuration.Server.Streaming.HttpStreamingConfigListener.<<StartBackgroundTask>b__18_0>d.MoveNext()