Service not configured

Hi All,
I have a problem with this error: “A request has been made for a service that is not configured or supported by the cluster. Please check the cluster and enable or add a new node with the requested service: Data.”

I’ve read several posts about this, some of them very similar to my scenario. The client’s cluster runs on Kubernetes, and I can confirm that the nodes are properly configured and alive. I’ve tried several configurations for the servers: the k8s internal names, the external names, and also the k8s service name.

The problem is not always present, but in the last few days it has become very frequent, so the system no longer works.
Our software works fine on our own cluster, which is composed of just one node. We use SDK 2.7.26.

Thanks,
Regards

@Dario_Mazza -

Is this intermittent? Can you supply logs?

Jeff

Hi Jeff,
I’ve tried to configure NLog following exactly the instructions at Collecting Information and Logging in the .NET SDK with Couchbase Server | Couchbase Docs, but the log file written by the application does not contain any useful information from the Couchbase SDK.
Do you mean some other log, or is there another procedure to enable it?

Thanks for your kind support,
Dario

@Dario_Mazza -

It definitely should work. Are you using .NET Core or the full framework?

Jeff

.NET Core 3.1.
In the Couchbase config, is it enough to assign the logger factory? No other settings?

Thanks
Regards
Dario

@Dario_Mazza

Linking to the ILoggerFactory wires up Couchbase logging to your logging infrastructure of choice. However, you may still need to configure your logging infrastructure to emit the logs from the “Couchbase” namespace at the desired log level. How to do this will vary depending on your logging provider.
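For example, here is a minimal sketch of that wiring with NLog and Microsoft.Extensions.Logging on .NET Core 3.1 (the class and method names are placeholders, and how the resulting factory is handed to the 2.x client configuration should follow the logging docs linked earlier):

```csharp
using Microsoft.Extensions.Logging;
using NLog.Extensions.Logging;   // NLog provider for Microsoft.Extensions.Logging

public static class CouchbaseLogging
{
    public static ILoggerFactory CreateLoggerFactory()
    {
        // Emit the "Couchbase" category at Debug while keeping the rest of the
        // application at Information; NLog targets and rules come from nlog.config.
        return LoggerFactory.Create(builder =>
            builder
                .SetMinimumLevel(LogLevel.Information)
                .AddFilter("Couchbase", LogLevel.Debug)
                .AddNLog());
    }
}
```

The important part is the category filter: without raising the level for the “Couchbase” namespace, the SDK entries can be filtered out even though the factory itself is wired up correctly.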

If you’re using the default logging provider in .NET Core 3.1, here is some information on configuring it: Logging in .NET | Microsoft Docs
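With the default provider, the same category override can go in appsettings.json (a sketch; only the relevant section is shown):

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Couchbase": "Debug"
    }
  }
}
```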

Hi Brant,
ok, I’ve produced the logs. I’ve sent an email to your address with a WeTransfer link containing the SDK logs and the sdk-doctor output from my PC. The SDK log was produced by running the application inside Docker, using the external node addresses.

Just some clarification about the scenario: the error does not always appear, but once it does, the client no longer works until the pod is rebooted. It appears when running our microservices inside the client’s k8s cluster (the same cluster where Couchbase Server is running) as well as from my Docker Desktop. In both cases I use the external node host names. Note that I’ve also tried using the k8s service name, but nothing changes.

Let me know if anything interesting shows up in the logs.

Thanks for your help,
Regards

Dario

@Dario_Mazza

Do these logs include an example where the failure occurred? If so, can you point me to a line number?

One thought I have is that this sounds very similar to problems I’ve encountered in our production environment. We typically experience this problem following a cluster rebalance: the SDK will sometimes (but not always) get itself into an erroneous state that requires recycling the pods. I believe it is a corner-case bug in the 2.x SDK, but I’ve never been able to track it down. I had, frankly, given up on it and was hoping the 3.x SDK would resolve it (3.x contains a virtually complete rewrite).

https://issues.couchbase.com/browse/NCBC-2395

Does this sound like it may be your problem, too? Do you see any correlation between cluster rebalances and these outages? If so, we can discuss how we’ve configured our systems to mitigate the problem for now.

Hi,
the line where the error appears is the following one:

You may also see the service-unavailable error before our application exception.

I’ve also sent you a WeTransfer link with a printout of the server log for our bucket.
Thanks,
Regards

Dario

Hi All
Just to confirm that migrating to SDK 3.2 solved the issue.
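For reference, the bootstrap after the migration looks roughly like this (host, credentials and bucket name below are placeholders for our real values):

```csharp
using System.Threading.Tasks;
using Couchbase;
using Couchbase.KeyValue;

public static class CouchbaseBootstrap
{
    public static async Task<ICouchbaseCollection> ConnectAsync()
    {
        // SDK 3.x bootstrap: one cluster object for the lifetime of the application.
        var cluster = await Cluster.ConnectAsync(
            "couchbase://my-cluster-host", "username", "password");

        var bucket = await cluster.BucketAsync("my-bucket");

        // The default collection is enough for code migrated from 2.x,
        // which predates scopes and collections.
        return bucket.DefaultCollection();
    }
}
```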

Thanks
Regards

Dario
