Couchbase Cache bootstrap error


I have implemented Couchbase caching in my application and all was working fine until I deployed to the production environment. Now that the application is under load, the website crashes with the following error:

“Only one usage of each socket address (protocol/network address/port) is normally permitted”

After some investigation I have found that my resources are being used up when creating caches.

I believe it is because I open a cluster and bucket for every connection to the cache, thus using up resources.

Can anyone point me to an article that shows how to open a cluster once in a web service and keep it open for a duration, so that multiple calls to the service can use the same cluster even if the service itself shuts down?

Thanks in advance

In all of our SDKs, the examples we show and the section on “Managing Connections” discuss reusing bucket objects to be efficient, retain connections, etc. What platform are you using? Are you using one of the Couchbase SDKs?

Hi, thanks for the reply. I am using the SDK.
My problem is that I have implemented Couchbase in a WCF service to handle saving and retrieving cached data.
The service opens a cluster and a bucket for every call to it.
This service is called from two different applications and can be called many times during a web session (a get-cache method is called on every page hit to validate user status).
In dev and test I never had a problem because only a limited number of users accessed the cache service. However, when I went live yesterday the load went up massively, causing the socket error and a “bootstrap” error.
My investigation has led me to believe that every service call is trying to open the cluster and cache before accessing the data. I assume this is causing a bottleneck.
What I think I need to do is keep the cluster and bucket open all the time (or at least for a certain time period) and then access the cache through them.
Is this possible, and if so can you point me to example documentation of how this is implemented?


I’m not familiar with WCF in particular, but a quick search seems to indicate you’ll want to manage that Bucket object as a Service reference in the app, which makes it a singleton. @matthew.groves or @jmorris or @MikeGoldsmith can probably fill this out in more detail.

We include a singleton helper as well for some of the .NET environments, and that’s covered in the documentation on managing connections. The ClusterHelper may apply as well in this case.

@timsfcuk -

It’s been a long time since I did any WCF programming, but there appear to be a couple of solutions, in conjunction with what @ingenthr said, that might work:

  • Use AppInitialize
  • Use Global.asax (Application_Start/Application_End)
  • Use a WCF event if self-hosting

The key here is that the Cluster initialization happens when the app starts and the buckets are reused. Ideally, you would close/dispose of the cluster/buckets when the app shuts down as well.
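For reference, a minimal sketch of that wiring (assuming the Couchbase .NET SDK 2.x; the host name here is a placeholder, not the poster’s actual server):

```csharp
// Global.asax.cs -- a sketch only, assuming the Couchbase .NET SDK 2.x.
using System;
using System.Collections.Generic;
using Couchbase;
using Couchbase.Configuration.Client;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Open the cluster connection once, for the lifetime of the app domain.
        ClusterHelper.Initialize(new ClientConfiguration
        {
            Servers = new List<Uri> { new Uri("http://cache-host:8091/pools") }
        });
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Dispose the cluster (and any cached buckets) on shutdown,
        // releasing the underlying sockets.
        ClusterHelper.Close();
    }
}
```

Because the ClusterHelper is static, every WCF call in the same app domain then reuses the one cluster instead of bootstrapping its own.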


Thanks for your input.
Got it working by calling ClusterHelper.Initialize in Application_Start in the Global.asax and calling ClusterHelper.Close in Application_End.

@ingenthr, @jmorris thanks for pointing out this method. I missed this in the documentation.



Hi all

I have redeployed my caching solution to the production environment using the tips suggested above. All was working fine for a while, but as load increased a number of errors started logging to the database.
The error message is:

Only one usage of each socket address (protocol/network address/port) is normally permitted

My environment is 2 production windows servers with a load balancer over the top. I have a default bucket and a single cluster configured on one couchbase server that is referenced from both production servers.

Does anyone have any ideas what this error could be and how I can resolve it, please?

Many thanks in advance



It is my understanding that a web application can be started many times. The application can be closed down whenever IIS thinks that it’s not used, and when the next request comes in the application will start again.

Is this therefore a flaw in the way clusters are opened as part of any web request call? I can think of a dirty workaround to create a windows service that opens the cluster on start and the web service contacts that.

Just for some statistics, if we fire off 10 requests on multiple threads within a few seconds we get the error above.

A Windows service may do the job, but it doesn’t seem the best idea for a clean service-layer architecture. @jmorris, @ingenthr, @matthew.groves, @MikeGoldsmith, have any of you encountered this problem before? Is there a better solution?

It sounds like you may be running out of ephemeral ports - there are two means of fixing this by tuning your OS:

  • Increase the ephemeral port range using the MaxUserPort registry setting

  • Decrease the time it takes the OS to close connections in TIME_WAIT using the TcpTimedWaitDelay registry setting.
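Both of those settings live under `HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters`. A sketch of the change from an elevated prompt (the values shown are examples, not recommendations for this specific workload, and a reboot is required for them to take effect):

```shell
:: Widen the ephemeral port range (the upper bound defaults to 5000 on older Windows)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f

:: Shorten how long closed connections linger in TIME_WAIT (default is 240 seconds)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
```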

That being said, running out of ephemeral ports is an effect; identifying the cause should also help. I would be concerned about how often the Cluster or Bucket objects are being created; for example, is WCF calling Application_Start and Application_End for every request? That would cause a degradation in performance and, without the proper OS settings, would cause the ephemeral port limit to be reached.

You should be able to figure out how many times the Cluster and Bucket are being created by simply enabling logging on the SDK and grepping the log files.

This is the reason for using Application_Start/Application_End to support this behavior. The expectation is that the app is shut down far fewer times than requests come in; it’s not per web request, but the natural life cycle of the app.


Happy new year and thank you for taking the time to reply over the holidays.

Today I have done some more extensive testing, with logging when the cluster is opened and closed. I can confirm that the cluster is only opened once, but we receive the following error at approximately 50 consecutive requests. Note it takes around 40 seconds before it starts to fail.

Only one usage of each socket address (protocol/network address/port) is normally permitted

The cluster is closed when the application pool or website is stopped, as expected. Can we therefore assume that our issue is not with connecting to the cluster but with the bucket, and that maybe we’re not recycling the buckets fast enough? Our config entry is as below; is the lifespan anything to be concerned about?

<couchbase useSsl="false" operationLifespan="1000">
  <add uri="http://omitted:8091/pools/default" />
  <add name="default" useSsl="false" password="omitted" operationLifespan="2000">
    <connectionPool name="custom" maxSize="500" minSize="5" sendTimeout="12000" />
  </add>
</couchbase>

Thanks in advance for your help, it has been invaluable thus far.


To confirm, the following method is throwing the exception above.

My code was using:

var cluster = ClusterHelper.Get();
IBucket bucket = cluster.OpenBucket();
// Code


Changed the code to:

IBucket bucket = ClusterHelper.GetBucket("default");
// Code

And it is working much, much better than before. So far no errors: I fired off 1,250 requests and they took 28 seconds to come back, all valid. I’m just running a test that fires a request every few seconds for a couple of hours to see how we go, but it looks like we’re all good now! I think my understanding of the OpenBucket method was wrong; I thought it was implemented like GetBucket, so opening and closing a bucket each time per method was not correct.

@jmorris, @ingenthr


The big difference is that the ClusterHelper.GetBucket("default") method lazy-loads and caches the bucket instance, while OpenBucket does not.
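Side by side, a sketch of the two patterns (assuming the Couchbase .NET SDK 2.x; the class and method names are illustrative):

```csharp
// Sketch of the two access patterns, assuming the Couchbase .NET SDK 2.x.
using Couchbase;
using Couchbase.Core;

public static class CacheAccess
{
    // OpenBucket constructs a new bucket object (with its own connections)
    // on every call -- the pattern that was exhausting ephemeral ports:
    public static IBucket OpenEachTime() =>
        ClusterHelper.Get().OpenBucket("default");

    // GetBucket lazy-loads the bucket once and returns the same cached
    // instance on later calls, so the connections are reused:
    public static IBucket Reused() =>
        ClusterHelper.GetBucket("default");
}
```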