Recently my wife and I have been watching reruns of some TV programs featuring Jeremy Clarkson (Top Gear, The Grand Tour, Clarkson’s Farm). Despite the buffoonery he is quite a clever chap, although he will likely be remembered for thinking “Speed and Power” is the answer to everything.

Couchbase offers speed and power in abundance through its architecture. Most relational databases are monolithic in design, meaning they can only be scaled vertically, not horizontally. Unlike most NoSQL databases, each service that makes up Couchbase’s data platform can be placed on different nodes, meaning each service can be scaled horizontally as well as vertically. This allows you to put the power down exactly where it’s needed and attain more speed.

Despite the entertainment value of Clarkson, speed and power isn’t always the answer, especially if the problem is at the design level. Power through hardware can help hide the issue but it’s not going away and is wasteful. Speed, however, is often essential in this day and age where slow has been called “the new down”.

In a recent engagement we found some developers using a pattern in their Google Cloud Functions that could be improved with one simple tip. In this blog post, we’ll share how we fine-tuned the engine (function code) and catapulted performance down the last straight, over the line, and onto the next lap.

Keeping with the TV theme, let’s pretend that you want to watch your favorite TV series on a streaming service. The process would look something like this:

      • Turn on TV/Device
      • Open App
      • Log in to service
      • Choose program
      • Watch Program
      • Choose Program
      • Watch Program
      • Etc. etc. etc.

How annoying would it be if the service provider made you log in each time you wanted to watch another program? Probably enough to move to another service, right?!?

Now let’s say this is a call to your database from a cloud function.

Turning on the TV and opening the App would be writing and deploying your functions. We would only do these steps once and so will put these to one side and continue with the rest of the list.

In the image on the left you can see a repeated need to log in, whilst on the right the login process happens only once and that login data is reused for the length of the session. As you would expect, it takes less time to process a request for information if the login details have already been checked and do not have to be sent through each time.

This is where not testing at scale really becomes an issue. For argument’s sake, let’s say the authentication takes 1 ms (your mileage will vary depending on the type of authentication, the distance between client and database, and a whole host of other things). Now 1 ms might not sound like a lot, and looking at just one run you could say it isn’t an issue. But what if, instead of just one request, there were thousands of requests per second using this function? Suddenly thousands of milliseconds are being wasted every second. When you’re being charged for how long these functions take, that will cost you.
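To put rough numbers on it, here is a back-of-the-envelope sketch. The 1 ms overhead comes from the argument above; the request rate is an illustrative assumption:

```python
# Back-of-the-envelope cost of re-authenticating on every invocation.
# The 1 ms figure is from the argument above; the 5,000 req/s load is
# an illustrative assumption.
auth_overhead_ms = 1.0          # authentication cost per request
requests_per_second = 5_000     # sustained load across all instances

wasted_ms_per_second = auth_overhead_ms * requests_per_second
wasted_seconds_per_day = wasted_ms_per_second / 1000 * 86_400

print(f"{wasted_ms_per_second:.0f} ms of auth overhead per second of load")
print(f"{wasted_seconds_per_day:.0f} s of billable time per day "
      f"(~{wasted_seconds_per_day / 3600:.0f} hours)")
```

Note that the daily total can exceed 24 hours because requests run concurrently across many function instances, and you are billed for each instance’s time.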

Top Tips for a Hot Lap Time

Follow these handy hints for a connection between your Google Cloud Functions and Capella that will reduce round trip times and get pulses racing:

    1. Go Global: When setting up your Cloud Function, opt for a globally-scoped database connection. This gives your function a perfect start to access your Capella cluster from anywhere in the grid.
    2. Spend Less Time in the Pits: Resist the urge to create a new Couchbase Cluster object every time your function gets called. That’s like doing a pit stop on every lap – a tad excessive, wouldn’t you say? Instead, create the Cluster object once, then store it for your function to reuse.
    3. Tune That Idle Time: Set maxIdleTimeMS to 60000 (that’s one minute for us mere mortals with only seconds on a stop watch). This automatically closes connections after they’ve been idle for a minute, preventing timeouts and keeping your serverless functions sprightly.
    4. Concurrency is Key: When creating your Google Cloud function: 
      • Embrace the 2nd Gen: It’s like upgrading from a Ford Model T to an F1 car – it handles multiple concurrent requests and reduces connection load by letting your function share a single Cluster object.
      • Crank Up the Concurrency: Increase the concurrency setting to minimize those dreaded cold starts and give your latency a turbo boost. Think of the cold start as a standing start in a race compared to a rolling start where the participant in the race is up to speed. In a cold/standing start the participant has to expend energy getting up to speed and then complete the activity.

Example: Python Cloud Function Connecting to Capella

There you have it! A Python-flavored example of how to optimize your Google Cloud Functions with Couchbase Capella.

Go on, give it a go, how hard could it be?



Posted by Richard Douglas - Solutions Engineer
