In this blog post, we’ll show how to use Couchbase Capella in conjunction with Elasticsearch and AWS services for efficient Eventing application log shipping and analytics.

Couchbase Capella is a multi-model NoSQL cloud data platform. It delivers millisecond data response at scale, with the best price-performance of any document Database-as-a-Service. The Capella Eventing Service enables user-defined business logic to be triggered in real time when data changes in Capella. Typical use cases include enriching documents, cascading deletes, propagating data changes within a database, and more.

The Elastic Stack (also known as the ELK Stack) offers a powerful centralized logging solution for creating insights, setting alerts based on KPIs, and keeping track of application behavior.

Capella Eventing functions are easy to create using the Data Tools Eventing editor in the UI. Eventing application logs allow users to identify and capture various business-logic-related activities and errors via user-defined messages specific to each Eventing function. While these application logs can be viewed in the Capella UI, centralized log management and analytics using a specialized logging solution can be helpful in many scenarios. This blog gives you step-by-step instructions on how to bring Eventing application logs into Elastic for centralized log storage and analytics.

Prerequisites

To follow along you will need:

    • A Couchbase Capella account and cluster (a free trial works for testing)
    • An Elastic Cloud deployment
    • An AWS account with permissions to create a VPC and EC2 instances

Overview of the necessary steps

    • Setting Up the Capella Cluster: Start by creating a Capella cluster with the Data and Eventing services. A quick guide can be found in the Couchbase Capella documentation.
    • Bucket/Scope/Collection Creation: Create the necessary buckets, scopes, and collections in Couchbase. This includes ‘data’, ‘eventing-logs’, and their respective scopes and collections.
    • Eventing Functions Preparation: Prepare the Eventing functions for data population and transformation. This involves creating a synthetic dataset and modifying it to produce logs.
    • Elasticsearch Cluster and Index Creation: Set up your Elasticsearch cluster and create an index for efficient data handling.
    • VPC and Private Link Setup: Create a new VPC for the Elastic connector and establish a private link from Couchbase Capella to the VPC.
    • Elasticsearch Traffic Filter: Implement traffic filtering for secure data transfer.
    • Install and Run the Couchbase Elasticsearch Connector: Install the Couchbase Elasticsearch Connector on an EC2 machine and configure it for log shipping.
    • Data Visualization and Analysis: Use the resulting data source to visualize and analyze logs, exploring the insights they provide.

Detailed Configuration

Couchbase Environment

We need to create two buckets, each with its own scopes and collections. In the first we will create everything we need to manage the data and the Eventing functions; the second is dedicated to the logs generated by the Eventing functions.

The foundations of this simple activity can be found in the Couchbase Capella documentation, which illustrates the principles and the actions involved. Note that bucket creation involves configuring several settings, the most prominent of which is the memory quota. For testing purposes you can choose 100 MB; for anything else, an accurate sizing evaluation will be necessary.

Create two buckets with the following structure:

    • Name: “data”
      • Scope Name: “eventing”
        • Collection Name: “meta”
          Storage for Eventing function checkpoints
      • Scope Name: “inventory”
        • Collection Name: “data”
          Storage for the actual data
    • Name: “eventing-logs”
      • Scope Name: “eventing”
        • Collection Name: “logs”
          Storage for logs. Optionally, we can set a TTL to reduce the amount of storage used.
      • Scope Name: “elastic-checkpoints”
        • Collection Name: “connector”
          Storage for Couchbase Elasticsearch Connector checkpoints
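
If you prefer to script the scope and collection setup once the buckets exist, a Couchbase SDK can do it. Below is a minimal sketch using the Node.js SDK (assuming SDK 4.x; the endpoint and credentials are placeholders), including the optional TTL on the logs collection:

```javascript
const couchbase = require('couchbase');

async function main() {
  // Placeholder endpoint and credentials: use your Capella values.
  const cluster = await couchbase.connect(
    'couchbases://<your-endpoint>.cloud.couchbase.com',
    { username: '<db-user>', password: '<db-password>' }
  );

  // Scopes and collections for the 'data' bucket.
  const dataMgr = cluster.bucket('data').collections();
  await dataMgr.createScope('eventing');
  await dataMgr.createCollection('meta', 'eventing');
  await dataMgr.createScope('inventory');
  await dataMgr.createCollection('data', 'inventory');

  // Scopes and collections for the 'eventing-logs' bucket.
  const logsMgr = cluster.bucket('eventing-logs').collections();
  await logsMgr.createScope('eventing');
  // maxExpiry (in seconds) is the optional TTL mentioned above; 7 days here.
  await logsMgr.createCollection('logs', 'eventing', { maxExpiry: 604800 });
  await logsMgr.createScope('elastic-checkpoints');
  await logsMgr.createCollection('connector', 'elastic-checkpoints');

  await cluster.close();
}

main().catch(console.error);
```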

Data population

We will now create two Eventing functions. If this is your first time creating an Eventing function, you can familiarize yourself with the environment by following this guide.

The purpose of these functions is to:

    • First Function: creates the data we will work on, a dataset representing a product catalog.

      This function creates simple JSON documents, introducing some variability across 100,000 iterations. While this is not a realistic product catalog, it provides a good basis for experimenting with Eventing functions that need to log how they are performing. Note that the function is triggered the first time by creating an empty document.
    • Second Function: the actual Eventing function with some business logic we would like to observe through the ELK Stack.

While this Eventing function runs unbounded across the whole new dataset, it substitutes the category property whenever it matches the value “Category 3”, changing it to “jeans”. In several places the function conveniently calls a helper named logStuff: its purpose is to save a log line in a dedicated collection that will later be replicated to Elasticsearch and indexed.

Data population function

Create an Eventing function with the following configuration:

And the following bucket binding:

Here is the function:
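
Since the original function depends on your exact setup, here is a minimal sketch. It assumes the function listens on data.inventory.data, stores its checkpoints in data.eventing.meta, and has a read/write bucket binding aliased as inv pointing at data.inventory.data; the product fields (category, price, and so on) are purely illustrative:

```javascript
function OnUpdate(doc, meta) {
    // Only the trigger document with ID 'start' kicks off the generation.
    if (meta.id !== 'start') return;

    var categories = ['Category 1', 'Category 2', 'Category 3', 'Category 4'];
    for (var i = 0; i < 100000; i++) {
        // 'inv' is the read/write binding alias for data.inventory.data.
        inv['product::' + i] = {
            type: 'product',
            name: 'Product ' + i,
            category: categories[i % categories.length],
            price: Math.round(Math.random() * 10000) / 100
        };
    }
    log('Dataset generation completed');
}
```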

The moment the function sees a document in data.inventory.data with the ID start, it will generate 100,000 random documents.

Note: to create this document you just need to go to the Data Tools tab, select Documents, and then click the Create New Document button (remember to select the right collection: data → inventory → data).

Data Transformation function

This second function demonstrates how to persist logs in intermediate storage for later use by the ELK Stack. While performing its business logic on the data.inventory.data collection, it saves application logs to eventing-logs.eventing.logs so that they can later be exported to Elasticsearch.

The configuration:

The bindings:

The function:
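
Again as a sketch, assuming a read/write binding aliased inv for data.inventory.data and a binding aliased logs for eventing-logs.eventing.logs (both alias names are assumptions), the transformation function and its logStuff helper could look like this:

```javascript
function OnUpdate(doc, meta) {
    if (doc.type !== 'product') return;

    try {
        if (doc.category === 'Category 3') {
            doc.category = 'jeans';
            // 'inv' is the read/write binding alias for data.inventory.data.
            inv[meta.id] = doc;
            logStuff('info', 'Relabeled category for document ' + meta.id);
        }
    } catch (e) {
        logStuff('error', 'Failed to process ' + meta.id + ': ' + JSON.stringify(e));
    }
}

// Persists one ECS-style log line to eventing-logs.eventing.logs through the
// 'logs' binding alias. The exact ECS fields used here are illustrative.
function logStuff(level, message) {
    var ts = new Date().toISOString();
    var entry = {
        '@timestamp': ts,
        'log': { 'level': level },
        'message': message,
        'service': { 'name': 'data-transformation' }
    };
    // A timestamp plus a random number keeps keys collision-free; crc64()
    // (an Eventing built-in) hashes the pair to keep the keys short.
    var key = 'log::' + crc64(ts + Math.random());
    logs[key] = entry;
}
```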

The function is responsible for modifying a certain category of data by transforming its label. In doing so, we keep track of how the function is performing through logs that will later be analyzed with the ELK Stack.

The logStuff function creates a JSON document in an ECS-compliant format and persists it in a separate bucket.

Once the eventing function is created, remember to deploy it. 

Once the dataset has been created and the transformation function executed, we should have a populated bucket of logs ready to be shipped to Elasticsearch.

Note: in order to avoid collisions, log lines are keyed with timestamps and random numbers, which are then hashed to save space.

Elastic Cloud configuration

Before we start synchronizing the data, we need to create an index in Elasticsearch. Make sure to modify the Elasticsearch endpoint URL and to create an API key with administrative rights.

Here is a CURL example:
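
Below is a sketch of the request; the deployment URL and API key are placeholders, and the field mappings should match whatever your logStuff function emits:

```bash
# Placeholder endpoint and API key: substitute your deployment's values.
curl -X PUT "https://<your-deployment>.es.us-east-1.aws.found.io:443/eventing-logs" \
  -H "Authorization: ApiKey <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "log":        { "properties": { "level": { "type": "keyword" } } },
        "message":    { "type": "text" },
        "service":    { "properties": { "name": { "type": "keyword" } } }
      }
    }
  }'
```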

The index mapping declares the data types of the log fields so that they can be searched efficiently.

Elasticsearch Connector

Network configuration

We need to create a VPC with an EC2 machine that has a public IP address and will host the Elasticsearch connector. A regular EC2 instance with 2-4 vCPUs will do the job for this test (we used a t2.medium). Of course, if this is going to be a production environment, evaluating the sizing of this machine will be an important step in the deployment.
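
For reference, here is a sketch of launching such an instance with the AWS CLI (every ID and name below is a placeholder):

```bash
# Placeholder IDs: substitute your own AMI, key pair, and subnet.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.medium \
  --key-name my-key-pair \
  --subnet-id subnet-0123456789abcdef0 \
  --associate-public-ip-address \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=cbes-connector}]'
```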

The next step is to secure the connectivity between the newly created VPC and:

    • Couchbase Capella, and
    • Elastic Cloud

The most practical way (for testing purposes only!) is through public connectivity. In the Capella cluster settings we need to whitelist the public IP of the EC2 instance so that the Elasticsearch Connector can establish a connection to the database. For Elastic Cloud, instead, we need to create an API key.

Alternatively, you can go through private links; extensive documentation can be found here:

Couchbase Elasticsearch Connector

After provisioning the EC2 instance, we need to download the executables from the Couchbase website. Before that, please have a look at the requirements here, and download the package here.
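
The installation itself boils down to unpacking the archive and pointing an environment variable at it. A sketch (the URL pattern and <version> are placeholders; use the actual link from the download page):

```bash
# <version> is a placeholder: check the download page for the current release.
wget https://packages.couchbase.com/clients/connectors/elasticsearch/<version>/couchbase-elasticsearch-connector-<version>.zip
unzip couchbase-elasticsearch-connector-<version>.zip
export CBES_HOME=$PWD/couchbase-elasticsearch-connector-<version>
```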

We now need to configure it by modifying the $CBES_HOME/config/default-connector.toml file.

Here are a few sections that will resonate with you (some sections have been removed for brevity):
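
The sketch below follows the connector 4.x configuration format; endpoints and credentials are placeholders, and the exact key names should be verified against the documentation for your connector version:

```toml
[couchbase]
  hosts = ['<your-endpoint>.cloud.couchbase.com']
  network = 'external'
  username = '<connector-db-user>'
  pathToPassword = 'secrets/couchbase-password.toml'
  secureConnection = true
  bucket = 'eventing-logs'
  # Stream only the logs collection.
  collections = ['eventing.logs']
  # Connector checkpoints go in the collection we created for them.
  metadataBucket = 'eventing-logs'
  metadataCollection = 'elastic-checkpoints.connector'

[elasticsearch]
  hosts = ['https://<your-deployment>.es.us-east-1.aws.found.io:443']
  username = '<elastic-user>'
  pathToPassword = 'secrets/elasticsearch-password.toml'
  secureConnection = true

[[elasticsearch.type]]
  # Route every log document to the index created earlier.
  regex = '.*'
  index = 'eventing-logs'
  ignoreDeletes = true
```

Once configured, the connector can be started with the launcher script shipped in the distribution:

```bash
$CBES_HOME/bin/cbes
```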

Best Practices and Tips

Here are a few things that might be useful to keep in mind:

    • Couchbase Elasticsearch Connector – Make sure that you get your type definitions right ([[elasticsearch.type]]). The connector won’t feed data unless this is done properly.
    • Couchbase Capella – Don’t forget to create the database credentials for the connector!
    • Elasticsearch
      • Once an index is created, you can create a data source from which you will be able to build your insights.
      • API keys can be created for administrative purposes (e.g., creating indexes) or for applications (e.g., sending data through the connector). Make sure that you are using the right one.
    • Data Retention
      • Saving logs can consume a lot of disk space. Avoid this by:
        • Excluding delete notifications from the Elasticsearch Connector with the ignoreDeletes = true flag
        • Setting a TTL on the collection that contains your logs, so that Couchbase Capella will automatically clean up logs that are too old

Conclusion

Through this tutorial we have seen how to implement observability for Eventing functions deployed in Capella. As a next step, you can sign up for a Capella trial and experiment for yourself with the many features provided by the most powerful NoSQL database out there. To help you get oriented, you can also use our playground and Capella iQ, our generative AI assistant, to learn how to develop and experiment in a safe environment.

Author

Posted by Davide Pellegatta, Senior Solutions Engineer

Posted by Ravi Kesarwani, Principal Product Manager
