In my previous blog, we discussed setting up a Couchbase cluster locally on your own hardware. The beauty of this approach is that it significantly reduces both the TCO of your testing infrastructure and the time needed to acquire new test infrastructure, which is especially important if you're just starting your Couchbase journey.

To bring everyone to the same starting point, here’s a link to my previous blog, where we ran through how to quickly spin up a development cluster locally using the Couchbase Autonomous Operator for Kubernetes.

So where could we possibly go from here? The natural next step is to establish an easily deployed environment that mimics your application deployment. This is another important step in the right direction, allowing you to both develop and test your deployments locally with minimal effort.

Even Simpler Deployment

We have simplified the deployment method discussed in my previous blog. Of course, it is still important to understand the components of the Couchbase Operator we will deploy, so if you haven't already, please refer to my previous blog.

The new method of deployment is contained within the operator-gitops repo maintained by Couchbase (and, more specifically, the fantastic Patrick Stephens). The only thing you now need to do to have a Couchbase cluster deployed locally is run the create-cluster.sh script. Once this is complete, you can run the create-dev.sh script. Just like that, you'll have everything you need (infrastructure-wise) to test your Couchbase application features.
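
As a rough sketch, assuming you have cloned the operator-gitops repo and are running the scripts from its root (check the repo for their exact location), the whole flow is just:

    # Spin up a local KIND cluster running the Couchbase Autonomous Operator
    ./create-cluster.sh

    # Build and deploy the development container alongside it
    ./create-dev.sh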

Development Deployment Breakdown

To keep this a practical technical guide, the following sections break down exactly what the create-dev.sh script does.

Dockerfile
When I was a developer, I had to maintain my own development and test environments. Maintaining them was a nightmare, as there wasn't much in the way of automation, and new releases naturally introduced breaking code changes. Thankfully, this has all changed. This Dockerfile from the operator-gitops repo shows a typical example of automating a Node.js SDK environment, and this post walks through it so that you can replicate one in your own environment.

We'll go through this line by line (a sketch of the complete Dockerfile follows the list):

  1. FROM node:16.9.1 – this is the base image provided by Node.js so that we don’t have to worry about installing it (amongst other things).
  2. RUN apt-get … – this is a series of commands to install tools for testing connectivity between your development environment and Couchbase deployments.
  3. WORKDIR /usr/app – here, we're setting a directory which will always be used by the commands that follow in the Dockerfile. It also sets an entry path so that you'll land there when you shell into the container.
  4. RUN npm init -y && npm install couchbase --save – this will install the Couchbase SDK as per the instructions in our docs.
  5. CMD ["sleep", "3600"] – this is a hack to keep the container alive so we can attach.
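
Put together, the Dockerfile looks roughly like the sketch below. The package list on the apt-get line is an assumption on my part (the operator-gitops repo is the canonical source); the rest follows the steps above.

    # Base image provided by Node.js
    FROM node:16.9.1

    # Tools for testing connectivity to the Couchbase deployment
    # (illustrative package list; see the repo for the real one)
    RUN apt-get update && apt-get install -y curl dnsutils netcat

    # Working directory used by the commands below and when shelling in
    WORKDIR /usr/app

    # Install the Couchbase Node.js SDK
    RUN npm init -y && npm install couchbase --save

    # Keep the container alive so we can attach to it
    CMD ["sleep", "3600"]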

The same concepts apply to the other Couchbase SDKs: find a suitable base image, install the tools, set a working directory, install the Couchbase SDK, and keep the container running so you can attach and test your code.

The first thing our automated dev environment deployment does is build the Docker image specified in the Dockerfile and preload that image into KIND (Kubernetes in Docker).
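
The commands involved look something like the following; the image name and tag are assumptions for illustration, and create-dev.sh has the exact values.

    # Build the development image from the Dockerfile above
    docker build -t couchbase-dev:latest .

    # Preload the image into the KIND cluster so Kubernetes can run it
    kind load docker-image couchbase-dev:latest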

Development Kubernetes Deployment

Our newly created container image can now be used in a Kubernetes Deployment, where the container is deployed and scaled by the system. Should our container's pod fail, the Kubernetes Deployment controller will notice and fire up a new instance.

We have provided a nice and simple deployment definition within the create-dev.sh script that will use our Docker image and attach some simple labels to manage the deployment with kubectl.
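
A minimal sketch of such a deployment definition is shown below. The names and labels are assumptions for illustration; the real definition lives in create-dev.sh.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: couchbase-dev
      labels:
        app: couchbase-dev
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: couchbase-dev
      template:
        metadata:
          labels:
            app: couchbase-dev
        spec:
          containers:
            - name: couchbase-dev
              image: couchbase-dev:latest
              # Never pull: use the image preloaded into KIND
              imagePullPolicy: Never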

Deployment Rollout

Without being rolled out, the deployment definition remains just a definition. The next step is to roll it out and capture the pod name so that we can push our code into the pod.
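
Sketched out with the illustrative names from the deployment above (the script may apply the definition inline rather than from a file), that step looks something like:

    # Apply the deployment and wait for the rollout to complete
    kubectl apply -f dev-deployment.yaml
    kubectl rollout status deployment/couchbase-dev

    # Capture and print the pod name for later use
    DEV_POD=$(kubectl get pods -l app=couchbase-dev -o jsonpath='{.items[0].metadata.name}')
    echo "Dev Pod Name: ${DEV_POD}"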

Running Code in the Container

At this stage, we have successfully run the create-dev.sh script. The task now is to get our code into the container so we can test it. A few commands will help us copy the code in and then open a shell.

This is just an example, but the concepts remain the same. Here, we take our directory of Couchbase SDK code and copy it into the newly created development pod (remember to grab the echoed output of "Dev Pod Name: X").
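
For instance, assuming your SDK code sits in a local my-sdk-code directory and the pod name was captured as above:

    # Copy the local code directory into the pod's working directory
    kubectl cp ./my-sdk-code "${DEV_POD}":/usr/app/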

The next command takes us to a shell within the container. You should see the path change in the prompt. From there, you can run the code you copied across with the kubectl cp command.
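
A minimal sketch of that step, again assuming the pod name and directory from above:

    # Open an interactive shell in the development pod (lands in /usr/app)
    kubectl exec -it "${DEV_POD}" -- /bin/bash

    # Then, inside the container, run the copied code (entry file name is illustrative)
    node my-sdk-code/index.js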

Tasty Sample Code

We have provided a small sample script that follows the same patterns as our SDK docs. The SDK docs do a fantastic job of explaining the Couchbase-isms of the building blocks of a Couchbase application.

Please note this documentation, which helps you determine the correct endpoint to connect to. The key is to take the cluster name defined in the cluster deployment and append -srv to it. To quote the docs: "The service name is in the form <cluster-name>-srv."

The few configuration options at the top of the script should remain the same for a default Helm deployment. Check that the connection credentials used in the script match those in the output of the command: helm get all couchbase
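
As a rough, self-contained sketch of such a script: the connection string, bucket, and credentials below are assumptions, so substitute the values from your own deployment and the helm output.

    // Minimal Node.js SDK example following the patterns in the SDK docs
    const couchbase = require('couchbase');

    // Assumed defaults: adjust to match your deployment
    const connectionString = 'couchbase://cb-example-srv'; // <cluster-name>-srv
    const username = 'Administrator';
    const password = 'password';
    const bucketName = 'default';

    async function main() {
      // Connect to the cluster via its Kubernetes DNS service record
      const cluster = await couchbase.connect(connectionString, { username, password });

      // Open the bucket and its default collection
      const bucket = cluster.bucket(bucketName);
      const collection = bucket.defaultCollection();

      // Write and read back a document to prove connectivity
      await collection.upsert('hello', { greeting: 'Hello from the dev pod!' });
      const result = await collection.get('hello');
      console.log(result.content);
    }

    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });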

In any case, you should check that the configuration options meet your expectations. Remember, the code doesn't lie: if it can't connect, it probably hasn't been told how to connect correctly. If you make changes that depart from the defaults, remember to update your cluster name, bucket, and authentication credentials to match.

Conclusion

You should come away from this blog with a fully configured development environment that lets you test your Couchbase SDK code against a real-life Couchbase cluster, no mocks! I hope the automation gives you more time to write code instead of managing development environments, as I had to during my time as a developer.

Here are the direct links to the resources used in this post:

Docs: Finding the cluster endpoint for Kubernetes DNS

Author

Posted by Sam Redman, Solutions Engineer

Sam Redman is a Solutions Engineer at Couchbase. Sam previously worked in development and SRE environments before joining Couchbase.
