
How to Clone a Couchbase Cluster for CI/CD On-Demand Ephemeral Environments

Continuous Integration and Continuous Deployment are now common software development practices. In the world of databases, this translates into needing on-demand, stateful, ephemeral environments.  

Provisioning a stateless environment is not tied to any particular source of data. All that is needed is to run the code you want to test in your CI environment. This is the basis of most CI/CD tools and won’t be covered in this article. 

The slightly harder part comes from the dependencies the application needs in order to be tested properly, often referred to as external services, Couchbase being one of them. There are different ways to get those: through Docker containers, hosted in your test infrastructure, or from an external as-a-Service solution. It does not really matter which, as long as they are available while running your tests. A good practice is to use environment variables to refer to those instances.

Assuming these services are running, whether as a Couchbase Free Tier instance or a Docker container, the next step is to make sure they are configured correctly and seeded with the data needed for the tests.

A while ago, I posted about using Couchbase Shell in GitHub Actions. That post covers the basics of using Couchbase Shell with GitHub Actions, but it applies to most CI/CD solutions as well. Today, I want to go further and show you some useful scripts to clone a cluster, or elements of a cluster, for your on-demand environments.

Using Couchbase Shell to clone environments

When using Couchbase Shell, the first question that comes to mind when wanting to do something is: is there a function for that? As of now, we don't have a function to clone something. Most of the available functions reflect our APIs' capabilities, and we have no cloning APIs today. But we have the ability to write scripts, which means we can build our own!

When managing databases, the natural reflex is often to recreate the structure and schemas first. As Couchbase is schemaless, this only consists of the buckets, scopes, collections, and indexes that exist in the source cluster. The first step is to export that structure so it can be reimported later. This function lists every bucket, then its scopes and collections, and adds them to an array; it then lists all indexes and adds them to the output JSON.
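Here is a minimal sketch of what such an export function can look like. The function name cluster-structure-export, the output layout, and the exact column names returned by buckets, scopes, collections, and query indexes are my own assumptions rather than the article's exact script; check each command's --help in your cbsh version.

# Sketch of an export function for cbsh (Nushell syntax).
# Assumes `buckets` exposes a `name` column, `scopes` a `scope` column,
# `collections` a `collection` column, and that `query indexes --definitions`
# returns one row per index including its definition.
def cluster-structure-export [out_file: string] {
    let bucket_names = (buckets | get name)

    # For every bucket, collect its scopes and their collections
    let bucket_structure = ($bucket_names | each {|bucket|
        {
            bucket: $bucket,
            scopes: (scopes --bucket $bucket | get scope | each {|scope|
                {
                    scope: $scope,
                    collections: (collections --bucket $bucket --scope $scope | get collection)
                }
            })
        }
    })

    # Add the index definitions and save everything as one JSON document
    {
        buckets: $bucket_structure,
        indexes: (query indexes --definitions)
    } | save -f $out_file
}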

This works because, under the hood, Couchbase Shell uses Nushell, a new type of shell that is portable (meaning it works the same way on Linux, Windows, or macOS, which is great for CI/CD scripts that have to support different operating systems) and that treats any structured data as a dataframe, making the manipulation of JSON extremely easy.

To try it out, run cbsh, then source the file containing the function; for me it's ci_scripts.nu. I already have a cluster configured in my cbsh config, called local.
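Assuming the export function is named cluster-structure-export as in the sketch above, the session looks something like this:

> source ci_scripts.nu
> cluster-structure-export local-cluster-export.json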

Now if you open local-cluster-export.json, you will get the structure of your cluster:
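With the sketch above, the exported file would look something like this (abridged and illustrative; the travel-sample scopes and the index field names are examples, not a verbatim dump):

{
  "buckets": [
    {
      "bucket": "travel-sample",
      "scopes": [
        { "scope": "_default", "collections": ["_default"] },
        { "scope": "inventory", "collections": ["airline", "airport", "hotel", "landmark", "route"] }
      ]
    }
  ],
  "indexes": [
    {
      "bucket_id": "travel-sample",
      "name": "def_inventory_airline_primary",
      "definition": "CREATE PRIMARY INDEX `def_inventory_airline_primary` ON `travel-sample`.`inventory`.`airline`"
    }
  ]
}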

I have deleted the travel-sample bucket for the purpose of this test, so I can reimport it later: buckets drop travel-sample.

The next logical step is to have a function that takes this file as input and recreates the complete structure in another cluster.
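A minimal sketch of that import function, assuming the JSON layout produced by the export sketch above (again, the name, the hardcoded RAM quota, and the column names are mine to adapt):

# Sketch of the matching import function (Nushell syntax).
# Assumes the JSON layout written by the export sketch and that each saved
# index row carries a `definition` column that can be replayed as a query.
def cluster-structure-import [in_file: string] {
    let structure = (open $in_file)

    # Recreate every bucket, then its scopes and collections
    for b in ($structure | get buckets) {
        buckets create $b.bucket 200    # RAM quota in MB; a real script would carry this over from the export

        for s in $b.scopes {
            if ($s.scope != "_default") and ($s.scope != "_system") {
                scopes create $s.scope --bucket $b.bucket
            }
            for c in $s.collections {
                if $c != "_default" {
                    collections create $c --bucket $b.bucket --scope $s.scope
                }
            }
        }
    }

    # Replay the saved index definitions
    for idx in ($structure | get indexes) {
        query $idx.definition
    }
}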

Now to run that function:
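Again, a sketch rather than the article's exact session; cb-env cluster switches the active cluster (here I stay on local, where travel-sample was just dropped), and the function name matches the import sketch above.

> source ci_scripts.nu
> cb-env cluster local
> cluster-structure-import local-cluster-export.json
> buckets    # check that travel-sample is back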

And there you have it, functions that allow you to export and import the data structure from one cluster to another. While this is a good starting point, there are still questions about how to reimport data, or about granularity. Also, you may not want to export and import a complete cluster.

Filtering buckets to import is fairly easy as Nushell allows you to filter dataframes:
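For instance, here is a hedged sketch that keeps only travel-sample from the export file before importing it; the bucket and bucket_id field names come from the sketches above and may differ in your own export format.

# Build a filtered copy of the export that only contains travel-sample
let source = (open local-cluster-export.json)

{
    buckets: ($source.buckets | where bucket == "travel-sample"),
    indexes: ($source.indexes | where bucket_id == "travel-sample")
} | save -f travel-sample-export.json

# Then reimport only that bucket
cluster-structure-import travel-sample-export.json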

This will recreate a JSON object containing only a bucket named travel-sample and the indexes for this bucket.

From there you should be all set to manage basic cluster structure. What about the data? There are different ways you can import data with cbsh, as it covers most key/value operations as well as any INSERT/UPSERT queries. And then we have the doc import command. Its usage is fairly straightforward: all you need is a list of rows with an identified id field. This can be anything that can be turned into a dataframe by Nushell (XML, CSV, TSV, Parquet, and more). And of course, it can be a JSON file produced by a Couchbase SQL++ query. This is an example that saves a query result to a file and imports that file back into a collection:
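Here is a hedged sketch of that round trip, using the travel-sample airline collection as the example. The META().id projection gives each row an id field; the exact doc import flag names (such as an id-column option) may differ in your cbsh version, so check doc import --help.

# Save a SQL++ query result to a file; META(a).id gives every row an `id` field
query "SELECT META(a).id, a.* FROM `travel-sample`.`inventory`.`airline` a" | save -f airlines.json

# Import the file back into a collection
# (flag names are assumptions; verify them with `doc import --help`)
doc import airlines.json --bucket travel-sample --scope inventory --collection airline --id-column id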


That's one particular example, but the whole point of using a scripting language is to make the scripts your own. You will find a more complete example in this GitHub Gist. It supports environment variables for source and destination, and lets you decide whether to clone all the buckets of a cluster, a specific bucket, a scope, or a collection.

Don't hesitate to drop us a comment here or on Discord; we are always looking for suggestions to improve the overall Couchbase experience.

Posted by Laurent Doguin

Laurent is a nerdy metalhead living in Paris. He mostly writes code in Java and structured text in AsciiDoc, and often talks about data, reactive programming, and other buzzwords. He has also served as a developer advocate at Clever Cloud and Nuxeo, where he devoted his time and expertise to helping those communities grow bigger and stronger. He now runs Developer Relations at Couchbase.
