In this tutorial, we’ll demonstrate how to combine Couchbase’s vector search capabilities with n8n’s workflow automation platform. We’ll build a simple travel agent workflow that recommends vacation destinations based on user queries, using vector embeddings to provide contextually relevant results.

Introduction

Vector search makes it possible to search based on semantic similarity rather than exact matches. The Couchbase Search Vector node in n8n allows you to perform vector search operations using Couchbase’s Search Service, supporting retrieval, updating, and insertion of documents in a vector database.

In this tutorial, we’ll:

    1. Set up a Couchbase Capella cluster
    2. Configure the necessary bucket, scope, and collection
    3. Create a search index for vector searching
    4. Build an n8n workflow with data ingestion and chat functionality
    5. Test our simple travel agent

Prerequisites

    • An n8n self-hosted instance
    • The n8n-nodes-couchbase community node package installed on your n8n instance
      • Information on installing community nodes can be found here
    • A Couchbase Capella account (free tier is sufficient)
    • API keys for OpenAI and Gemini (OpenAI for embeddings, Gemini for the chat model)

Step 1: Deploy a Couchbase Cluster

Let’s start by deploying a cluster on Couchbase Capella’s free tier, although any cluster with the Search Service enabled will work.

    1. Log into Couchbase Capella, or sign up for a Couchbase Capella account
    2. Deploy an Operational Cluster
      • The cluster must have the Search service enabled

Step 2: Configure Your Couchbase Database

    1. Create a bucket called travel-agent
    2. Create a scope within travel-agent called vectors
    3. Within that scope, create a collection named points-of-interest
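If you prefer to script this step rather than use the Capella UI, the scope and collection can also be created with SQL++. This is a minimal sketch, assuming the travel-agent bucket already exists and the Query Service is enabled on your cluster:

    -- Create the scope and collection inside the existing travel-agent bucket
    CREATE SCOPE `travel-agent`.vectors IF NOT EXISTS;
    CREATE COLLECTION `travel-agent`.vectors.`points-of-interest` IF NOT EXISTS;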

Step 3: Create a Search Index

    1. Import a Search Index definition using the Capella UI
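The exact index definition to import is included with the tutorial materials. For orientation, a vector-enabled Search index named poi-index on the vectors.points-of-interest collection includes a vector field mapping roughly along these lines. This is an abbreviated sketch: the field name, dimension count, and similarity metric are assumptions and must match the embeddings your workflow stores (for example, 1536 dimensions for OpenAI’s text-embedding-ada-002):

    {
      "name": "poi-index",
      "type": "fulltext-index",
      "sourceName": "travel-agent",
      "params": {
        "doc_config": { "mode": "scope.collection.type_field" },
        "mapping": {
          "types": {
            "vectors.points-of-interest": {
              "enabled": true,
              "properties": {
                "embedding": {
                  "fields": [
                    {
                      "name": "embedding",
                      "type": "vector",
                      "dims": 1536,
                      "similarity": "dot_product",
                      "index": true
                    }
                  ]
                }
              }
            }
          }
        }
      }
    }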

Step 4: Set Up Cluster Access

    1. Create cluster access credentials
    2. Configure allowed IP addresses
      • You must allow the IP address of the machine running n8n
      • You can allow access from anywhere (0.0.0.0/0) for testing purposes, but this should not be done in production

Step 5: Working with the n8n Workflow

A template of the workflow is provided here. Our n8n workflow consists of two main components:

Data Ingestion Workflow

    1. Webhook node (to listen for HTTP requests)
    2. OpenAI Embeddings node (to generate embeddings on document insertion)
    3. Couchbase Vector node (configured for document insertion)
    4. Default Data Loader and Recursive Character Text Splitter

Chat Application Workflow

    1. Chat Trigger node
    2. AI Tools Agent node, connected to:
      • Gemini (as the Chat Model, for generating responses)
      • Simple Memory (as the Memory, to maintain conversation context)
      • Couchbase Search Vector node (as the Tool, for search)
      • OpenAI Embeddings node (as the Embedding model for the Couchbase Search Vector node, to convert queries to vectors)

Step 6: Configure the Couchbase Vector Nodes

    1. Create a new Couchbase credential with:
      • Connection string (from the “Connect” tab in Capella; the general format is shown after this list)
      • The username and password from your Cluster Access Credentials (created in step 4)
    2. For both Couchbase Vector nodes, configure the node settings:
      • Bucket: travel-agent
      • Scope: vectors
      • Collection: points-of-interest
      • Index: poi-index
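For reference, Capella connection strings generally take the following form, with a hostname unique to your cluster; copy the exact value from the “Connect” tab rather than constructing it by hand:

    couchbases://cb.<your-cluster-hostname>.cloud.couchbase.com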

Step 7: Ingest Sample Data

We’ll ingest sample travel destination data with descriptions using a provided shell script.

    1. Save and activate your workflow
    2. Copy the production webhook URL
    3. Download the provided shell script and run it to insert a few points of interest

The script will create embeddings for each destination description and store them in Couchbase with their vector representations.
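If you want to insert a document manually instead of (or in addition to) running the script, a single request to the production webhook might look like the sketch below. The URL and JSON field names are illustrative assumptions; use the webhook URL you copied above and whatever fields your Webhook node and Default Data Loader are configured to read:

    # Hypothetical example: POST one point of interest to the n8n production webhook
    curl -X POST "https://<your-n8n-host>/webhook/<your-webhook-path>" \
      -H "Content-Type: application/json" \
      -d '{
            "name": "Kyoto, Japan",
            "description": "Historic temples, zen gardens, and traditional tea houses, ideal for a sightseeing vacation."
          }'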

Step 8: Test Your Travel Agent

Now you can interact with your travel agent by asking questions like:

    • “Where should we go for a sightseeing vacation?”
    • “I want an adventurous vacation”
    • “What’s a good romantic destination?”

The workflow will:

    1. Convert your query to a vector embedding
    2. Search Couchbase for semantically similar destinations
    3. Use the LLM to formulate a response based on the retrieved destination information

Next Steps

While this travel agent may not be super practical for actually planning a vacation, it effectively demonstrates how we can leverage Couchbase and Vector Search to retrieve data from a database and use it in LLM conversations using n8n’s powerful workflow tooling.

Take a look at our Vector Search Cookbook to see examples of other vector search applications in Couchbase for inspiration, and try them out in n8n using the Couchbase Search Vector Node! Happy automating!

Author

Posted by Elliot Scribner - Software Engineer

Elliot Scribner is a Software Engineer on the Developer Experience and Ecosystem team at Couchbase. With substantial experience across several languages and technologies, including Full Stack Node.js (TypeScript) and Python, Elliot has led the development of multiple integration projects that have enhanced Couchbase’s developer offerings. He collaborates cross-functionally to ensure seamless development and deliver successful integrations. Elliot is passionate about empowering developers through robust tools and intuitive ecosystems.
