What’s New in Couchbase

Couchbase Capella supports vector search and GenAI applications

Couchbase Capella™ Spring 2024: What’s New

 

Support for vector search and AI-powered adaptive applications

This spring release of Capella brings support for Couchbase Server 7.6, including high-performance storage, indexing, and retrieval of vector embeddings. Organizations are racing to build hyper-personalized, adaptive applications powered by generative AI that deliver exceptional experiences to their end users. Vector search lets organizations use the retrieval-augmented generation (RAG) framework with LLMs and other foundation models to make AI-powered applications safer, more up to date, and aligned with an organization's own information. Additionally, teams can use hybrid search, a combination of vector search, text search, range search, explicit value search, and geospatial search, to build robust search solutions for end users, all within a single platform. There's no need to add the complexity of a separate vector database. Advanced searches can be served by a single SQL++ query and a single index, delivering powerful, low-latency results while lowering TCO. Learn more
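
Conceptually, vector search ranks documents by the similarity of their embeddings to a query embedding. The following is a minimal pure-Python sketch of that idea, not the Couchbase API; the documents and their toy 3-dimensional "embeddings" are invented for illustration:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(query_vec, docs, k=2):
    """Return the ids of the top-k documents ranked by similarity to query_vec."""
    scored = [(cosine_similarity(query_vec, d["embedding"]), d["id"]) for d in docs]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:k]]

# Toy embeddings standing in for real model output.
docs = [
    {"id": "doc1", "embedding": [0.9, 0.1, 0.0]},
    {"id": "doc2", "embedding": [0.0, 1.0, 0.0]},
    {"id": "doc3", "embedding": [0.8, 0.2, 0.1]},
]
print(vector_search([1.0, 0.0, 0.0], docs))  # ['doc1', 'doc3']
```

In a real deployment, the embeddings come from a model, and the index and query live inside the database rather than in application code.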

Support for vectors in Couchbase Mobile

Couchbase is the first database vendor to announce vector support for its embeddable mobile database, Couchbase Lite. This will allow customers to build powerful AI-powered applications at the edge, where data is produced and consumed. See the Mobile tab for more details.

AI ecosystem integrations

Couchbase is boosting its AI partner ecosystem with LangChain and LlamaIndex to further increase developer productivity. Integration with LangChain enables a common API interface to converse with a broad library of LLMs. Similarly, Couchbase integration with LlamaIndex will provide developers with even more choices for LLMs when building adaptive applications. These integrations will accelerate query prompt assembly, improve response validation, and facilitate RAG applications.

Graph capabilities, enabled by recursive query traversals

Couchbase now supports graph relationship traversals: recursive queries for hierarchy and network mapping. Querying hierarchical data structures with ANSI SQL recursive common table expressions (CTEs) enables complex analyses in areas such as organization charts, bills of materials, supply chain management, network topology, and social networks. Learn more
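
The kind of hierarchy walk a recursive CTE performs can be pictured in plain Python. This sketch traverses a hypothetical org chart; the table and names are invented for illustration and this is not SQL++ itself:

```python
# Each entry maps an employee to their manager (None = top of the hierarchy),
# mimicking the rows a recursive CTE would repeatedly join against.
reports_to = {
    "ceo": None,
    "vp_eng": "ceo",
    "vp_sales": "ceo",
    "dev1": "vp_eng",
    "dev2": "vp_eng",
}

def descendants(manager, table):
    """Recursively collect everyone below `manager`, the way a recursive CTE
    keeps joining its own result set back onto the employee table."""
    direct = [emp for emp, mgr in table.items() if mgr == manager]
    result = list(direct)
    for employee in direct:
        result.extend(descendants(employee, table))
    return result

print(sorted(descendants("ceo", reports_to)))  # ['dev1', 'dev2', 'vp_eng', 'vp_sales']
```

A recursive CTE expresses the same fixpoint iteration declaratively inside the query engine instead of in application code.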

New file-based index rebalancing reduces rebalance times by up to 80%

To make scaling out index nodes faster without impacting CPU or memory usage, Server 7.6 now uses file-based index rebalancing. Testing has shown a dramatic reduction in the time to complete rebalances, making the process simpler and more reliable. Learn more

Couchstore to Magma, one-step upgrade without downtime

Customers will be able to migrate from Couchstore to the Magma storage engine without stopping the front-end workloads. Customers must be on version 7.6 to enable the migration. Migration can be reversed at any time if needed. Learn more

Faster failover times, improving HA

In the event of a data node outage, queries are automatically rerouted to the next available data node without any action from the application. The minimum auto-failover timeout has been reduced from 5 seconds to 1 second, and the heartbeat interval from 1 second to 200 milliseconds. Learn more

Query simplifications

To make initial development and testing easier, users can perform all database CRUD and join operations without indexes, so new users can run CREATE, INSERT, and SELECT statements without hitting index errors. Additionally, users can run simple key-value range scans based on document key prefixes without needing query and index nodes. Indexes are still recommended where performance is a priority. Learn more
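
A key-prefix range scan can be pictured as filtering and ordering document keys by a shared prefix. The following toy sketch runs over an in-memory dictionary, not the Couchbase SDK; the keys and documents are invented for illustration:

```python
# Documents keyed by "type::id", a common key-design convention.
store = {
    "order::1001": {"total": 25},
    "order::1002": {"total": 40},
    "user::alice": {"age": 30},
}

def prefix_scan(store, prefix):
    """Return (key, doc) pairs whose key starts with `prefix`, sorted by key,
    mimicking a key-value range scan over a document-key prefix."""
    return sorted((k, v) for k, v in store.items() if k.startswith(prefix))

keys = [k for k, _ in prefix_scan(store, "order::")]
print(keys)  # ['order::1001', 'order::1002']
```

Because the scan works directly on keys, it needs no secondary index, which is what makes it useful before any indexes have been created.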

Announcing Couchbase Enterprise Server 7.6

 

Couchbase Server 7.6 brings the capabilities described in the Capella section above to self-managed deployments: vector search for AI-powered adaptive applications, vector support in Couchbase Mobile, AI ecosystem integrations with LangChain and LlamaIndex, recursive query traversals for graph workloads, file-based index rebalancing, one-step Couchstore-to-Magma migration, faster failover, and query simplifications.

Announcing Vector Search in Couchbase Mobile

Vector search is now available as a beta release in Couchbase Lite, the Couchbase Mobile embedded database! For applications running on embedded and mobile devices, this exciting new capability enables semantic search, similarity search, and retrieval-augmented generation (RAG).

Running vector search on mobile and embedded devices brings all the benefits of edge computing, including millisecond response times, reliability, availability even when offline, bandwidth savings, and, most importantly, customized responses without compromising data privacy. The benefits are amplified when combined with vector search in Couchbase Capella and Couchbase Server, enabling cloud-to-edge vector support.

Offline support for semantic and similarity search

Any Couchbase Lite offline-first application that uses full-text search (FTS) can be augmented with rich semantic capabilities offered by vector search, especially where simple text-based searches are insufficient to quickly retrieve contextually relevant data.

Improved data privacy for AI apps

Vector search on the device can eliminate data privacy concerns by ensuring that personal data and sensitive search queries never have to leave the device.

Retrieval-augmented generation (RAG)

Vector search makes output from generative AI more personal by enabling retrieval-augmented generation (RAG), where current local vector data is passed along with prompts to provide better precision and context for LLM responses.
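
The RAG flow described above can be sketched with stubs: retrieve the most relevant local documents by vector similarity, then fold them into the model prompt. The retriever, documents, and prompt template below are invented for illustration; a real application would embed the question with a model and send the assembled prompt to an actual LLM:

```python
def retrieve(query_vec, docs, k=1):
    """Stub retriever: rank local docs by dot product with the query vector."""
    scored = sorted(
        docs,
        key=lambda d: -sum(q * x for q, x in zip(query_vec, d["embedding"])),
    )
    return scored[:k]

def build_prompt(question, context_docs):
    """Assemble an augmented prompt: retrieved context plus the user question."""
    context = "\n".join(d["text"] for d in context_docs)
    return f"Context:\n{context}\n\nQuestion: {question}"

# Toy local documents with made-up 2-dimensional embeddings.
docs = [
    {"text": "Store hours: 9am-5pm.", "embedding": [1.0, 0.0]},
    {"text": "Returns accepted within 30 days.", "embedding": [0.0, 1.0]},
]
prompt = build_prompt("When are you open?", retrieve([0.9, 0.1], docs))
print(prompt)
```

The retrieved context grounds the LLM's answer in current local data, which is the precision and privacy benefit described above.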

Couchbase mobile vector search beta program

Try out the new vector search capabilities in Couchbase Mobile for yourself. Sign up for beta access here.

Start building

Check out our developer portal to explore NoSQL, browse resources, and take Couchbase for a spin in our playground.

Develop now
Try Capella

Get hands-on with Couchbase in just a few clicks. Capella DBaaS is the easiest and fastest way to get started.

Try free
Try Capella iQ

Use our generative AI-powered coding assistant to create sample data, refine it, and build queries on the datasets.

Get started