Tag: LLMs

Unleash Real-Time Agentic AI With Streaming Agents on Confluent Cloud and Couchbase

We’re thrilled to partner with Confluent today as they announce new features for Streaming Agents on Confluent Cloud and a new Real-Time Context Engine.

Building Production-Ready AI Agents with Couchbase and Nebius AI (Webinar Recap)

This combination of an LLM with tools, memory, and goals is what gives agents the ability to do more than just generate text.

Unlocking the Power of AWS Bedrock with Couchbase

In this blog, we explore how Couchbase’s vector store, when integrated with AWS Bedrock, creates a powerful, scalable, and cost-effective AI solution.

Introducing Couchbase as a Vector Store in Flowise

Integrate Couchbase as a high-performance vector store in Flowise. Upsert data, run vector searches, and build AI applications with this no-code solution.
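The upsert-then-search workflow a vector store exposes can be illustrated with a minimal in-memory sketch (hypothetical `ToyVectorStore` class for illustration only; a real integration would use the Couchbase or Flowise APIs):

```python
import math

class ToyVectorStore:
    """Minimal in-memory vector store: upsert documents, query by cosine similarity."""

    def __init__(self):
        self._docs = {}  # doc_id -> (vector, text)

    def upsert(self, doc_id, vector, text):
        # Upsert semantics: insert a new document or overwrite an existing one by id.
        self._docs[doc_id] = (vector, text)

    def search(self, query_vector, top_k=3):
        # Rank all stored documents by cosine similarity to the query vector.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        scored = [(cosine(query_vector, vec), doc_id, text)
                  for doc_id, (vec, text) in self._docs.items()]
        scored.sort(reverse=True)  # highest similarity first
        return scored[:top_k]
```

A production vector store adds persistence and approximate-nearest-neighbor indexing, but the upsert/search contract is the same.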

Introducing Model Context Protocol (MCP) Server for Couchbase

Introducing Couchbase MCP Server: an open-source solution to power AI agents and GenAI apps with real-time access to your Couchbase data.

A Guide to LLM Embeddings

Learn how LLMs generate and use embeddings to enhance natural language processing, improve search relevance, and enable AI-driven applications.
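The way embeddings improve search relevance can be sketched in a few lines: semantically related texts map to nearby vectors, so ranking by cosine similarity surfaces related items first. The tiny 3-dimensional vectors below are made up for illustration; real LLM embeddings have hundreds or thousands of dimensions, but the ranking logic is identical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (higher = more related)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy "embeddings" chosen so that "cat" and "kitten" are close, "car" is far.
embeddings = {
    "cat":    [0.9, 0.1, 0.0],
    "kitten": [0.85, 0.2, 0.05],
    "car":    [0.0, 0.1, 0.95],
}

query = embeddings["cat"]
ranked = sorted(embeddings,
                key=lambda w: cosine_similarity(query, embeddings[w]),
                reverse=True)
# "kitten" ranks above "car" for the query "cat"
```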

Couchbase and NVIDIA Team Up to Help Accelerate Agentic Application Development

Couchbase and NVIDIA team up to make agentic applications easier and faster to build, to feed with data, and to run.

AI-Ready Data: Automate Embeddings with Capella’s Vectorization Service

The Capella Vectorization Service automates embedding generation, making AI development faster and easier with seamless semantic search, RAG apps, and smart data retrieval.

DeepSeek Models Now Available in Capella AI Services

DeepSeek-R1 is now in Capella AI Services! Unlock advanced reasoning for enterprise AI at lower TCO. 🚀 Sign up for early access!

A Tool to Ease Your Transition From Oracle PL/SQL to Couchbase JavaScript UDF

Convert PL/SQL to JavaScript UDFs seamlessly with an AI-powered tool. Automate Oracle PL/SQL migration to Couchbase with high accuracy using ANTLR and LLMs.

Integrate Groq’s Fast LLM Inferencing With Couchbase Vector Search

Integrate Groq’s fast LLM inference with Couchbase Vector Search for efficient RAG apps. Compare its speed with OpenAI, Gemini, and Ollama.

Capella Model Service: Secure, Scalable, and OpenAI-Compatible

Capella Model Service lets you deploy secure, scalable AI models with OpenAI compatibility. Now in Private Preview!