{"id":17061,"date":"2025-04-23T04:50:54","date_gmt":"2025-04-23T11:50:54","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=17061"},"modified":"2025-06-16T10:43:34","modified_gmt":"2025-06-16T17:43:34","slug":"couchbase-partners-arize-ai","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/","title":{"rendered":"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications"},"content":{"rendered":"<p>As enterprises look to deploy production-ready AI agent applications, Large Language Model (LLM) observability has emerged as a critical requirement for ensuring both performance and trust. Organizations need visibility into how agents interact with data, make decisions, and retrieve information to maintain reliability, security, and compliance. Without proper observability, enterprises risk deploying models that produce inconsistent, inaccurate, or biased results, leading to poor user experiences and operational inefficiencies. The new partnership between <a href=\"https:\/\/www.couchbase.com\/\" target=\"_blank\" rel=\"noopener\">Couchbase <\/a>and <a href=\"https:\/\/arize.com\/\" target=\"_blank\" rel=\"noopener\">Arize AI<\/a> plays a vital role in bringing robust monitoring, evaluation, and optimization capabilities to AI-driven applications.<\/p>\n<p>The integration of Couchbase and Arize AI delivers a powerful solution for building and monitoring Retrieval Augmented Generation (RAG) and agent applications at scale. 
By leveraging Couchbase&#8217;s high-performance vector database together with the Arize AI observability platform and its enhanced monitoring capabilities, enterprises can confidently build, deploy, and optimize Agentic RAG solutions in production.<\/p>\n<p>In this blog, we\u2019ll walk through creating an Agentic RAG QA chatbot using LangGraph and the Couchbase <a href=\"https:\/\/couchbaselabs.github.io\/agent-catalog\/index.html#\" target=\"_blank\" rel=\"noopener\">Agent Catalog<\/a> component of the recently announced <a href=\"https:\/\/www.couchbase.com\/products\/ai-services\/\" target=\"_blank\" rel=\"noopener\">Capella AI services<\/a> (in preview), and evaluating and optimizing its performance with Arize AI. This is a tangible example of how Couchbase and Arize AI enable developers to enhance retrieval workflows, improve response accuracy, and monitor LLM-powered interactions in real time.<\/p>\n<h2>The Value of the Couchbase and Arize AI Partnership<\/h2>\n<p>By joining forces, Couchbase and Arize AI are revolutionizing how developers build and evaluate AI agent applications. 
Developers can construct sophisticated agent applications by leveraging Couchbase Capella as a single data platform for LLM caching, long-term and short-term agent memory, vector embedding use cases, analytics, and operational workloads along with their favorite agent development framework for orchestrating agent workflows.<\/p>\n<p>Couchbase Agent Catalog further enhances this system by providing a centralized store for multi-agent workflows within an organization that allows for storage, management, and discovery of various agent tools, prompt versioning, and LLM trace debugging.<\/p>\n<p>To ensure high reliability and transparency, Arize AI provides critical observability features, including:<\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li aria-level=\"1\"><b>Tracing Agent Function Calls<\/b>: Arize enables detailed monitoring of the agent\u2019s function calls, including retrieval steps and LLM interactions, to track how responses are generated.<\/li>\n<li aria-level=\"1\"><b>Dataset Benchmarking<\/b>: Developers can create a structured dataset to evaluate and compare agent performance over time.<\/li>\n<li aria-level=\"1\"><b>Performance Evaluation with LLM as a Judge<\/b>: Using built-in evaluators, Arize leverages LLMs to assess response accuracy, relevance, and overall agent effectiveness.<\/li>\n<li aria-level=\"1\"><b>Experimenting with Retrieval Strategies<\/b>: By adjusting chunk sizes, overlaps, and the number of retrieved documents (K-value), developers can analyze their impact on agent performance.<\/li>\n<li aria-level=\"1\"><b>Comparative Analysis in Arize<\/b>: The platform allows side-by-side comparisons of different retrieval strategies, helping teams determine the optimal configuration for their agent.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3 style=\"font-size: 1.5em;\">The Importance of LLM Observability<\/h3>\n<p>To ensure that AI applications perform well in production, enterprises need a robust evaluation framework. 
Observability tools like Arize AI allow developers to:<\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li aria-level=\"1\">Assess LLM outputs based on factors such as relevance, hallucination rates, and latency<\/li>\n<li aria-level=\"1\">Conduct systematic evaluations to measure the impact of prompt changes, retrieval modifications, and parameter adjustments<\/li>\n<li aria-level=\"1\">Curate comprehensive datasets to benchmark performance across different use cases<\/li>\n<li aria-level=\"1\">Automate evaluation processes within CI\/CD pipelines, ensuring consistent application reliability<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>Using an LLM as a judge, Arize AI allows developers to measure agent effectiveness using pre-tested evaluators, multi-level custom evaluation techniques, and large-scale performance benchmarking. By running thousands of evaluations, teams can iterate quickly and refine LLM prompts, retrieval methods, and agent workflows to improve overall application quality.<\/p>\n<h3 style=\"font-size: 1.5em;\">Building an Agentic RAG QA Chatbot<\/h3>\n<p>Agentic RAG combines the power of traditional retrieval-augmented generation with intelligent decision-making. 
In this implementation, we enable an LLM to dynamically decide whether retrieval is necessary based on the query context.<\/p>\n<div id=\"attachment_17062\" style=\"width: 910px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-17062\" class=\"wp-image-17062 size-large\" style=\"border: 1px solid Gainsboro;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-1024x373.png\" alt=\"Arize AI for Agentic RAG with Couchbase\" width=\"900\" height=\"328\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-1024x373.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-300x109.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-768x280.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-1536x560.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-1320x481.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1.png 1615w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><p id=\"caption-attachment-17062\" class=\"wp-caption-text\">Illustration depicting the agent workflow from <a href=\"https:\/\/github.com\/langchain-ai\/langgraph\/blob\/main\/examples\/rag\/langgraph_agentic_rag.ipynb\" target=\"_blank\" rel=\"noopener\">Langgraph\u2019s agentic RAG example<\/a>.<\/p><\/div>\n<h2>Step-by-Step Implementation<\/h2>\n<p>The rest of this blog is based on the accompanying <a href=\"https:\/\/github.com\/Arize-ai\/tutorials\/blob\/main\/python\/llm\/agents\/couchbase_langgraph_agentic_rag.ipynb\" target=\"_blank\" rel=\"noopener\">tutorial notebook<\/a>. 
Before building and deploying an observable AI agent, you\u2019ll need to configure your development environment.<\/p>\n<h3 style=\"font-size: 1.5em;\">Prerequisites:<\/h3>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li aria-level=\"1\">To follow along with this tutorial, you&#8217;ll need to <a href=\"https:\/\/app.arize.com\/auth\/join\" target=\"_blank\" rel=\"noopener\">sign up for Arize<\/a> and get your Space, API, and Developer keys. You can see the <a href=\"https:\/\/docs.arize.com\/arize\/llm-tracing\/quickstart-llm#get-your-api-keys\" target=\"_blank\" rel=\"noopener\">guide here<\/a>. You will also need an <a href=\"https:\/\/openai.com\/\" target=\"_blank\" rel=\"noopener\">OpenAI API key<\/a>.<\/li>\n<li aria-level=\"1\">You&#8217;ll need to set up your Couchbase cluster by doing the following:\n<ol>\n<li aria-level=\"2\">Create an account at <a href=\"https:\/\/cloud.couchbase.com\/\" target=\"_blank\" rel=\"noopener\">Couchbase Cloud<\/a><\/li>\n<li aria-level=\"2\">Create a free cluster with the Data, Index, and Search services enabled*<\/li>\n<li aria-level=\"2\">Create cluster access credentials<\/li>\n<li aria-level=\"2\">Allow access to the cluster from your local machine<\/li>\n<li aria-level=\"2\">Create a bucket to store your documents<\/li>\n<li aria-level=\"2\">Create a <a href=\"https:\/\/docs.couchbase.com\/cloud\/vector-search\/create-vector-search-index-ui.html\" target=\"_blank\" rel=\"noopener\">search index<\/a><\/li>\n<\/ol>\n<\/li>\n<li aria-level=\"1\">Create tools and prompts required by agents using Couchbase Agent Catalog (for installation and more instructions, explore documentation <a href=\"https:\/\/couchbaselabs.github.io\/agent-catalog\/index.html\" target=\"_blank\" rel=\"noopener\">here<\/a>)<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<p>*The Search Service will be used to perform Semantic Search later when we use Agent Catalog.<\/p>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">1) Create an Agentic RAG chatbot using LangGraph, 
Couchbase as the vector store and Agent Catalog to manage AI agents<\/h3>\n<h4 style=\"font-size: 1.25em;\">Setting Up Dependencies<\/h4>\n<pre class=\"nums:false wrap:true lang:python decode:true\">%pip install -qU langchain-openai langchain-community langchain langgraph langgraph.prebuilt openai langchain-couchbase agentc langchain-huggingface langchain_core\r\n\r\n%pip install -qq \"arize-phoenix[evals]&gt;=7.0.0\" \"arize-otel&gt;=0.7.0\" \"openinference-instrumentation-openai&gt;=0.1.18\" \"openinference-instrumentation-langchain&gt;=0.1.29\"<\/pre>\n<h4 style=\"font-size: 1.25em;\">Connecting to Couchbase<\/h4>\n<p>We&#8217;ll use Couchbase as our vector store. Here&#8217;s how to set up the connection:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from datetime import timedelta\r\nfrom couchbase.auth import PasswordAuthenticator\r\nfrom couchbase.cluster import Cluster\r\nfrom couchbase.options import ClusterOptions\r\nfrom langchain_couchbase.vectorstores import CouchbaseSearchVectorStore\r\nfrom langchain_huggingface import HuggingFaceEmbeddings\r\n\r\n# Cluster settings\r\nCB_CONN_STRING = \"your-connection-string\"\r\nCB_USERNAME = \"your-username\"\r\nCB_PASSWORD = \"your-password\"\r\nBUCKET_NAME = \"your-bucket-name\"\r\nSCOPE_NAME = \"your-scope-name\"\r\nCOLLECTION_NAME = \"your-collection-name\"\r\nSEARCH_INDEX_NAME = \"your-search-index-name\"\r\n\r\n# Connect to couchbase cluster\r\nauth = PasswordAuthenticator(CB_USERNAME, CB_PASSWORD)\r\noptions = ClusterOptions(auth)\r\noptions.apply_profile(\"wan_development\")\r\ncluster = Cluster(CB_CONN_STRING, options)\r\ncluster.wait_until_ready(timedelta(seconds=5))\r\n\r\n# Initialize vector store\r\nembeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers\/all-MiniLM-L12-v2\")\r\nvector_store = 
CouchbaseSearchVectorStore(\r\n\u00a0\u00a0\u00a0\u00a0cluster=cluster,\r\n\u00a0\u00a0\u00a0\u00a0bucket_name=BUCKET_NAME,\r\n\u00a0\u00a0\u00a0\u00a0scope_name=SCOPE_NAME,\r\n\u00a0\u00a0\u00a0\u00a0collection_name=COLLECTION_NAME,\r\n\u00a0\u00a0\u00a0\u00a0embedding=embeddings,\r\n\u00a0\u00a0\u00a0\u00a0index_name=SEARCH_INDEX_NAME,\r\n)<\/pre>\n<h4 style=\"font-size: 1.25em;\">Document Ingestion<\/h4>\n<p>We&#8217;ll create a helper function to load and index documents with configurable chunking parameters:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from langchain_community.document_loaders import WebBaseLoader\r\nfrom langchain_text_splitters import RecursiveCharacterTextSplitter\r\n\r\ndef reset_vector_store(vector_store, chunk_size=1024, chunk_overlap=20):\r\n\u00a0\u00a0\u00a0\u00a0try:\u00a0\u00a0\u00a0\u00a0\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Delete existing documents\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0results = vector_store.similarity_search(\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0k=1000,\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0query=\"\",\u00a0\u00a0\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0search_options={\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"query\": {\"field\": \"metadata.source\", \"match\": \"lilian_weng_blog\"}\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0},\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0)\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0if results:\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0deleted_ids = [result.id for result in 
results]\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0vector_store.delete(ids=deleted_ids)\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Load documents from URLs\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0urls = [\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"https:\/\/lilianweng.github.io\/posts\/2024-07-07-hallucination\/\",\r\n\"https:\/\/lilianweng.github.io\/posts\/2023-03-15-prompt-engineering\/\",\r\n\"https:\/\/lilianweng.github.io\/posts\/2023-10-25-adv-attack-llm\/\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0]\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0docs = [WebBaseLoader(url).load() for url in urls]\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0docs_list = [item for sublist in docs for item in sublist]\r\n\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Use RecursiveCharacterTextSplitter with configurable parameters\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0text_splitter = RecursiveCharacterTextSplitter(\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0chunk_size=chunk_size,\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0chunk_overlap=chunk_overlap,\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0separators=[\"\\n\\n\", \"\\n\", \" \", \"\"],\u00a0\u00a0\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0)\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0doc_splits = text_splitter.split_documents(docs_list)\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Add metadata to documents\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0for doc in doc_splits:\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0doc.metadata[\"source\"] = 
\"lilian_weng_blog\"\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0# Add documents to vector store\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0vector_store.add_documents(doc_splits)\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0return vector_store\r\n\u00a0\u00a0\u00a0\u00a0except ValueError as e:\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0print(f\"Error: {e}\")\r\n\r\n# Initialize with default settings\r\nreset_vector_store(vector_store)<\/pre>\n<h4 style=\"font-size: 1.25em;\">Setting Up the Retriever Tool<\/h4>\n<p>Fetch our retriever tool from the Agent Catalog using the <code>agentc<\/code> provider. In the future, when more tools (and\/or prompts) are required and the application grows more complex, Agent Catalog SDK and CLI can be used to automatically fetch the tools based on the use case (semantic search) or by name.<\/p>\n<p>For instructions on how this tool was created and more capabilities of Agent catalog, please refer to the documentation <a href=\"https:\/\/couchbaselabs.github.io\/agent-catalog\/index.html\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">import agentc.langchain\r\nimport agentc\r\nfrom langchain_core.tools import tool\r\n\r\nprovider = agentc.Provider(\r\n\u00a0\u00a0\u00a0\u00a0decorator=lambda t: tool(t.func),\u00a0\r\n\u00a0\u00a0\u00a0\u00a0secrets={\"CB_USERNAME\": CB_USERNAME,\u00a0\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"CB_PASSWORD\": CB_PASSWORD,\u00a0\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"CB_CONN_STRING\": CB_CONN_STRING})\r\n\r\n# Get the retriever tool from Agent Catalog\r\nretriever_tool = provider.get_item(name=\"retriever_tool\", item_type=\"tool\")\r\ntools = retriever_tool<\/pre>\n<h4 style=\"font-size: 1.25em;\">Defining the Agent State<\/h4>\n<p>We will define a graph of 
agents to help all involved agents communicate with each other better. Agents communicate through a `state` object that is passed around to each node and modified with output from that node.<\/p>\n<p>Our state will be a list of <em>messages<\/em> and each node in our graph will append to it:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from typing import Annotated, Sequence, TypedDict\r\nfrom langchain_core.messages import BaseMessage\r\nfrom langgraph.graph.message import add_messages\r\n\r\nclass AgentState(TypedDict):\r\n\u00a0\u00a0\u00a0\u00a0# The add_messages function defines how an update should be processed\r\n\u00a0\u00a0\u00a0\u00a0# Default is to replace. add_messages says \"append\"\r\n\u00a0\u00a0\u00a0\u00a0messages: Annotated[Sequence[BaseMessage], add_messages]<\/pre>\n<h4 style=\"font-size: 1.25em;\">Creating Agent Nodes<\/h4>\n<p>We&#8217;ll define the core components of our agent pipeline:<\/p>\n<p>Nodes: Relevance Checking Function, Query Rewriter, Main Agent, Response Generation<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from typing import Annotated, Literal, Sequence, TypedDict\r\nfrom langchain import hub\r\nfrom langchain_core.messages import BaseMessage, HumanMessage\r\nfrom langchain_core.output_parsers import StrOutputParser\r\nfrom langchain_core.prompts import PromptTemplate\r\nfrom langchain_core.pydantic_v1 import BaseModel, Field\r\nfrom langchain_openai import ChatOpenAI\r\nfrom langgraph.prebuilt import tools_condition\r\n\r\n### Relevance Checking Function\r\ndef grade_documents(state) -&gt; Literal[\"generate\", \"rewrite\"]:\r\n\u00a0\u00a0\u00a0\u00a0\"\"\"\r\n\u00a0\u00a0\u00a0\u00a0Determines whether the retrieved documents are relevant to the question.\r\n\u00a0\u00a0\u00a0\u00a0\"\"\"\r\n\u00a0\u00a0\u00a0\u00a0print(\"---CHECK RELEVANCE---\")\r\n\u00a0\u00a0\u00a0\u00a0# Data model\r\n\u00a0\u00a0\u00a0\u00a0class 
grade(BaseModel):\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"\"\"Binary score for relevance check.\"\"\"\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0binary_score: str = Field(description=\"Relevance score 'yes' or 'no'\")\r\n\u00a0\u00a0\u00a0\u00a0# LLM\r\n\u00a0\u00a0\u00a0\u00a0model = ChatOpenAI(temperature=0, model=\"gpt-4o\", streaming=True)\r\n\u00a0\u00a0\u00a0\u00a0llm_with_tool = model.with_structured_output(grade)\r\n\u00a0\u00a0\u00a0\u00a0#fetch a prompt from Agent Catalog\r\n\u00a0\u00a0\u00a0\u00a0grade_documents_prompt = PromptTemplate(\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0template=provider.get_item(name=\"grade_documents\", item_type=\"prompt\").prompt.render(),\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0input_variables=[\"context\", \"question\"],\r\n\u00a0\u00a0\u00a0\u00a0)\r\n\u00a0\u00a0\u00a0\u00a0# Chain\r\n\u00a0\u00a0\u00a0\u00a0chain = grade_documents_prompt | llm_with_tool\r\n\u00a0\u00a0\u00a0\u00a0messages = state[\"messages\"]\r\n\u00a0\u00a0\u00a0\u00a0last_message = messages[-1]\r\n\u00a0\u00a0\u00a0\u00a0question = messages[0].content\r\n\u00a0\u00a0\u00a0\u00a0docs = last_message.content\r\n\r\n\u00a0\u00a0\u00a0\u00a0scored_result = chain.invoke({\"question\": question, \"context\": docs})\r\n\u00a0\u00a0\u00a0\u00a0score = scored_result.binary_score\r\n\u00a0\u00a0\u00a0\u00a0if score == \"yes\":\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0print(\"---DECISION: DOCS RELEVANT---\")\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0return \"generate\"\r\n\u00a0\u00a0\u00a0\u00a0else:\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0print(\"---DECISION: DOCS NOT RELEVANT---\")\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0print(score)\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0return \"rewrite\"\r\n### Main Agent Node\r\ndef agent(state):\r\n\u00a0\u00a0\u00a0\u00a0\"\"\"\r\n\u00a0\u00a0\u00a0\u00a0Invokes the agent model to generate a response or use 
tools.\r\n\u00a0\u00a0\u00a0\u00a0\"\"\"\r\n\u00a0\u00a0\u00a0\u00a0print(\"---CALL AGENT---\")\r\n\u00a0\u00a0\u00a0\u00a0messages = state[\"messages\"]\r\n\u00a0\u00a0\u00a0\u00a0model = ChatOpenAI(temperature=0, streaming=True, model=\"gpt-4-turbo\")\r\n\u00a0\u00a0\u00a0\u00a0model = model.bind_tools(tools)\r\n\u00a0\u00a0\u00a0\u00a0response = model.invoke(messages)\r\n\u00a0\u00a0\u00a0\u00a0return {\"messages\": [response]}\r\n### Query Rewriting Node\r\ndef rewrite(state):\r\n\u00a0\u00a0\u00a0\u00a0\"\"\"\r\n\u00a0\u00a0\u00a0\u00a0Transform the query to produce a better question.\r\n\u00a0\u00a0\u00a0\u00a0\"\"\"\r\n\u00a0\u00a0\u00a0\u00a0print(\"---TRANSFORM QUERY---\")\r\n\u00a0\u00a0\u00a0\u00a0messages = state[\"messages\"]\r\n\u00a0\u00a0\u00a0\u00a0question = messages[0].content\r\n\u00a0\u00a0\u00a0\u00a0msg = [\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0HumanMessage(\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0content=f\"\"\" \\n\u00a0\r\n\u00a0\u00a0\u00a0\u00a0Look at the input and try to reason about the underlying semantic intent \/ meaning. 
\\n\u00a0\r\n\u00a0\u00a0\u00a0\u00a0Here is the initial question:\r\n\u00a0\u00a0\u00a0\u00a0\\n ------- \\n\r\n\u00a0\u00a0\u00a0\u00a0{question}\u00a0\r\n\u00a0\u00a0\u00a0\u00a0\\n ------- \\n\r\n\u00a0\u00a0\u00a0\u00a0Formulate an improved question: \"\"\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0)\r\n\u00a0\u00a0\u00a0\u00a0]\r\n\u00a0\u00a0\u00a0\u00a0model = ChatOpenAI(temperature=0, model=\"gpt-4-0125-preview\", streaming=True)\r\n\u00a0\u00a0\u00a0\u00a0response = model.invoke(msg)\r\n\u00a0\u00a0\u00a0\u00a0return {\"messages\": [response]}\r\n\r\n### Response Generation Node\r\ndef generate(state):\r\n\u00a0\u00a0\u00a0\u00a0\"\"\"\r\n\u00a0\u00a0\u00a0\u00a0Generate answer using retrieved documents\r\n\u00a0\u00a0\u00a0\u00a0\"\"\"\r\n\u00a0\u00a0\u00a0\u00a0print(\"---GENERATE---\")\r\n\u00a0\u00a0\u00a0\u00a0messages = state[\"messages\"]\r\n\u00a0\u00a0\u00a0\u00a0question = messages[0].content\r\n\u00a0\u00a0\u00a0\u00a0last_message = messages[-1]\r\n\u00a0\u00a0\u00a0\u00a0docs = last_message.content\r\n\r\n\u00a0\u00a0\u00a0\u00a0# Prompt\r\n\u00a0\u00a0\u00a0\u00a0prompt = hub.pull(\"rlm\/rag-prompt\")\r\n\u00a0\u00a0\u00a0\u00a0llm = ChatOpenAI(model_name=\"gpt-4o-mini\", temperature=0, streaming=True)\r\n\u00a0\u00a0\u00a0\u00a0rag_chain = prompt | llm | StrOutputParser()\r\n\r\n\u00a0\u00a0\u00a0\u00a0# Run\r\n\u00a0\u00a0\u00a0\u00a0response = rag_chain.invoke({\"context\": docs, \"question\": question})\r\n\u00a0\u00a0\u00a0\u00a0return {\"messages\": [response]}<\/pre>\n<h4 style=\"font-size: 1.25em;\">Building the Agent Graph<\/h4>\n<p>Now we&#8217;ll connect the nodes into a coherent workflow:<\/p>\n<pre class=\"nums:false wrap:true lang:default decode:true\">from langgraph.graph import END, StateGraph, START\r\nfrom langgraph.prebuilt import ToolNode\r\n\r\n# Define a new graph\r\nworkflow = StateGraph(AgentState)\r\n\r\n# Define the nodes\r\nworkflow.add_node(\"agent\", agent)\r\nretrieve = 
ToolNode(retriever_tool)\r\nworkflow.add_node(\"retrieve\", retrieve)\r\nworkflow.add_node(\"rewrite\", rewrite)\r\nworkflow.add_node(\"generate\", generate)\r\n\r\n# Define edges\r\nworkflow.add_edge(START, \"agent\")\r\n\r\n# Conditional edges based on agent's decision\r\nworkflow.add_conditional_edges(\r\n\u00a0\u00a0\u00a0\u00a0\"agent\",\r\n\u00a0\u00a0\u00a0\u00a0tools_condition,\r\n\u00a0\u00a0\u00a0\u00a0{\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"tools\": \"retrieve\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0END: END,\r\n\u00a0\u00a0\u00a0\u00a0},\r\n)\r\n\r\n# Conditional edges after retrieval based on document relevance\r\nworkflow.add_conditional_edges(\r\n\u00a0\u00a0\u00a0\u00a0\"retrieve\",\r\n\u00a0\u00a0\u00a0\u00a0grade_documents,\r\n)\r\nworkflow.add_edge(\"generate\", END)\r\nworkflow.add_edge(\"rewrite\", \"agent\")\r\n\r\n# Compile the graph\r\ngraph = workflow.compile()<\/pre>\n<h4 style=\"font-size: 1.25em;\">Visualizing the Agent Graph<\/h4>\n<p>Let&#8217;s visualize our workflow to better understand it:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from IPython.display import Image, display\r\n\r\ntry:\r\n\u00a0\u00a0\u00a0\u00a0display(Image(graph.get_graph(xray=True).draw_mermaid_png()))\r\nexcept Exception:\r\n\u00a0\u00a0\u00a0\u00a0# This requires some extra dependencies and is optional\r\n\u00a0\u00a0\u00a0\u00a0pass<\/pre>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">2) Trace the agent\u2019s function calls using Arize, capturing retrieval queries, LLM responses, and tool usage<\/h3>\n<p>Arize provides comprehensive observability for our agent system. 
Let&#8217;s set up tracing:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from arize.otel import register\r\n# Setup tracer provider\r\ntracer_provider = register(\r\n\u00a0\u00a0\u00a0\u00a0space_id = SPACE_ID,\r\n\u00a0\u00a0\u00a0\u00a0api_key = API_KEY,\r\n\u00a0\u00a0\u00a0\u00a0project_name = \"langgraph-agentic-rag\",\r\n)\r\n# Import the automatic instrumentor from OpenInference\r\nfrom openinference.instrumentation.langchain import LangChainInstrumentor\r\n# Instrument LangChain\r\nLangChainInstrumentor().instrument(tracer_provider=tracer_provider)<\/pre>\n<p>Now let&#8217;s run the agent to see how it works:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">import pprint\r\ninputs = {\r\n\u00a0\u00a0\u00a0\u00a0\"messages\": [\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0(\"user\", \"What does Lilian Weng say about the types of adversarial attacks on LLMs?\"),\r\n\u00a0\u00a0\u00a0\u00a0]\r\n}\r\nfor output in graph.stream(inputs):\r\n\u00a0\u00a0\u00a0\u00a0for key, value in output.items():\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0pprint.pprint(f\"Output from node '{key}':\")\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0pprint.pprint(value, indent=2, width=80, depth=None)<\/pre>\n<p>This will execute our agent graph and output detailed information for each node as it processes the query. 
In Arize, you&#8217;ll be able to see a trace visualization showing the execution flow, latency, and details of each function call.<\/p>\n<div id=\"attachment_17064\" style=\"width: 910px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-17064\" class=\"wp-image-17064 size-large\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-1024x614.png\" alt=\"Tracing visualization from Arize platform\" width=\"900\" height=\"540\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-1024x614.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-300x180.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-768x461.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-1536x921.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-1320x792.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1.png 1999w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><p id=\"caption-attachment-17064\" class=\"wp-caption-text\">Tracing Visualization from Arize Platform<\/p><\/div>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">3) Benchmark performance by generating a dataset with queries and expected responses<\/h3>\n<p>To systematically evaluate our system, we need a benchmark dataset:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">import pandas as pd\r\nfrom langchain_openai import ChatOpenAI\r\n# Define a template for generating questions\r\nGEN_TEMPLATE = \"\"\"\r\nYou are an assistant that generates Q&amp;A questions about the content below.\r\nThe questions should involve the content, specific facts and figures, names, and elements of the story. 
Do not ask any questions where the answer is not in the content.\r\nRespond with one question per line. Do not include any numbering at the beginning of each line. Do not include any category headings.\r\nGenerate 10 questions. Be sure there are no duplicate questions.\r\n[START CONTENT]\r\n{content}\r\n[END CONTENT]\r\n\"\"\"\r\n# Load the content you want to generate questions about\r\ncontent = \"\"\"\r\nLilian Weng discusses various aspects of adversarial attacks on LLMs and prompt engineering techniques. Make sure to use Lilian Weng's name in the questions.\r\n\"\"\"\r\n# Format the template with the content\r\nformatted_template = GEN_TEMPLATE.format(content=content)\r\n# Initialize the language model\r\nmodel = ChatOpenAI(model=\"gpt-4o\", max_tokens=1300)\r\n# Generate questions\r\nresponse = model.invoke(formatted_template)\r\nquestions_content = response.content\r\nquestions = questions_content.strip().split(\"\\n\")\r\n# Create a dataframe to store the questions\r\nquestions_df = pd.DataFrame(questions, columns=[\"input\"])<\/pre>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">4) Evaluate Performance Using LLM as a Judge<\/h3>\n<p>We&#8217;ll use LLM-based evaluation to assess the quality of our agent&#8217;s responses:<\/p>\n<pre class=\"nums:false lang:python decode:true\">from phoenix.evals import (\r\n\u00a0\u00a0\u00a0\u00a0RAG_RELEVANCY_PROMPT_RAILS_MAP,\r\n\u00a0\u00a0\u00a0\u00a0RAG_RELEVANCY_PROMPT_TEMPLATE,\r\n\u00a0\u00a0\u00a0\u00a0QA_PROMPT_RAILS_MAP,\r\n\u00a0\u00a0\u00a0\u00a0QA_PROMPT_TEMPLATE,\r\n\u00a0\u00a0\u00a0\u00a0OpenAIModel,\r\n\u00a0\u00a0\u00a0\u00a0llm_classify\r\n)\r\n\r\n# The rails is used to hold the output to specific values based on the template\r\nRELEVANCE_RAILS = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())\r\nQA_RAILS = list(QA_PROMPT_RAILS_MAP.values())\r\n\r\nrelevance_eval_df = 
llm_classify(\r\n\u00a0\u00a0\u00a0\u00a0dataframe=response_df,\r\n\u00a0\u00a0\u00a0\u00a0template=RAG_RELEVANCY_PROMPT_TEMPLATE,\r\n\u00a0\u00a0\u00a0\u00a0model=OpenAIModel(model=\"gpt-4o\"),\r\n\u00a0\u00a0\u00a0\u00a0rails=RELEVANCE_RAILS,\r\n\u00a0\u00a0\u00a0\u00a0provide_explanation=True,\r\n\u00a0\u00a0\u00a0\u00a0include_prompt=True,\r\n\u00a0\u00a0\u00a0\u00a0concurrency=4,\r\n)\r\n\r\ncorrectness_eval_df = llm_classify(\r\n\u00a0\u00a0\u00a0\u00a0dataframe=response_df,\r\n\u00a0\u00a0\u00a0\u00a0template=QA_PROMPT_TEMPLATE,\r\n\u00a0\u00a0\u00a0\u00a0model=OpenAIModel(model=\"gpt-4o\"),\r\n\u00a0\u00a0\u00a0\u00a0rails=QA_RAILS,\r\n\u00a0\u00a0\u00a0\u00a0provide_explanation=True,\r\n\u00a0\u00a0\u00a0\u00a0include_prompt=True,\r\n\u00a0\u00a0\u00a0\u00a0concurrency=4,\r\n)\r\n<\/pre>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">5) Experiment with Retrieval Settings<\/h3>\n<p>Now let&#8217;s experiment with different configurations to optimize our system:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\"># Create a dataset in Arize to store our experiments\r\nfrom arize.experimental.datasets import ArizeDatasetsClient\r\nfrom uuid import uuid1\r\nfrom arize.experimental.datasets.experiments.types import (\r\n\u00a0\u00a0\u00a0\u00a0ExperimentTaskResultColumnNames,\r\n\u00a0\u00a0\u00a0\u00a0EvaluationResultColumnNames,\r\n)\r\nfrom arize.experimental.datasets.utils.constants import GENERATIVE\r\n# Set up the arize client\r\narize_client = ArizeDatasetsClient(developer_key=DEVELOPER_KEY, api_key=API_KEY)\r\ndataset_name = \"rag-experiments-\" + str(uuid1())[:3]\r\ndataset_id = arize_client.create_dataset(\r\n\u00a0\u00a0\u00a0\u00a0space_id=SPACE_ID,\r\n\u00a0\u00a0\u00a0\u00a0dataset_name=dataset_name,\r\n\u00a0\u00a0\u00a0\u00a0dataset_type=GENERATIVE,\r\n\u00a0\u00a0\u00a0\u00a0data=questions_df,\r\n)\r\ndataset = arize_client.get_dataset(space_id=SPACE_ID, dataset_id=dataset_id)\r\n# Define column mappings for task and evaluation 
results\r\ntask_cols = ExperimentTaskResultColumnNames(\r\n    example_id=\"example_id\", result=\"output\"\r\n)\r\nrelevance_evaluator_cols = EvaluationResultColumnNames(\r\n    label=\"relevance\",\r\n    explanation=\"relevance_explanation\",\r\n)\r\ncorrectness_evaluator_cols = EvaluationResultColumnNames(\r\n    label=\"correctness\",\r\n    explanation=\"correctness_explanation\",\r\n)\r\n\r\n# Function to log experiments to Arize\r\ndef log_experiment_to_arize(experiment_df, experiment_name):\r\n    experiment_df[\"example_id\"] = dataset[\"id\"]\r\n    return arize_client.log_experiment(\r\n        space_id=SPACE_ID,\r\n        experiment_name=experiment_name + \"-\" + str(uuid1())[:2],\r\n        experiment_df=experiment_df,\r\n        task_columns=task_cols,\r\n        evaluator_columns={\r\n            \"correctness\": correctness_evaluator_cols,\r\n            \"relevance\": relevance_evaluator_cols,\r\n        },\r\n        dataset_name=dataset_name,\r\n    )<\/pre>\n<p>Now we&#8217;ll run experiments with different configurations:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\"># Experiment 1: Chunks of 1024 tokens, k=2\r\nreset_vector_store(vector_store, chunk_size=1024, chunk_overlap=20)\r\nk_2_chunk_1024_overlap_20 = run_rag(questions_df, k_value=2)\r\nk_2_chunk_1024_overlap_20 = run_evaluators(k_2_chunk_1024_overlap_20)\r\n# Experiment 2: Chunks of 1024 tokens, 
k=4\r\n# (no reset needed here: the vector store is already chunked at 1024 from Experiment 1)\r\nk_4_chunk_1024_overlap_20 = run_rag(questions_df, k_value=4)\r\nk_4_chunk_1024_overlap_20 = run_evaluators(k_4_chunk_1024_overlap_20)\r\n# Experiment 3: Smaller chunks (200 tokens), k=2\r\nreset_vector_store(vector_store, chunk_size=200, chunk_overlap=20)\r\nk_2_chunk_200_overlap_20 = run_rag(questions_df, k_value=2)\r\nk_2_chunk_200_overlap_20 = run_evaluators(k_2_chunk_200_overlap_20)\r\n# Experiment 4: Medium chunks (500 tokens), k=2\r\nreset_vector_store(vector_store, chunk_size=500, chunk_overlap=20)\r\nk_2_chunk_500_overlap_20 = run_rag(questions_df, k_value=2)\r\nk_2_chunk_500_overlap_20 = run_evaluators(k_2_chunk_500_overlap_20)\r\n# Log all experiments to Arize\r\nlog_experiment_to_arize(k_2_chunk_1024_overlap_20, \"k_2_chunk_1024_overlap_20\")\r\nlog_experiment_to_arize(k_4_chunk_1024_overlap_20, \"k_4_chunk_1024_overlap_20\")\r\nlog_experiment_to_arize(k_2_chunk_200_overlap_20, \"k_2_chunk_200_overlap_20\")\r\nlog_experiment_to_arize(k_2_chunk_500_overlap_20, \"k_2_chunk_500_overlap_20\")<\/pre>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">6) Compare Experiments in Arize<\/h3>\n<p>After running all the experiments, you can now view and compare them in the Arize UI. 
The experiments should be visible in your Arize workspace under the dataset name we created earlier.<\/p>\n<div id=\"attachment_17065\" style=\"width: 910px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-17065\" class=\"wp-image-17065 size-large\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-1024x657.png\" alt=\"Experiments comparison view from the Arize platform\" width=\"900\" height=\"577\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-1024x657.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-300x193.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-768x493.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-1536x986.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-1320x847.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1.png 1999w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><p id=\"caption-attachment-17065\" class=\"wp-caption-text\">Experiments comparison view from Arize Platform<\/p><\/div>\n<p>In Arize, you can:<\/p>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li aria-level=\"1\">Compare the overall performance metrics between different configurations<\/li>\n<li aria-level=\"1\">Analyze per-question performance to identify patterns<\/li>\n<li aria-level=\"1\">Examine trace details to understand execution flow<\/li>\n<li aria-level=\"1\">View relevance and correctness scores for each experiment<\/li>\n<li aria-level=\"1\">See explanations for evaluation decisions<\/li>\n<li aria-level=\"1\">Evaluate outputs using an LLM as a judge to score response relevance and correctness<\/li>\n<li aria-level=\"1\">Optimize retrieval settings by experimenting with chunk sizes, overlap configurations, and document 
retrieval limits<\/li>\n<li aria-level=\"1\">Compare and analyze experiments in Arize to determine the best-performing configurations<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<h2>Innovate with Couchbase and Arize AI<\/h2>\n<p>The integration of Couchbase and Arize empowers enterprises to build robust, production-ready GenAI applications with strong observability and optimization capabilities. By leveraging Agentic RAG with monitored retrieval decisions, organizations can improve accuracy, reduce hallucinations, and ensure optimal performance over time.<\/p>\n<p>As enterprises continue to push the boundaries of GenAI, combining high-performance vector storage with AI observability will be key to deploying reliable and scalable applications. With Couchbase and Arize, organizations have the tools to confidently navigate the challenges of enterprise GenAI deployment.<\/p>\n<h2>Additional Resources<\/h2>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li aria-level=\"1\">Get started with the <a href=\"https:\/\/cloud.couchbase.com\/\" target=\"_blank\" rel=\"noopener\">Couchbase Capella developer platform<\/a> for free<\/li>\n<li aria-level=\"1\">Reach out to Arize AI for a demo <a href=\"https:\/\/arize.com\/request-a-demo\/\" target=\"_blank\" rel=\"noopener\">here<\/a><\/li>\n<li aria-level=\"1\">Read more posts and tutorials on <a href=\"https:\/\/www.couchbase.com\/blog\/category\/generative-ai-genai\/\" target=\"_blank\" rel=\"noopener\">Generative AI (GenAI)<\/a><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>As enterprises look to deploy production-ready AI agent applications, Large Language Model (LLM) observability has emerged as a critical requirement for ensuring both performance and trust. 
Organizations need visibility into how agents interact with data, make decisions, and retrieve information [&hellip;]<\/p>\n","protected":false},"author":85604,"featured_media":17063,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[10123,10122,1815,3917,2242,9973,9921],"tags":[],"ppma_author":[10107,10106],"class_list":["post-17061","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-agentic-ai-apps","category-artificial-intelligence-ai","category-best-practices-and-tutorials","category-company","category-connectors","category-generative-ai-genai","category-partners"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.3 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications<\/title>\n<meta name=\"description\" content=\"Couchbase partners with Arize AI to bring observability to production-ready AI agent apps. Learn to build, monitor, optimize Agentic RAG solutions at scale.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications\" \/>\n<meta property=\"og:description\" content=\"Couchbase partners with Arize AI to bring observability to production-ready AI agent apps. 
Learn to build, monitor, optimize Agentic RAG solutions at scale.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2025-04-23T11:50:54+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-16T17:43:34+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2400\" \/>\n\t<meta property=\"og:image:height\" content=\"1256\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Richard Young - Dir. Partner Solutions Architecture, Arize AI, Tanvi Johari, Software Engineer, Couchbase\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Richard Young - Dir. Partner Solutions Architecture, Arize AI\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/\"},\"author\":{\"name\":\"Richard Young - Dir. 
Partner Solutions Architecture, Arize AI\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#\\\/schema\\\/person\\\/093cd9f3536b4ab051e854456e8d9573\"},\"headline\":\"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications\",\"datePublished\":\"2025-04-23T11:50:54+00:00\",\"dateModified\":\"2025-06-16T17:43:34+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/\"},\"wordCount\":1428,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2025\\\/04\\\/blog_header_images_2025-18.png\",\"articleSection\":[\"Agentic AI Applications\",\"Artificial Intelligence (AI)\",\"Best Practices and Tutorials\",\"Company\",\"Connectors\",\"Generative AI (GenAI)\",\"Partners\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/\",\"name\":\"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent 
Applications\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2025\\\/04\\\/blog_header_images_2025-18.png\",\"datePublished\":\"2025-04-23T11:50:54+00:00\",\"dateModified\":\"2025-06-16T17:43:34+00:00\",\"description\":\"Couchbase partners with Arize AI to bring observability to production-ready AI agent apps. Learn to build, monitor, optimize Agentic RAG solutions at scale.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2025\\\/04\\\/blog_header_images_2025-18.png\",\"contentUrl\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2025\\\/04\\\/blog_header_images_2025-18.png\",\"width\":2400,\"height\":1256},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/couchbase-partners-arize-ai\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent 
Applications\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/\",\"name\":\"The Couchbase Blog\",\"description\":\"Couchbase, the NoSQL Database\",\"publisher\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#organization\",\"name\":\"The Couchbase Blog\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/admin-logo.png\",\"contentUrl\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/admin-logo.png\",\"width\":218,\"height\":34,\"caption\":\"The Couchbase Blog\"},\"image\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#\\\/schema\\\/person\\\/093cd9f3536b4ab051e854456e8d9573\",\"name\":\"Richard Young - Dir. 
Partner Solutions Architecture, Arize AI\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2025\\\/04\\\/1570866773537.jpegfe78e2371572797afb80e27c299842d9\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2025\\\/04\\\/1570866773537.jpeg\",\"contentUrl\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2025\\\/04\\\/1570866773537.jpeg\",\"caption\":\"Richard Young - Dir. Partner Solutions Architecture, Arize AI\"},\"sameAs\":[\"https:\\\/\\\/arize.com\"],\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/author\\\/richardyoung\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","acf":[],"authors":[{"term_id":10107,"user_id":85604,"is_guest":0,"slug":"richardyoung","display_name":"Richard Young - Dir. 
Partner Solutions Architecture, Arize AI","avatar_url":{"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1570866773537.jpeg","url2x":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1570866773537.jpeg"},"0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""},{"term_id":10106,"user_id":85603,"is_guest":0,"slug":"tanvijohari","display_name":"Tanvi Johari, Software Engineer, Couchbase","avatar_url":{"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1736197907696-1.jpeg","url2x":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1736197907696-1.jpeg"},"0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/posts\/17061","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/users\/85604"}],"replies":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/comments?post=17061"}],"version-history":[{"count":0,"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/posts\/17061\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/media\/17063"}],"wp:attachment":[{"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/media?parent=17061"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/categories?post=17061"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/tags?post=17061"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=17061"}],"curies":[{"name":"wp"
,"href":"https:\/\/api.w.org\/{rel}","templated":true}]}}