{"id":17798,"date":"2026-01-19T11:10:11","date_gmt":"2026-01-19T19:10:11","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=17798"},"modified":"2026-01-19T11:13:00","modified_gmt":"2026-01-19T19:13:00","slug":"codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/","title":{"rendered":"Codelab: Building an AI Agent With Couchbase AI Services &#038; Agent Catalog"},"content":{"rendered":"<p>In this CodeLab, you will learn how to build a <strong>Hotel Search Agent<\/strong> using <strong>LangChain<\/strong>, <strong>Couchbase AI Services<\/strong>, and <strong>Agent Catalog<\/strong>. We will also incorporate <strong>Arize Phoenix<\/strong> for observability and evaluation to ensure our agent performs reliably.<\/p>\n<p>This tutorial takes you from zero to a fully functional agent that can search for hotels, filter by amenities, and answer natural language queries using real-world data.<\/p>\n<p>Note: You can find the full Google CodeLab notebook for this CodeLab <a href=\"https:\/\/colab.research.google.com\/github\/couchbase-examples\/agent-catalog-quickstart\/blob\/main\/notebooks\/hotel_search_agent_langchain\/hotel_search_agent_tutorial.ipynb\">here<\/a>. <\/p>\n<h2>What Are Couchbase AI Services?<\/h2>\n<p>Building AI applications often involves juggling multiple services: a vector database for memory, an inference provider for LLMs (like OpenAI or Anthropic), and separate infrastructure for embedding models.<\/p>\n<p><strong>Couchbase AI Services<\/strong> streamlines this by providing a unified platform where your operational data, vector search, and AI models live together. 
It offers:<\/p>\n<ul class=\"list-items\">\n<li><strong>LLM inference and embeddings API:<\/strong> Access popular LLMs (like Llama 3) and embedding models directly within Couchbase Capella, with no external API keys, no extra infrastructure, and no data egress. Your application data stays inside Capella. Queries, vectors, and model inference all happen where the data lives. This enables secure, low-latency AI experiences while meeting privacy and compliance requirements. The key value: data and AI together, without sending sensitive information outside your system.<\/li>\n<li><strong>Unified platform:<\/strong> Database + Vectorization + Search + Model<\/li>\n<li><strong>Integrated vector search:<\/strong> Perform semantic search directly on your JSON data with millisecond latency.<\/li>\n<\/ul>\n<h3>Why Is This Needed?<\/h3>\n<p>As we move from simple chatbots to <strong>agentic workflows<\/strong>, where AI models autonomously use tools, latency and setup complexity become bottlenecks. By co-locating your data and AI services, you reduce operational overhead and latency. Furthermore, tools like the <strong>Agent Catalog<\/strong> help you manage hundreds of agent prompts and tools and provide built-in logging for your agents.<\/p>\n<h2>Prerequisites<\/h2>\n<p>Before we begin, ensure you have:<\/p>\n<ul class=\"list-items\">\n<li>A <a href=\"https:\/\/cloud.couchbase.com\">Couchbase Capella<\/a> account.<\/li>\n<li>Python 3.10+ installed.<\/li>\n<li>Basic familiarity with Python and Jupyter notebooks.<\/li>\n<\/ul>\n<h3>Create a Cluster in Couchbase Capella<\/h3>\n<ol>\n<li>Log into Couchbase Capella.<\/li>\n<li>Create a new cluster or use an existing one. 
Note that the cluster needs to run the latest version of Couchbase Server 8.0 with the Data, Query, Index, and Eventing services.<\/li>\n<li>Create a bucket.<\/li>\n<li>Create a scope and collection for your data.<\/li>\n<\/ol>\n<h2>Step 1: Install Dependencies<\/h2>\n<p>We&#8217;ll start by installing the necessary packages. This includes the <code style=\"color:#78af89\">couchbase-infrastructure<\/code> helper for setup, the <code style=\"color:#78af89\">agentc<\/code> CLI for the catalog, and the LangChain integration packages.<\/p>\n<pre class=\"lang:default decode:true \">\r\n%pip install -q \\\r\n    \"pydantic>=2.0.0,<3.0.0\" \\\r\n    \"python-dotenv>=1.0.0,<2.0.0\" \\\r\n    \"pandas>=2.0.0,<3.0.0\" \\\r\n    \"nest-asyncio>=1.6.0,<2.0.0\" \\\r\n    \"langchain-couchbase>=0.2.4,<0.5.0\" \\\r\n    \"langchain-openai>=0.3.11,<0.4.0\" \\\r\n    \"arize-phoenix>=11.37.0,<12.0.0\" \\\r\n    \"openinference-instrumentation-langchain>=0.1.29,<0.2.0\" \\\r\n    \"couchbase-infrastructure\"\r\n\r\n# Install Agent Catalog \r\n%pip install agentc==1.0.0\r\n<\/pre>\n<h2>Step 2: Infrastructure as Code<\/h2>\n<p>Instead of manually clicking through the UI, we use the <code style=\"color:#78af89\">couchbase-infrastructure<\/code> package to programmatically provision our Capella environment. This ensures a reproducible setup.<\/p>\n<p>We will:<\/p>\n<ol>\n<li>Create a Project and Cluster.<\/li>\n<li>Deploy an Embedding Model (<code style=\"color:#78af89\">nvidia\/llama-3.2-nv-embedqa-1b-v2<\/code>) and an LLM (<code style=\"color:#78af89\">meta\/llama3-8b-instruct<\/code>).<\/li>\n<li>Load the <code style=\"color:#78af89\">travel-sample<\/code> dataset.<\/li>\n<\/ol>\n<p>Couchbase AI Services provides OpenAI-compatible endpoints that the agent calls. 
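Because the endpoints follow the OpenAI API shape, any OpenAI-compatible client works once it is pointed at the right base URL. One practical wrinkle is that a stored endpoint may or may not already end in <code style=\"color:#78af89\">\/v1<\/code>; a small normalizer (a sketch, using a hypothetical host name) keeps client setup uniform:

```python
# Sketch: normalize a Capella AI Services endpoint for OpenAI-compatible clients.
# The host name below is hypothetical; use the endpoint shown in your Capella UI.
def ensure_v1(endpoint: str) -> str:
    """Append "/v1" to an endpoint URL unless it is already present."""
    endpoint = endpoint.rstrip("/")
    return endpoint if endpoint.endswith("/v1") else endpoint + "/v1"

print(ensure_v1("https://ai.example.cloud.couchbase.com"))
print(ensure_v1("https://ai.example.cloud.couchbase.com/v1"))
```

Passing the normalized URL as the client's base URL avoids the inline conditional expressions that otherwise creep into setup code.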
<\/p>\n<pre class=\"lang:default decode:true \">\r\nimport os\r\nfrom getpass import getpass\r\nfrom couchbase_infrastructure import CapellaConfig, CapellaClient\r\nfrom couchbase_infrastructure.resources import (\r\n    create_project,\r\n    create_developer_pro_cluster,\r\n    add_allowed_cidr,\r\n    load_sample_data,\r\n    create_database_user,\r\n    deploy_ai_model,\r\n    create_ai_api_key,\r\n)\r\n\r\n# 1. Collect Credentials\r\nmanagement_api_key = getpass(\"Enter your MANAGEMENT_API_KEY: \")\r\norganization_id = input(\"Enter your ORGANIZATION_ID: \")\r\n\r\nconfig = CapellaConfig(\r\n    management_api_key=management_api_key,\r\n    organization_id=organization_id,\r\n    project_name=\"agent-app\",\r\n    cluster_name=\"agent-app-cluster\",\r\n    db_username=\"agent_app_user\",\r\n    sample_bucket=\"travel-sample\",\r\n    # Using Couchbase AI Services for models\r\n    embedding_model_name=\"nvidia\/llama-3.2-nv-embedqa-1b-v2\",\r\n    llm_model_name=\"meta\/llama3-8b-instruct\",\r\n)\r\n\r\n# 2. Provision Cluster\r\nclient = CapellaClient(config)\r\norg_id = client.get_organization_id()\r\nproject_id = create_project(client, org_id, config.project_name)\r\ncluster_id = create_developer_pro_cluster(client, org_id, project_id, config.cluster_name, config)\r\n\r\n# 3. Network & Data Setup\r\nadd_allowed_cidr(client, org_id, project_id, cluster_id, \"0.0.0.0\/0\") # Allow all IPs for tutorial\r\nload_sample_data(client, org_id, project_id, cluster_id, config.sample_bucket)\r\ndb_password = create_database_user(client, org_id, project_id, cluster_id, config.db_username, config.sample_bucket)\r\n\r\n# 4. Deploy AI Models\r\nprint(\"Deploying AI Models...\")\r\ndeploy_ai_model(client, org_id, config.embedding_model_name, \"agent-hub-embedding-model\", \"embedding\", config)\r\ndeploy_ai_model(client, org_id, config.llm_model_name, \"agent-hub-llm-model\", \"llm\", config)\r\n\r\n# 5. 
Generate API Keys\r\napi_key = create_ai_api_key(client, org_id, config.ai_model_region)\r\n<\/pre>\n<p>Be sure to follow the steps to set up the security root certificate. Secure connections to Couchbase Capella require a root certificate for TLS verification. You can find this in the <em>\ud83d\udcdc Root Certificate Setup<\/em> section of the Google Colab notebook. <\/p>\n<h2>Step 3: Integrating Agent Catalog<\/h2>\n<p>The <a href=\"https:\/\/docs.couchbase.com\/ai\/build\/integrate-agent-with-catalog.html\">Agent Catalog<\/a> is a powerful tool for managing the lifecycle of your agent's capabilities. Instead of hardcoding prompts and tool definitions in your Python files, you manage them as versioned assets. You can centralize and reuse your tools across your development teams. You can also examine and monitor agent responses with the Agent Tracer.<\/p>\n<h3>Initialize and Download Assets<\/h3>\n<p>First, we initialize the catalog and download our pre-defined prompts and tools.<\/p>\n<pre class=\"lang:default decode:true \">\r\n!git init\r\n!agentc init\r\n\r\n# Download example tools and prompts\r\n!mkdir -p prompts tools\r\n!wget -O prompts\/hotel_search_assistant.yaml https:\/\/raw.githubusercontent.com\/couchbase-examples\/agent-catalog-quickstart\/refs\/heads\/main\/notebooks\/hotel_search_agent_langchain\/prompts\/hotel_search_assistant.yaml\r\n!wget -O tools\/search_vector_database.py https:\/\/raw.githubusercontent.com\/couchbase-examples\/agent-catalog-quickstart\/refs\/heads\/main\/notebooks\/hotel_search_agent_langchain\/tools\/search_vector_database.py\r\n!wget -O agentcatalog_index.json https:\/\/raw.githubusercontent.com\/couchbase-examples\/agent-catalog-quickstart\/refs\/heads\/main\/notebooks\/hotel_search_agent_langchain\/agentcatalog_index.json\r\n<\/pre>\n<h3>Index and Publish<\/h3>\n<p>We use <code style=\"color:#78af89\">agentc<\/code> to index our local files and publish them to Couchbase. 
This stores the metadata in your database, making it searchable and discoverable by the agent at runtime.<\/p>\n<pre class=\"lang:default decode:true \">\r\n# Create local index of tools and prompts\r\n!agentc index .\r\n\r\n# Upload to Couchbase\r\n!agentc publish\r\n<\/pre>\n<h2>Step 4: Preparing the Vector Store<\/h2>\n<p>To enable our agent to search for hotels semantically (e.g., \"cozy place near the beach\"), we need to generate vector embeddings for our hotel data.<\/p>\n<p>We define a helper to format our hotel data into a rich text representation, prioritizing location and amenities.<\/p>\n<pre class=\"lang:default decode:true \">\r\nfrom langchain_couchbase.vectorstores import CouchbaseVectorStore\r\n\r\ndef load_hotel_data_to_couchbase(cluster, bucket_name, scope_name, collection_name, embeddings, index_name):\r\n    # Check if data exists\r\n    # ... (omitted for brevity) ...\r\n\r\n    # Generate rich text for each hotel\r\n    # e.g., \"Le Clos Fleuri in Giverny, France. Amenities: Free breakfast: Yes...\"\r\n    hotel_texts = get_hotel_texts() \r\n    \r\n    # Initialize Vector Store connected to Capella\r\n    vector_store = CouchbaseVectorStore(\r\n        cluster=cluster,\r\n        bucket_name=bucket_name,\r\n        scope_name=scope_name,\r\n        collection_name=collection_name,\r\n        embedding=embeddings,\r\n        index_name=index_name,\r\n    )\r\n    \r\n    # Batch upload texts\r\n    vector_store.add_texts(texts=hotel_texts)\r\n    print(f\"Successfully loaded {len(hotel_texts)} hotel embeddings\")\r\n<\/pre>\n<h2>Step 5: Building the LangChain Agent<\/h2>\n<p>We use the <code style=\"color:#78af89\">Agent Catalog<\/code> to fetch our tool definitions and prompts dynamically.  The code remains generic, while your capabilities (tools) and personality (prompts) are managed separately. 
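The rich-text formatting from the previous step can be sketched in plain Python. The field names below (<code style=\"color:#78af89\">name<\/code>, <code style=\"color:#78af89\">city<\/code>, <code style=\"color:#78af89\">country<\/code>, <code style=\"color:#78af89\">free_breakfast<\/code>, <code style=\"color:#78af89\">free_internet<\/code>, <code style=\"color:#78af89\">free_parking<\/code>) are assumptions based on the travel-sample hotel documents; check your own collection's schema before relying on them:

```python
# Sketch of the "rich text" representation generated before embedding.
# Field names are assumed from travel-sample hotel documents.
def hotel_to_text(doc: dict) -> str:
    yn = lambda v: "Yes" if v else "No"  # render booleans as Yes/No
    return (
        f"{doc['name']} in {doc['city']}, {doc['country']}. "
        f"Amenities: Free breakfast: {yn(doc.get('free_breakfast'))}, "
        f"Free internet: {yn(doc.get('free_internet'))}, "
        f"Free parking: {yn(doc.get('free_parking'))}."
    )

sample = {"name": "Le Clos Fleuri", "city": "Giverny", "country": "France",
          "free_breakfast": True, "free_internet": True, "free_parking": True}
print(hotel_to_text(sample))
```

Putting location and amenities in the text (rather than embedding raw JSON) is what lets a query like "cozy place near the beach with free breakfast" match on meaning.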
We will also create our <code style=\"color:#a15b80\">ReAct<\/code> agent.<\/p>\n<pre class=\"lang:default decode:true \">\r\nimport agentc\r\nfrom langchain.agents import AgentExecutor, create_react_agent\r\nfrom langchain_core.prompts import PromptTemplate\r\nfrom langchain_core.tools import Tool\r\n\r\ndef create_langchain_agent(self, catalog, span):\r\n    # 1. Setup AI Services using Capella endpoints\r\n    embeddings, llm = setup_ai_services(framework=\"langchain\")\r\n    \r\n    # 2. Discover Tools from Catalog\r\n    # The catalog.find() method searches your published catalog\r\n    tool_search = catalog.find(\"tool\", name=\"search_vector_database\")\r\n    \r\n    tools = [\r\n        Tool(\r\n            name=tool_search.meta.name,\r\n            description=tool_search.meta.description,\r\n            func=tool_search.func, # The actual python function\r\n        ),\r\n    ]\r\n\r\n    # 3. Discover Prompt from Catalog\r\n    hotel_prompt = catalog.find(\"prompt\", name=\"hotel_search_assistant\")\r\n    \r\n    # 4. Construct the Prompt Template\r\n    custom_prompt = PromptTemplate(\r\n        template=hotel_prompt.content.strip(),\r\n        input_variables=[\"input\", \"agent_scratchpad\"],\r\n        partial_variables={\r\n            \"tools\": \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools]),\r\n            \"tool_names\": \", \".join([tool.name for tool in tools]),\r\n        },\r\n    )\r\n\r\n    # 5. Create the ReAct Agent\r\n    agent = create_react_agent(llm, tools, custom_prompt)\r\n    \r\n    agent_executor = AgentExecutor(\r\n        agent=agent,\r\n        tools=tools,\r\n        verbose=True,\r\n        handle_parsing_errors=True, # Auto-correct formatting errors\r\n        max_iterations=5,\r\n        return_intermediate_steps=True,\r\n    )\r\n    \r\n    return agent_executor\r\n<\/pre>\n<h2>Step 6: Running the Agent<\/h2>\n<p>With the agent initialized, we can perform complex queries. 
The agent will:<\/p>\n<ol style=\"padding-bottom:10px !important;\">\n<li>Receive the user input.<\/li>\n<li>Decide it needs to use the <code style=\"color:#78af89\">search_vector_database<\/code> tool.<\/li>\n<li>Execute the search against Capella.<\/li>\n<li>Synthesize the results into a natural language response.<\/li>\n<\/ol>\n<pre class=\"lang:default decode:true \">\r\n# Initialize Agent Catalog\r\ncatalog = agentc.catalog.Catalog()\r\nspan = catalog.Span(name=\"Hotel Support Agent\", blacklist=set())\r\n\r\n# Create the agent\r\nagent_executor = couchbase_client.create_langchain_agent(catalog, span)\r\n\r\n# Run a query\r\nquery = \"Find hotels in Giverny with free breakfast\"\r\nresponse = agent_executor.invoke({\"input\": query})\r\n\r\nprint(f\"User: {query}\")\r\nprint(f\"Agent: {response['output']}\")\r\n<\/pre>\n<p><strong>Example Output:<\/strong><\/p>\n<p style=\"margin-left:50px;\">\n<strong>Agent:<\/strong> I found a hotel in Giverny that offers free breakfast called <strong>Le Clos Fleuri<\/strong>. It is located at 5 rue de la D\u00eeme, 27620 Giverny. It offers free internet and parking as well.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/agent_catalog_image.png\" alt=\"\" \/><\/p>\n<p><strong>Note:<\/strong> In Capella Model Services, the model outputs can be <a href=\"https:\/\/docs.couchbase.com\/ai\/build\/model-service\/configure-value-adds.html#caching\">cached<\/a> (both semantic and standard cache). The caching mechanism enhances the agent's efficiency and speed, particularly when dealing with repeated or similar queries. When a query is first processed, the LLM generates a response and then stores this response in Couchbase. When similar queries come in later, the cached responses are returned. The caching duration can be configured in Capella Model Services. 
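To build intuition for the "semantic" side of the cache, here is a toy, self-contained sketch (not the actual Capella implementation): a query counts as a hit when its embedding is close enough, by cosine similarity, to an already-cached one.

```python
import math

# Toy semantic-cache decision (NOT the Capella implementation): a query "hits"
# when its embedding is within a cosine-similarity threshold of a cached entry.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

cache = {}  # maps embedding tuple -> cached response

def lookup(embedding, threshold=0.9):
    for cached_emb, response in cache.items():
        if cosine(embedding, cached_emb) >= threshold:
            return response  # cache hit
    return None             # cache miss

# Pretend embeddings for two semantically similar queries
q1 = (0.9, 0.1, 0.4)
q2 = (0.85, 0.15, 0.45)

assert lookup(q2) is None                         # miss: cache is empty
cache[q1] = "Hotels near the Eiffel Tower: ..."
print(lookup(q2))                                 # hit: q2 is close to q1
```

The threshold is the key tuning knob: too low and unrelated queries get stale answers, too high and near-duplicates miss the cache.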
<\/p>\n<h3>Adding Semantic Caching<\/h3>\n<p>Caching is particularly valuable in scenarios where users may submit similar queries multiple times or where certain pieces of information are frequently requested. By storing these in a cache, we can significantly reduce the time it takes to respond to these queries, improving the user experience.<\/p>\n<pre class=\"lang:default decode:true \">\r\nimport os\r\nimport time\r\n\r\nfrom langchain_openai import ChatOpenAI\r\n\r\n## Semantic Caching Demonstration\r\n\r\n# This section demonstrates how to enable and use Semantic Caching with Capella Model Services.\r\n# Semantic caching stores responses for queries and reuses them for semantically similar future queries, significantly reducing latency and cost.\r\n\r\n# 1. Setup LLM with Semantic Caching enabled\r\n# We pass the \"X-cb-cache\": \"semantic\" header to enable the feature\r\nprint(\" Setting up LLM with Semantic Caching enabled...\")\r\nllm_with_cache = ChatOpenAI(\r\n    model=os.environ[\"CAPELLA_API_LLM_MODEL\"],\r\n    base_url=os.environ[\"CAPELLA_API_LLM_ENDPOINT\"] + \"\/v1\" if not os.environ[\"CAPELLA_API_LLM_ENDPOINT\"].endswith(\"\/v1\") else os.environ[\"CAPELLA_API_LLM_ENDPOINT\"],\r\n    api_key=os.environ[\"CAPELLA_API_LLM_KEY\"],\r\n    temperature=0, # Deterministic for caching\r\n    default_headers={\"X-cb-cache\": \"semantic\"}\r\n)\r\n\r\n# 2. Define a query and a semantically similar variation\r\nquery_1 = \"What are the best hotels in Paris with a view of the Eiffel Tower?\"\r\nquery_2 = \"Recommend some hotels in Paris where I can see the Eiffel Tower.\"\r\n\r\nprint(f\"\\n Query 1: {query_1}\")\r\nprint(f\" Query 2 (Semantically similar): {query_2}\")\r\n\r\n# 3. First execution (Cache Miss)\r\nprint(\"\\n Executing Query 1 (First run - Cache MISS)...\")\r\nstart_time = time.time()\r\nresponse_1 = llm_with_cache.invoke(query_1)\r\nend_time = time.time()\r\ntime_1 = end_time - start_time\r\nprint(f\" Time taken: {time_1:.4f} seconds\")\r\nprint(f\" Response: {response_1.content[:100]}...\")\r\n\r\n# 4. 
Second execution (Cache Hit)\r\n# The system should recognize query_2 is semantically similar to query_1 and return the cached response\r\nprint(\"\\n Executing Query 2 (Semantically similar - Cache HIT)...\")\r\nstart_time = time.time()\r\nresponse_2 = llm_with_cache.invoke(query_2)\r\nend_time = time.time()\r\ntime_2 = end_time - start_time\r\nprint(f\" Time taken: {time_2:.4f} seconds\")\r\nprint(f\" Response: {response_2.content[:100]}...\")\r\n<\/pre>\n<h2>Step 7: Observability With Arize Phoenix<\/h2>\n<p>In production, you need to know why an agent gave a specific answer. We use <strong>Arize Phoenix<\/strong> to trace the agent's \"thought process\" (the ReAct chain).<\/p>\n<p>We can also run evaluations to check for <strong>hallucinations<\/strong> or <strong>relevance<\/strong>.<\/p>\n<pre class=\"lang:default decode:true \">\r\nimport phoenix as px\r\nfrom phoenix.evals import llm_classify, LENIENT_QA_PROMPT_TEMPLATE\r\n\r\n# 1. Start Phoenix Server\r\nsession = px.launch_app()\r\n\r\n# 2. Instrument LangChain\r\nfrom openinference.instrumentation.langchain import LangChainInstrumentor\r\nLangChainInstrumentor().instrument()\r\n\r\n# ... Run your agent queries ...\r\n\r\n# 3. Evaluate Results\r\n# We use an LLM-as-a-judge to grade our agent's responses\r\nhotel_qa_results = llm_classify(\r\n    data=hotel_eval_df[[\"input\", \"output\", \"reference\"]],\r\n    model=evaluator_llm,\r\n    template=LENIENT_QA_PROMPT_TEMPLATE,\r\n    rails=[\"correct\", \"incorrect\"],\r\n    provide_explanation=True,\r\n)\r\n<\/pre>\n<p>By inspecting the Phoenix UI, you can visualize the exact sequence of tool calls and see the latency of each step in the chain.<\/p>\n<h2>Conclusion<\/h2>\n<p>We have successfully built a robust Hotel Search Agent. 
This architecture leverages:<\/p>\n<ol>\n<li><strong>Couchbase AI Services:<\/strong> For a unified, low-latency data and AI layer.<\/li>\n<li><strong>Agent Catalog:<\/strong> For organized, versioned management of agent tools and prompts. Agent Catalog also provides tracing: you can query traces with SQL++, leverage the performance of Couchbase, and get insight into the details of prompts and tools in the same platform.<\/li>\n<li><strong>LangChain:<\/strong> For flexible orchestration.<\/li>\n<li><strong>Arize Phoenix:<\/strong> For observability.<\/li>\n<\/ol>\n<p>This approach scales well for teams building complex, multi-agent systems where data management and tool discovery are critical challenges.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this CodeLab, you will learn how to build a Hotel Search Agent using LangChain, Couchbase AI Services, and Agent Catalog. We will also incorporate Arize Phoenix for observability and evaluation to ensure our agent performs reliably. 
This tutorial takes [&hellip;]<\/p>\n","protected":false},"author":49,"featured_media":17828,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[10122],"tags":[10170],"ppma_author":[9023],"class_list":["post-17798","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","tag-ai"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.7 (Yoast SEO v26.7) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Codelab: Building an AI Agent With Couchbase AI Services &amp; Agent Catalog - The Couchbase Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Codelab: Building an AI Agent With Couchbase AI Services &amp; Agent Catalog\" \/>\n<meta property=\"og:description\" content=\"In this CodeLab, you will learn how to build a Hotel Search Agent using LangChain, Couchbase AI Services, and Agent Catalog. We will also incorporate Arize Phoenix for observability and evaluation to ensure our agent performs reliably. 
This tutorial takes [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2026-01-19T19:10:11+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-19T19:13:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain-1.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2400\" \/>\n\t<meta property=\"og:image:height\" content=\"1256\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Laurent Doguin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@ldoguin\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"unstructured.io\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/\"},\"author\":{\"name\":\"Laurent Doguin\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e\"},\"headline\":\"Codelab: Building an AI Agent With Couchbase AI Services &#038; Agent Catalog\",\"datePublished\":\"2026-01-19T19:10:11+00:00\",\"dateModified\":\"2026-01-19T19:13:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/\"},\"wordCount\":1042,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain-1.png\",\"keywords\":[\"AI\"],\"articleSection\":[\"Artificial Intelligence (AI)\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/\",\"name\":\"Codelab: Building 
an AI Agent With Couchbase AI Services & Agent Catalog - The Couchbase Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain-1.png\",\"datePublished\":\"2026-01-19T19:10:11+00:00\",\"dateModified\":\"2026-01-19T19:13:00+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/#primaryimage\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain-1.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain-1.png\",\"width\":2400,\"height\":1256},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.couchbase.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"na
me\":\"Codelab: Building an AI Agent With Couchbase AI Services &#038; Agent Catalog\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"name\":\"The Couchbase Blog\",\"description\":\"Couchbase, the NoSQL Database\",\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\",\"name\":\"The Couchbase Blog\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"width\":218,\"height\":34,\"caption\":\"The Couchbase Blog\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e\",\"name\":\"Laurent Doguin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/12929ce99397769f362b7a90d6b85071\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g\",\"caption\":\"Laurent Doguin\"},\"description\":\"Laurent is a nerdy metal head 
who lives in Paris. He mostly writes code in Java and structured text in AsciiDoc, and often talks about data, reactive programming and other buzzwordy stuff. He is also a former Developer Advocate for Clever Cloud and Nuxeo where he devoted his time and expertise to helping those communities grow bigger and stronger. He now runs Developer Relations at Couchbase.\",\"sameAs\":[\"https:\/\/x.com\/ldoguin\"],\"honorificPrefix\":\"Mr\",\"birthDate\":\"1985-06-07\",\"gender\":\"male\",\"award\":[\"Devoxx Champion\",\"Couchbase Legend\"],\"knowsAbout\":[\"Java\"],\"knowsLanguage\":[\"English\",\"French\"],\"jobTitle\":\"Director Developer Relation & Strategy\",\"worksFor\":\"Couchbase\",\"url\":\"https:\/\/www.couchbase.com\/blog\/author\/laurent-doguin\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Codelab: Building an AI Agent With Couchbase AI Services & Agent Catalog - The Couchbase Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/","og_locale":"en_US","og_type":"article","og_title":"Codelab: Building an AI Agent With Couchbase AI Services & Agent Catalog","og_description":"In this CodeLab, you will learn how to build a Hotel Search Agent using LangChain, Couchbase AI Services, and Agent Catalog. We will also incorporate Arize Phoenix for observability and evaluation to ensure our agent performs reliably. 
This tutorial takes [&hellip;]","og_url":"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/","og_site_name":"The Couchbase Blog","article_published_time":"2026-01-19T19:10:11+00:00","article_modified_time":"2026-01-19T19:13:00+00:00","og_image":[{"width":2400,"height":1256,"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain-1.png","type":"image\/png"}],"author":"Laurent Doguin","twitter_card":"summary_large_image","twitter_creator":"@ldoguin","twitter_misc":{"Written by":"unstructured.io","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/#article","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/"},"author":{"name":"Laurent Doguin","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e"},"headline":"Codelab: Building an AI Agent With Couchbase AI Services &#038; Agent Catalog","datePublished":"2026-01-19T19:10:11+00:00","dateModified":"2026-01-19T19:13:00+00:00","mainEntityOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/"},"wordCount":1042,"commentCount":0,"publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-an-ai-agent-with-couchbase-ai-services-agent-catalog\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain-1.png","keywords":["AI"],"articleSection":["Artificial Intelligence 
<p><em>About the author: Laurent Doguin is Director of Developer Relations &amp; Strategy at Couchbase. A nerdy metal head who lives in Paris, he mostly writes code in Java and structured text in AsciiDoc, and often talks about data, reactive programming, and other buzzwordy stuff. He is a former Developer Advocate for Clever Cloud and Nuxeo, where he devoted his time and expertise to helping those communities grow bigger and stronger.</em></p>