{"id":17851,"date":"2026-01-26T12:58:14","date_gmt":"2026-01-26T20:58:14","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=17851"},"modified":"2026-01-28T08:31:59","modified_gmt":"2026-01-28T16:31:59","slug":"migrate-your-existing-agents-to-couchbase-ai-services","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/pt\/migrate-your-existing-agents-to-couchbase-ai-services\/","title":{"rendered":"Migrate Your Existing Agents to Couchbase AI Services"},"content":{"rendered":"<p><span style=\"font-weight: 400\">A typical AI agent application in 2025 usually involves:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">A cloud-hosted LLM<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">A vector database for retrieval<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">A separate operational database<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Prompt management and tool management tools<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Observability and tracing frameworks<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Grades de prote\u00e7\u00e3o<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">Each tool solves a problem. Collectively, however, they can potentially create architectural sprawl with unpredictable latency, rising operational costs, and governance blind spots. 
As a result, many AI agents never move beyond demos or internal prototypes because the complexity escalates too fast.<\/span><\/p>\n<p><span style=\"font-weight: 400\">This post walks through how we migrated an existing AI agent application to Couchbase AI Services and the Agent Catalog, moving to a single production-ready AI platform.\u00a0<\/span><\/p>\n<h2><span style=\"font-weight: 400\">The Core Problem: Fragmentation Kills Production AI<\/span><\/h2>\n<p><span style=\"font-weight: 400\">It&#8217;s important to understand why agentic systems struggle in production. Most AI agents today are built from too many loosely coupled parts: prompts live in one system, vectors in another, conversations are logged inconsistently, and tools are invoked without clear traceability, making agent behavior difficult to debug. At the same time, sending enterprise data to third-party LLM endpoints introduces compliance and security risks. Finally, governance is usually treated as an afterthought; many frameworks emphasize what an agent can do, but fail to explain why it made a decision, which prompt or tool influenced it, or whether that decision should have been allowed at all. This is an unacceptable gap for real business workflows.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">What Are Couchbase AI Services?<\/span><\/h2>\n<p><span style=\"font-weight: 400\">Building AI applications often involves juggling multiple services: a vector database for memory, an inference provider for LLMs (like OpenAI or Anthropic), and separate infrastructure for embedding models.<\/span><\/p>\n<p><b>Couchbase AI Services<\/b><span style=\"font-weight: 400\"> streamlines this by providing a unified platform where your operational data, vector search, and AI models live together.
It offers:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><b>LLM inference and embeddings API:<\/b><span style=\"font-weight: 400\"> Access popular LLMs (like Llama 3) and embedding models directly within Couchbase Capella, with no external API keys, no extra infrastructure, and no data egress. Your application data stays inside Capella. Queries, vectors, and model inference all happen where the data lives. This enables secure, low-latency AI experiences while meeting privacy and compliance requirements. The key value: data and AI together, with sensitive information kept inside your system.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Unified platform:<\/b><span style=\"font-weight: 400\"> Maintain your database, vectorization, search, and model in a central location.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Integrated Vector Search:<\/b><span style=\"font-weight: 400\"> Perform semantic search directly on your JSON data with millisecond latency.<\/span><\/li>\n<\/ul>\n<h3><span style=\"font-weight: 400\">Why Is This Needed?<\/span><\/h3>\n<p><span style=\"font-weight: 400\">As we move from simple chatbots to <\/span><b>agentic workflows<\/b> <span style=\"font-weight: 400\">-<\/span><span style=\"font-weight: 400\"> where AI models autonomously use tools <\/span><span style=\"font-weight: 400\">-<\/span><span style=\"font-weight: 400\"> latency and setup complexity become major bottlenecks. Couchbase AI Services takes a platform-first approach. By co-locating your data and AI services, it reduces operational overhead and latency. 
In addition, tools like the <\/span><b>Agent Catalog<\/b><span style=\"font-weight: 400\"> help manage hundreds of agent prompts and tools, while providing built-in logging and telemetry for agents.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">At this point, the question shifts from why a platform-first approach matters to how it works in practice.<\/span><\/p>\n<p><span style=\"font-weight: 400\">So let\u2019s explore how you can migrate an existing agentic application, and improve its performance, governance, and reliability along the way.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">What the Current App Looks Like<\/span><\/h2>\n<p><span style=\"font-weight: 400\">The current application is an HR Sourcing Agent designed to automate the initial screening of candidates. The main job of the agent application is to ingest raw resume files (PDFs), understand the content of the resumes using an LLM, and structure the unstructured data into a queryable format enriched with semantic embeddings in Couchbase. It allows HR professionals to upload a new job description and retrieve the best-suited candidates using Couchbase vector search.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">In its current state, the HR Sourcing App is a Python-based microservice that wraps an LLM with the Google ADK. It manually wires together model definitions, agent prompts, and execution pipelines. While functional, the architecture requires the developer to manage session state in memory, handle retry logic, clean raw model outputs, and maintain the integration between the LLM and the database manually. Also, there is no built-in telemetry for our agent.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">The app manually instantiates a model provider.
In this specific case, it connects to a hosted open source model (Qwen 2.5-72B via Nebius) using the <\/span><a href=\"https:\/\/www.litellm.ai\/\"><span style=\"font-weight: 400\">LiteLLM<\/span><\/a><span style=\"font-weight: 400\"> wrapper. The app has to manually spin up a runtime environment for the agent. It initializes an InMemorySessionService to track the state of the conversation (even if short-lived) and a Runner to execute the user&#8217;s input (the resume text) against the agent pipeline.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Migrating the Agent Application to Couchbase AI Services<\/span><\/h2>\n<p><span style=\"font-weight: 400\">Now let\u2019s dive into how to migrate the core logic of our agent to use Couchbase AI Services and the Agent Catalog.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">The new agent uses a LangChain ReAct agent to process job descriptions, performs intelligent candidate matching using vector search, and provides ranked candidate recommendations with explanations.\u00a0<\/span><\/p>\n<h3><span style=\"font-weight: 400\">Prerequisites<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Before we begin, ensure you have:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Python 3.10+ installed.<\/span><\/li>\n<\/ul>\n<h3><span style=\"font-weight: 400\">Install Dependencies<\/span><\/h3>\n<p><span style=\"font-weight: 400\">We&#8217;ll start by installing the necessary packages.
This includes the <\/span><span style=\"font-weight: 400\">agentc<\/span><span style=\"font-weight: 400\"> CLI for the catalog and the LangChain integration packages.<\/span><\/p>\n<pre class=\"lang:default decode:true\">%pip install -q \\\r\n    \"pydantic&gt;=2.0.0,&lt;3.0.0\" \\\r\n    \"python-dotenv&gt;=1.0.0,&lt;2.0.0\" \\\r\n    \"pandas&gt;=2.0.0,&lt;3.0.0\" \\\r\n    \"nest-asyncio&gt;=1.6.0,&lt;2.0.0\" \\\r\n    \"langchain-couchbase&gt;=0.2.4,&lt;0.5.0\" \\\r\n    \"langchain-openai&gt;=0.3.11,&lt;0.4.0\" \\\r\n    \"arize-phoenix&gt;=11.37.0,&lt;12.0.0\" \\\r\n    \"openinference-instrumentation-langchain&gt;=0.1.29,&lt;0.2.0\"\r\n\r\n# Install Agent Catalog\r\n%pip install agentc==1.0.0<\/pre>\n<h3><span style=\"font-weight: 400\">Centralized Model Service (Couchbase AI Model Services Integration)<\/span><\/h3>\n<p><span style=\"font-weight: 400\">In the original <\/span><i><span style=\"font-weight: 400\">adk_resume_agent.py<\/span><\/i><span style=\"font-weight: 400\">, we had to manually instantiate LiteLLM, manage specific provider API keys (Nebius, OpenAI, etc.), and handle the connection logic inside our application code. We will migrate the code to use Couchbase.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Couchbase AI Services provides OpenAI-compatible endpoints that are used by the agents.
For the LLM and embeddings, we use the LangChain OpenAI package, which integrates directly with the LangChain Couchbase connector.<\/span><\/p>\n<h4><span style=\"font-weight: 400\">Enable AI Services<\/span><\/h4>\n<ol>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Navigate to Capella\u2019s AI Services section in the UI.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Deploy the Embeddings and LLM models.<\/span>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">You need to launch an embedding model and an LLM for this demo in the same region as the Capella cluster where the data will be stored.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Deploy an LLM that has tool calling capabilities, such as <\/span><i><span style=\"font-weight: 400\">mistralai\/mistral-7b-instruct-v0.3<\/span><\/i><span style=\"font-weight: 400\">. For embeddings, you can choose a model such as <\/span><i><span style=\"font-weight: 400\">nvidia\/llama-3.2-nv-embedqa-1b-v2<\/span><\/i><span style=\"font-weight: 400\">.<\/span><\/li>\n<\/ul>\n<\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Note the endpoint URL and generate API keys.<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400\">For more details on launching AI models, refer to the <\/span><a href=\"https:\/\/docs.couchbase.com\/ai\/build\/model-service\/model-service.html\"><span style=\"font-weight: 400\">official documentation<\/span><\/a><span style=\"font-weight: 400\">.<\/span><\/p>\n<h4><span style=\"font-weight: 400\">Implementing the Code Logic for LLM and Embedding Models<\/span><\/h4>\n<p><span style=\"font-weight: 400\">We need to configure the endpoints for Capella Model Services.
Capella Model Services are compatible with the OpenAI API format, so we can use the standard <\/span><span style=\"font-weight: 400\">langchain-openai<\/span><span style=\"font-weight: 400\"> library by pointing it to our Capella endpoint. We initialize the embedding model with <\/span><span style=\"font-weight: 400\">OpenAIEmbeddings<\/span><span style=\"font-weight: 400\"> and the LLM with <\/span><span style=\"font-weight: 400\">ChatOpenAI<\/span><span style=\"font-weight: 400\">, pointing both at the Capella endpoint.<\/span><\/p>\n<pre class=\"lang:default decode:true\">import getpass\r\nimport logging\r\nimport os\r\n\r\nfrom langchain_openai import ChatOpenAI, OpenAIEmbeddings\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\n# Model Services Config\r\nCAPELLA_API_ENDPOINT = getpass.getpass(\"Capella Model Services Endpoint: \")\r\nCAPELLA_API_LLM_MODEL = \"mistralai\/mistral-7b-instruct-v0.3\"\r\nCAPELLA_API_LLM_KEY = getpass.getpass(\"LLM API Key: \")\r\nCAPELLA_API_EMBEDDING_MODEL = \"nvidia\/llama-3.2-nv-embedqa-1b-v2\"\r\nCAPELLA_API_EMBEDDINGS_KEY = getpass.getpass(\"Embedding API Key: \")\r\n\r\n# Expose the config as environment variables; setup_ai_services() reads os.getenv below\r\nos.environ[\"CAPELLA_API_ENDPOINT\"] = CAPELLA_API_ENDPOINT\r\nos.environ[\"CAPELLA_API_LLM_MODEL\"] = CAPELLA_API_LLM_MODEL\r\nos.environ[\"CAPELLA_API_LLM_KEY\"] = CAPELLA_API_LLM_KEY\r\nos.environ[\"CAPELLA_API_EMBEDDING_MODEL\"] = CAPELLA_API_EMBEDDING_MODEL\r\nos.environ[\"CAPELLA_API_EMBEDDINGS_KEY\"] = CAPELLA_API_EMBEDDINGS_KEY\r\n\r\ndef setup_ai_services(temperature: float = 0.0):\r\n    embeddings = None\r\n    llm = None\r\n\r\n    if not embeddings and os.getenv(\"CAPELLA_API_ENDPOINT\") and os.getenv(\"CAPELLA_API_EMBEDDINGS_KEY\"):\r\n        try:\r\n            endpoint = os.getenv(\"CAPELLA_API_ENDPOINT\")\r\n            api_key = os.getenv(\"CAPELLA_API_EMBEDDINGS_KEY\")\r\n            model = os.getenv(\"CAPELLA_API_EMBEDDING_MODEL\", \"Snowflake\/snowflake-arctic-embed-l-v2.0\")\r\n\r\n            api_base = endpoint if endpoint.endswith('\/v1') else f\"{endpoint}\/v1\"\r\n\r\n            embeddings = OpenAIEmbeddings(\r\n                model=model,\r\n                api_key=api_key,\r\n                base_url=api_base,\r\n                check_embedding_ctx_length=False,\r\n            )\r\n        except Exception as e:\r\n            logger.error(f\"Couchbase AI embeddings failed: {e}\")\r\n\r\n    if not llm and os.getenv(\"CAPELLA_API_ENDPOINT\") and os.getenv(\"CAPELLA_API_LLM_KEY\"):\r\n        try:\r\n            endpoint = os.getenv(\"CAPELLA_API_ENDPOINT\")\r\n            llm_key = os.getenv(\"CAPELLA_API_LLM_KEY\")\r\n            llm_model = os.getenv(\"CAPELLA_API_LLM_MODEL\", \"deepseek-ai\/DeepSeek-R1-Distill-Llama-8B\")\r\n\r\n            api_base = endpoint if endpoint.endswith('\/v1') else f\"{endpoint}\/v1\"\r\n\r\n            llm = ChatOpenAI(\r\n                model=llm_model,\r\n                base_url=api_base,\r\n                api_key=llm_key,\r\n                temperature=temperature,\r\n            )\r\n            # Smoke-test the connection before returning the client\r\n            test_response = llm.invoke(\"Hello\")\r\n        except Exception as e:\r\n            logger.error(f\"Couchbase AI LLM failed: {e}\")\r\n            llm = None\r\n\r\n    return embeddings, llm<\/pre>\n<p><span style=\"font-weight: 400\">Instead of hardcoding model providers, the agent now connects to a unified Capella endpoint, which acts as an API gateway for both the LLM and the embedding model.\u00a0<\/span><\/p>\n<h3><span style=\"font-weight: 400\">Decoupling Prompts and Tools With Agent Catalog<\/span><\/h3>\n<p><span style=\"font-weight: 400\">The Agent Catalog is a powerful tool for managing the lifecycle of your agent&#8217;s capabilities. Instead of hardcoding prompts and tool definitions in your Python files, you manage them as versioned assets. You can centralize and reuse your tools across your development teams. You can also examine and monitor agent responses with the Agent Tracer. These features provide visibility, control, and traceability for agent development and deployment. Your teams can build agents with confidence, knowing they can be audited and managed effectively.\u00a0\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Without the ability to back-trace agent behavior, it becomes impossible to automate the ongoing trust, validation, and corroboration of the autonomous decisions made by agents.
In the Agent Catalog, this is done by evaluating both the agent&#8217;s code and its conversation transcript with the LLM to assess whether a pending decision or MCP tool lookup is appropriate.\u00a0\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">So let\u2019s incorporate the Agent Catalog into the project.\u00a0<\/span><\/p>\n<h4><span style=\"font-weight: 400\">Adding the Vector Search Tool\u00a0<\/span><\/h4>\n<p><span style=\"font-weight: 400\">We will start by adding our tool definition for the Agent Catalog. In this case we have the vector search tool.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">To add a new Python function as a tool for your agent, you can use the Agent Catalog command-line tool\u2019s add command:<\/span><\/p>\n<pre class=\"lang:default decode:true\">agentc add<\/pre>\n<p><span style=\"font-weight: 400\">If you have an existing Python tool that you want to add to the Agent Catalog, add agentc to your imports and the @agentc.catalog.tool decorator to your tool definition.
In our example, we define a Python function for performing vector search as our tool.\u00a0<\/span><\/p>\n<pre class=\"lang:default decode:true\">\"\"\"\r\nVector search tool for finding candidates based on job descriptions.\r\nThis tool uses Couchbase vector search to find the most relevant candidates.\r\n\r\nUpdated for Agent Catalog v1.0.0 with @tool decorator.\r\n\"\"\"\r\n\r\nimport os\r\nimport logging\r\nfrom typing import List, Dict, Any\r\nfrom datetime import timedelta\r\n\r\nfrom agentc_core.tool import tool\r\nfrom couchbase.cluster import Cluster\r\nfrom couchbase.auth import PasswordAuthenticator\r\nfrom couchbase.options import ClusterOptions\r\nfrom couchbase.vector_search import VectorQuery, VectorSearch\r\nfrom couchbase.search import SearchRequest, MatchNoneQuery\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\n\r\ndef generate_embedding(text: str, embeddings_client) -&gt; List[float]:\r\n    \"\"\"Generate embeddings for text using the provided embeddings client.\"\"\"\r\n    try:\r\n        # Use the embeddings client to generate embeddings\r\n        result = embeddings_client.embed_query(text)\r\n        return result\r\n    except Exception as e:\r\n        logger.error(f\"Error generating embedding: {e}\")\r\n        return [0.0] * 1024  # Return zero vector as fallback\r\n\r\n\r\n@tool(\r\n    name=\"search_candidates_vector\",\r\n    description=\"Search for candidates using vector similarity based on a job description. 
Returns matching candidate profiles ranked by relevance.\",\r\n    annotations={\"category\": \"hr\", \"type\": \"search\"}\r\n)\r\ndef search_candidates_vector(\r\n    job_description: str,\r\n    num_results: int = 5,\r\n    embeddings_client=None,\r\n) -&gt; str:\r\n    \"\"\"\r\n    Search for candidates using vector similarity based on job description.\r\n\r\n    Args:\r\n        job_description: The job description text to search against\r\n        num_results: Number of top candidates to return (default: 5)\r\n        embeddings_client: The embeddings client for generating query embeddings\r\n\r\n    Returns:\r\n        Formatted string with candidate information\r\n    \"\"\"\r\n    try:\r\n        # Get environment variables\r\n        bucket_name = os.getenv(\"CB_BUCKET\", \"travel-sample\")\r\n        scope_name = os.getenv(\"CB_SCOPE\", \"agentc_data\")\r\n        collection_name = os.getenv(\"CB_COLLECTION\", \"candidates\")\r\n        index_name = os.getenv(\"CB_INDEX\", \"candidates_index\")\r\n\r\n        # Connect to Couchbase\r\n        cluster = get_cluster_connection()\r\n        if not cluster:\r\n            return \"Error: Could not connect to database\"\r\n\r\n        bucket = cluster.bucket(bucket_name)\r\n        scope = bucket.scope(scope_name)\r\n        collection = scope.collection(collection_name)  # Use scope.collection(), not bucket.collection()\r\n\r\n        # Generate query embedding\r\n        logger.info(f\"Generating embedding for job description...\")\r\n        if embeddings_client is None:\r\n            return \"Error: Embeddings client not provided\"\r\n\r\n        query_embedding = generate_embedding(job_description, embeddings_client)\r\n\r\n        # Perform vector search\r\n        logger.info(f\"Performing vector search with index: {index_name}\")\r\n        search_req = SearchRequest.create(MatchNoneQuery()).with_vector_search(\r\n            VectorSearch.from_vector_query(\r\n                
VectorQuery(\"embedding\", query_embedding, num_candidates=num_results * 2)\r\n            )\r\n        )\r\n\r\n        result = scope.search(index_name, search_req, timeout=timedelta(seconds=20))\r\n        rows = list(result.rows())\r\n\r\n        if not rows:\r\n            return \"No candidates found matching the job description.\"\r\n\r\n        # Fetch candidate details\r\n        candidates = []\r\n        for row in rows[:num_results]:\r\n            try:\r\n                doc = collection.get(row.id, timeout=timedelta(seconds=5))\r\n                if doc and doc.value:\r\n                    data = doc.value\r\n                    data[\"_id\"] = row.id\r\n                    data[\"_score\"] = row.score\r\n                    candidates.append(data)\r\n            except Exception as e:\r\n                logger.warning(f\"Error fetching candidate {row.id}: {e}\")\r\n                continue\r\n\r\n        # Format results\r\n        if not candidates:\r\n            return \"No candidate details could be retrieved.\"\r\n\r\n        result_text = f\"Found {len(candidates)} matching candidates:\\n\\n\"\r\n\r\n        for i, candidate in enumerate(candidates, 1):\r\n            result_text += f\"**Candidate {i}: {candidate.get('name', 'Unknown')}**\\n\"\r\n            result_text += f\"- Match Score: {candidate.get('_score', 0):.4f}\\n\"\r\n            result_text += f\"- Email: {candidate.get('email', 'N\/A')}\\n\"\r\n            result_text += f\"- Location: {candidate.get('location', 'N\/A')}\\n\"\r\n            result_text += f\"- Years of Experience: {candidate.get('years_experience', 0)}\\n\"\r\n\r\n            skills = candidate.get('skills', [])\r\n            if skills:\r\n                result_text += f\"- Skills: {', '.join(skills[:10])}\\n\"\r\n\r\n            technical_skills = candidate.get('technical_skills', [])\r\n            if technical_skills:\r\n                result_text += f\"- Technical Skills: {', 
'.join(technical_skills[:10])}\\n\"\r\n\r\n            summary = candidate.get('summary', '')\r\n            if summary:\r\n                # Truncate summary if too long\r\n                summary_text = summary[:200] + \"...\" if len(summary) &gt; 200 else summary\r\n                result_text += f\"- Summary: {summary_text}\\n\"\r\n\r\n            result_text += \"\\n\"\r\n\r\n        return result_text\r\n\r\n    except Exception as e:\r\n        logger.error(f\"Error in vector search: {e}\")\r\n        import traceback\r\n        traceback.print_exc()\r\n        return f\"Error performing candidate search: {str(e)}\"\r\n<\/pre>\n<h4><span style=\"font-weight: 400\">Adding the Prompts\u00a0<\/span><\/h4>\n<p><span style=\"font-weight: 400\">In the original architecture, the agent&#8217;s instructions were buried inside the Python code as large string variables, making them difficult to version or update without a full deployment. With the Agent Catalog, we now define our &#8220;HR Recruiter&#8221; persona as a standalone, managed asset using prompts. Using a structured YAML definition (record_kind: prompt), we create the hr_recruiter_assistant. 
This definition doesn&#8217;t just hold the text; it encapsulates the entire behavior of the agent, strictly defining the ReAct pattern (Thought \u2192 Action \u2192 Observation) that guides the LLM to use the vector search tool effectively.<\/span><\/p>\n<pre class=\"lang:default decode:true\">record_kind: prompt\r\nname: hr_recruiter_assistant\r\ndescription: AI-powered HR recruiter assistant that helps match candidates to job descriptions using vector search\r\nannotations:\r\n  category: hr\r\n  type: recruitment\r\ncontent: |\r\n  You are an expert HR recruiter assistant with deep knowledge of talent acquisition and candidate matching.\r\n  Your role is to help HR professionals find the best candidates for job openings by analyzing job descriptions\r\n  and searching through a database of candidate profiles.\r\n\r\n  You have access to the following tools:\r\n  {tools}\r\n\r\n  Use the following format for your responses:\r\n\r\n  Question: the input question or job description you must analyze\r\n  Thought: think about what information you need to find the best candidates\r\n  Action: the action to take, should be one of [{tool_names}]\r\n  Action Input: the input to the action (for candidate search, provide the job description text)\r\n  Observation: the result of the action\r\n  ... 
(this Thought\/Action\/Action Input\/Observation can repeat N times)\r\n  Thought: I now have enough information to provide recommendations\r\n  Final Answer: Provide a comprehensive summary of the top candidates including:\r\n  - Candidate names and key qualifications\r\n  - Skills match percentage and relevance\r\n  - Years of experience\r\n  - Why each candidate is a good fit for the role\r\n  - Any notable strengths or unique qualifications\r\n\r\n  IMPORTANT GUIDELINES:\r\n  - Always use the search_candidates_vector tool to find candidates\r\n  - Analyze the job description to understand required skills and experience\r\n  - Provide detailed reasoning for candidate recommendations\r\n  - Highlight both technical skills and soft skills when relevant\r\n  - Be specific about match percentages and scores\r\n  - Format your final answer in a clear, professional manner\r\n\r\n  Begin!\r\n\r\n  Question: {input}\r\n  Thought: {agent_scratchpad}\r\n<\/pre>\n<h4><span style=\"font-weight: 400\">Indexing and Publishing the Local Files<\/span><\/h4>\n<p><span style=\"font-weight: 400\">We use the <\/span><span style=\"font-weight: 400\">agentc<\/span><span style=\"font-weight: 400\"> CLI to index our local files and publish them to Couchbase. This stores the metadata in the database, making it searchable and discoverable by the agent at runtime.<\/span><\/p>\n<pre class=\"lang:default decode:true\"># Create local index of tools and prompts\r\nagentc index .\r\n\r\n# Upload to Couchbase\r\nagentc publish\r\n<\/pre>\n<p><span style=\"font-weight: 400\">In our code, we initialize the Catalog and use catalog.find() to retrieve verified prompts and tools.
We no longer hardcode prompts; instead, we fetch them from the catalog at runtime.<\/span><\/p>\n<pre class=\"lang:default decode:true\"># BEFORE: Hardcoded Prompt Strings\r\n# parse_instruction = \"You are a resume parsing assistant...\"\r\n\r\nimport agentc\r\nfrom agentc import Catalog, Span\r\n\r\n# AFTER: Dynamic Asset Loading\r\ncatalog = Catalog()\r\n\r\n# Load the \"search\" tool dynamically\r\ntool_result = catalog.find(\"tool\", name=\"search_candidates_vector\")\r\n\r\n# Load the \"recruiter\" persona dynamically\r\nprompt_result = catalog.find(\"prompt\", name=\"hr_recruiter_assistant\")\r\n\r\n# We act on the retrieved metadata\r\ntools = [Tool(name=tool_result.meta.name, func=...)]\r\n<\/pre>\n<h3><span style=\"font-weight: 400\">Standardized Reasoning Engine (LangChain Integration)<\/span><\/h3>\n<p><span style=\"font-weight: 400\">The previous app used a custom SequentialAgent pipeline. While flexible, it meant we had to maintain our own execution loops, error handling, and retry logic for the agent&#8217;s reasoning steps.<\/span><span style=\"font-weight: 400\"><br \/>\n<\/span><\/p>\n<p><span style=\"font-weight: 400\">By leveraging the Agent Catalog&#8217;s compatibility with LangChain, we switched to a standard ReAct (Reason + Act) agent architecture. We simply feed the tools and prompts fetched from the catalog directly into create_react_agent.<\/span><\/p>\n<p><span style=\"font-weight: 400\">What\u2019s the benefit? We get industry-standard reasoning loops <\/span><span style=\"font-weight: 400\">- <\/span><span style=\"font-weight: 400\">Thought -&gt; Action -&gt; Observation <\/span><span style=\"font-weight: 400\">-<\/span><span style=\"font-weight: 400\"> out of the box. The agent can now autonomously decide to search for &#8220;React Developers,&#8221; analyze the results, and then perform a second search for &#8220;Frontend Engineers&#8221; if the first yields few results,
something the linear ADK pipeline struggled with.<\/span><\/p>\n<pre class=\"lang:default decode:true\">   def create_langchain_agent(self, catalog: Catalog, embeddings, llm):\r\n\r\n        try:\r\n           \r\n            # Load tools from catalog using v1.0.0 API\r\n            tool_result = catalog.find(\"tool\", name=\"search_candidates_vector\")\r\n\r\n            # Create tool wrapper that injects embeddings client\r\n            def search_with_embeddings(job_description: str) -&gt; str:\r\n                return tool_result.func(\r\n                    job_description=job_description,\r\n                    num_results=5,\r\n                    embeddings_client=embeddings,\r\n                )\r\n\r\n            tools = [\r\n                Tool(\r\n                    name=tool_result.meta.name,\r\n                    description=tool_result.meta.description,\r\n                    func=search_with_embeddings,\r\n                ),\r\n            ]\r\n\r\n            # Load prompt from catalog using v1.0.0 API\r\n\r\n            prompt_result = catalog.find(\"prompt\", name=\"hr_recruiter_assistant\")\r\n            if prompt_result is None:\r\n                raise ValueError(\"Could not find hr_recruiter_assistant prompt in catalog. 
Run 'agentc index' first.\")\r\n\r\n            custom_prompt = PromptTemplate(\r\n                template=prompt_result.content.strip(),\r\n                input_variables=[\"input\", \"agent_scratchpad\"],\r\n                partial_variables={\r\n                    \"tools\": \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools]),\r\n                    \"tool_names\": \", \".join([tool.name for tool in tools]),\r\n                },\r\n            )\r\n\r\n            # Create agent\r\n            agent = create_react_agent(llm, tools, custom_prompt)\r\n            agent_executor = AgentExecutor(\r\n                agent=agent,\r\n                tools=tools,\r\n                verbose=True,\r\n                handle_parsing_errors=handle_parsing_error,\r\n                max_iterations=5,\r\n                max_execution_time=120,\r\n                early_stopping_method=\"force\",\r\n                return_intermediate_steps=True,\r\n            )\r\n\r\n            logger.info(\"LangChain ReAct agent created successfully\")\r\n            return agent_executor\r\n\r\n        except Exception as e:\r\n            # Close the try block opened above and surface agent-creation failures\r\n            logger.error(f\"Failed to create LangChain agent: {e}\")\r\n            raise<\/pre>\n<h3><span style=\"font-weight: 400\">Built-in Observability (Agent Tracing)<\/span><\/h3>\n<p><span style=\"font-weight: 400\">In the previous agent application, observability was limited to print() statements. There was no way to &#8220;replay&#8221; an agent&#8217;s session to understand why it rejected a specific candidate.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Agent Catalog provides tracing: you can query traces with SQL++, leverage the performance of Couchbase, and inspect the details of prompts and tools on the same platform.<\/span><\/p>\n<p><span style=\"font-weight: 400\">We can add transactional observability using catalog.Span(): we wrap the execution logic in a context manager that logs every thought, action, and result back to Couchbase.
We can now view a full &#8220;trace&#8221; of the recruitment session in the Capella UI, showing exactly how the LLM processed a candidate&#8217;s resume.\u00a0<\/span><\/p>\n<pre class=\"lang:default decode:true\">application_span = catalog.Span(name=\"HR Recruiter Agent\")\r\n\r\n# AFTER: granular observability\r\nwith application_span.new(name=\"job_matching_query\") as query_span:\r\n    # Log the input\r\n    query_span.log(UserContent(value=job_description))\r\n    \r\n    # Run the agent\r\n    response = agent.invoke({\"input\": job_description})\r\n    \r\n    # Log the agent's final decision\r\n    query_span.log(AssistantContent(value=response[\"output\"]))\r\n\r\n<\/pre>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-17852\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/Screenshot-2026-01-26-at-12.56.04-PM.png\" alt=\"Agent trace of the recruitment session in the Capella UI\" width=\"1428\" height=\"914\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/Screenshot-2026-01-26-at-12.56.04-PM.png 1428w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/Screenshot-2026-01-26-at-12.56.04-PM-300x192.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/Screenshot-2026-01-26-at-12.56.04-PM-1024x655.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/Screenshot-2026-01-26-at-12.56.04-PM-768x492.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/Screenshot-2026-01-26-at-12.56.04-PM-18x12.png 18w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/Screenshot-2026-01-26-at-12.56.04-PM-1320x845.png 1320w\" sizes=\"auto, (max-width: 1428px) 100vw, 1428px\" \/><\/p>\n<h2><b>Conclusion<\/b><\/h2>\n<p><span style=\"font-weight: 400\">AI agents fail in production not because LLMs lack capability, but because agentic systems can become too complex.
By adopting a platform-first approach with Couchbase AI Services and the Agent Catalog, we transformed a complex agent into a governed, scalable agentic system.<\/span><\/p>\n<p><span style=\"font-weight: 400\">If you\u2019re building AI agents today, the real question isn\u2019t <\/span><i><span style=\"font-weight: 400\">which LLM to use<\/span><\/i><span style=\"font-weight: 400\">; it\u2019s how you\u2019ll run agents safely, observably, and at scale. Couchbase AI Services are built for exactly that.<\/span><\/p>
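Since traces land in ordinary Couchbase documents, auditing a session becomes a query rather than a log-scraping exercise. The SQL++ sketch below is illustrative, not definitive: the keyspace (`agent_activity.logs`) and field names are assumptions, so point it at wherever your agentc configuration actually writes activity logs.

```sql
-- Hypothetical SQL++ trace query; keyspace and field names are assumptions.
-- Replace `my-bucket`.agent_activity.logs with your agentc activity keyspace.
SELECT l.timestamp, l.content
FROM `my-bucket`.agent_activity.logs AS l
WHERE l.span_name = "job_matching_query"
ORDER BY l.timestamp ASC;
```

Because this is the same cluster that serves the application, the query can join trace entries against candidate documents directly, with no export step.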