{"id":17061,"date":"2025-04-23T04:50:54","date_gmt":"2025-04-23T11:50:54","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=17061"},"modified":"2025-06-16T10:43:34","modified_gmt":"2025-06-16T17:43:34","slug":"couchbase-partners-arize-ai","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/pt\/couchbase-partners-arize-ai\/","title":{"rendered":"Couchbase faz parceria com Arize AI para permitir aplicativos de agentes de IA confi\u00e1veis e prontos para produ\u00e7\u00e3o"},"content":{"rendered":"<p>\u00c0 medida que as empresas buscam implementar aplicativos de agentes de IA prontos para produ\u00e7\u00e3o, a observabilidade do modelo de linguagem grande (LLM) surgiu como um requisito essencial para garantir o desempenho e a confian\u00e7a. As organiza\u00e7\u00f5es precisam ter visibilidade de como os agentes interagem com os dados, tomam decis\u00f5es e recuperam informa\u00e7\u00f5es para manter a confiabilidade, a seguran\u00e7a e a conformidade. Sem a observabilidade adequada, as empresas correm o risco de implantar modelos que produzam resultados inconsistentes, imprecisos ou tendenciosos, levando a experi\u00eancias de usu\u00e1rio ruins e inefici\u00eancias operacionais. A nova parceria entre <a href=\"https:\/\/www.couchbase.com\/blog\/pt\/\" target=\"_blank\" rel=\"noopener\">Couchbase <\/a>e <a href=\"https:\/\/arize.com\/\" target=\"_blank\" rel=\"noopener\">Arize AI<\/a> desempenha um papel fundamental ao trazer recursos robustos de monitoramento, avalia\u00e7\u00e3o e otimiza\u00e7\u00e3o para aplicativos orientados por IA.<\/p>\n<p>A integra\u00e7\u00e3o do Couchbase e do Arize AI oferece uma solu\u00e7\u00e3o avan\u00e7ada para a cria\u00e7\u00e3o e o monitoramento de aplicativos RAG (Retrieval Augmented Generation) e de agentes em escala. 
By leveraging Couchbase's high-performance vector database, Arize AI's observability platform, and enhanced monitoring capabilities, enterprises can confidently build, deploy, and optimize agentic RAG solutions in production.<\/p>\n<p>In this blog, we will build an agentic RAG Q&amp;A chatbot using LangGraph and the Couchbase <a href=\"https:\/\/couchbaselabs.github.io\/agent-catalog\/index.html#\" target=\"_blank\" rel=\"noopener\">Agent Catalog<\/a> component of the recently announced <a href=\"https:\/\/www.couchbase.com\/blog\/pt\/products\/ai-services\/\" target=\"_blank\" rel=\"noopener\">Capella AI Services<\/a> (in preview), then evaluate and optimize its performance with Arize AI. This is a tangible example of how Couchbase and Arize AI enable developers to improve retrieval workflows, increase response accuracy, and monitor LLM-powered interactions in real time.<\/p>\n<h2>The value of the Couchbase and Arize AI partnership<\/h2>\n<p>By joining forces, Couchbase and Arize AI are changing how developers build and evaluate AI agent applications. 
Developers can build sophisticated agentic applications by leveraging Couchbase Capella as a single data platform for LLM caching, long- and short-term agent memory, vector embedding use cases, analytics, and operational workloads, alongside their favorite agent development framework for orchestrating agent workflows.<\/p>\n<p>Couchbase Agent Catalog further enhances this system by providing a centralized store for multi-agent workflows across an organization, enabling the storage, management, and discovery of agent tools, prompt versioning, and LLM trace debugging.<\/p>\n<p>To ensure high reliability and transparency, Arize AI provides essential observability capabilities, including:<\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li aria-level=\"1\"><b>Agent function call tracing<\/b>: Arize enables detailed monitoring of agent function calls, including retrieval steps and LLM interactions, to trace how responses are generated.<\/li>\n<li aria-level=\"1\"><b>Dataset benchmarking<\/b>: Developers can create a structured dataset to evaluate and compare agent performance over time.<\/li>\n<li aria-level=\"1\"><b>Performance evaluation with LLM as a judge<\/b>: Using built-in evaluators, Arize leverages LLMs to assess response accuracy, relevance, and overall agent effectiveness.<\/li>\n<li aria-level=\"1\"><b>Experimenting with retrieval strategies<\/b>: By adjusting chunk sizes, overlaps, and the number of retrieved documents (the K value), developers can analyze their impact on agent performance.<\/li>\n<li aria-level=\"1\"><b>Comparative analysis in Arize<\/b>: The platform enables side-by-side comparisons of different retrieval strategies, helping teams determine the optimal configuration for their agent.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3 style=\"font-size: 1.5em;\">The importance of LLM observability<\/h3>\n<p>To ensure AI applications perform well in production, enterprises need a robust evaluation framework. Observability tools like Arize AI allow developers to:<\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li aria-level=\"1\">Evaluate LLM outputs on factors such as relevance, hallucination rates, and latency<\/li>\n<li aria-level=\"1\">Run systematic evaluations to measure the impact of prompt changes, retrieval modifications, and parameter tuning<\/li>\n<li aria-level=\"1\">Curate comprehensive datasets to assess performance across different use cases<\/li>\n<li aria-level=\"1\">Automate evaluation processes in CI\/CD pipelines, ensuring consistent application reliability<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>Using an LLM as a judge, Arize AI lets developers measure agent effectiveness with pre-tested evaluators, custom multi-level evaluation techniques, and large-scale performance benchmarking. 
By running thousands of evaluations, teams can iterate quickly, refining LLM prompts, retrieval methods, and agent workflows to improve overall application quality.<\/p>\n<h3 style=\"font-size: 1.5em;\">Building an agentic RAG Q&amp;A chatbot<\/h3>\n<p>Agentic RAG combines the power of traditional retrieval-augmented generation with intelligent decision-making. In this implementation, we let an LLM dynamically decide whether retrieval is necessary based on the context of the query.<\/p>\n<div id=\"attachment_17062\" style=\"width: 910px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-17062\" class=\"wp-image-17062 size-large\" style=\"border: 1px solid Gainsboro;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-1024x373.png\" alt=\"Arize AI for Agentic RAG with Couchbase\" width=\"900\" height=\"328\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-1024x373.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-300x109.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-768x280.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-1536x560.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1-1320x481.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image1-1.png 1615w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><p id=\"caption-attachment-17062\" class=\"wp-caption-text\">Illustration of the agent workflow, from the <a href=\"https:\/\/github.com\/langchain-ai\/langgraph\/blob\/main\/examples\/rag\/langgraph_agentic_rag.ipynb\" target=\"_blank\" rel=\"noopener\">LangGraph agentic RAG example<\/a>.<\/p><\/div>\n<h2>Step-by-step implementation<\/h2>\n<p>The rest of this blog builds on the accompanying <a href=\"https:\/\/github.com\/Arize-ai\/tutorials\/blob\/main\/python\/llm\/agents\/couchbase_langgraph_agentic_rag.ipynb\" target=\"_blank\" rel=\"noopener\">tutorial notebook<\/a>. Before building and deploying an observable AI agent, you will need to set up your development environment.<\/p>\n<h3 style=\"font-size: 1.5em;\">Prerequisites:<\/h3>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li aria-level=\"1\">To follow along with this tutorial, you will need to <a href=\"https:\/\/app.arize.com\/auth\/join\" target=\"_blank\" rel=\"noopener\">sign up for Arize<\/a> and grab your Space, API, and Developer keys. You can find the <a href=\"https:\/\/docs.arize.com\/arize\/llm-tracing\/quickstart-llm#get-your-api-keys\" target=\"_blank\" rel=\"noopener\">guide here<\/a>. You will also need an <a href=\"https:\/\/openai.com\/\" target=\"_blank\" rel=\"noopener\">OpenAI API key<\/a>.<\/li>\n<li aria-level=\"1\">You will need to set up your Couchbase cluster by doing the following:\n<ol>\n<li aria-level=\"2\">Create an account at <a href=\"https:\/\/cloud.couchbase.com\/\" target=\"_blank\" rel=\"noopener\">Couchbase Cloud<\/a><\/li>\n<li aria-level=\"2\">Create a free cluster with the Data, Index, and Search services enabled*<\/li>\n<li aria-level=\"2\">Create cluster access credentials<\/li>\n<li aria-level=\"2\">Allow access to the cluster from your local machine<\/li>\n<li aria-level=\"2\">Create a bucket to store your documents<\/li>\n<li aria-level=\"2\">Create a <a href=\"https:\/\/docs.couchbase.com\/cloud\/vector-search\/create-vector-search-index-ui.html\" target=\"_blank\" rel=\"noopener\">search index<\/a><\/li>\n<\/ol>\n<\/li>\n<li aria-level=\"1\">Create the tools and prompts required by the agents using Couchbase Agent Catalog (for installation and other instructions, explore the documentation <a href=\"https:\/\/couchbaselabs.github.io\/agent-catalog\/index.html\" target=\"_blank\" rel=\"noopener\">here<\/a>)<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<p>*The Search service will be used later to perform semantic search when we use Agent Catalog.<\/p>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">1) Build an agentic RAG chatbot using LangGraph, Couchbase as the vector store, and Agent Catalog to manage AI agents<\/h3>\n<h4 style=\"font-size: 1.25em;\">Dependency setup<\/h4>\n<pre class=\"nums:false wrap:true lang:python decode:true\">%pip install -qU langchain-openai langchain-community langchain langgraph langgraph.prebuilt openai langchain-couchbase agentc langchain-huggingface langchain_core\r\n\r\n%pip install -qq \"arize-phoenix[evals]&gt;=7.0.0\" \"arize-otel&gt;=0.7.0\" \"openinference-instrumentation-openai&gt;=0.1.18\" \"openinference-instrumentation-langchain&gt;=0.1.29\"<\/pre>\n<h4 style=\"font-size: 1.25em;\">Connecting to Couchbase<\/h4>\n<p>We will use Couchbase as our vector store. Here is how to set up the connection:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from datetime import timedelta\r\nfrom couchbase.auth import PasswordAuthenticator\r\nfrom couchbase.cluster import Cluster\r\nfrom couchbase.options import ClusterOptions\r\nfrom langchain_couchbase.vectorstores import CouchbaseSearchVectorStore\r\nfrom langchain_huggingface import HuggingFaceEmbeddings\r\n\r\n# Cluster settings\r\nCB_CONN_STRING = \"your-connection-string\"\r\nCB_USERNAME = \"your-username\"\r\nCB_PASSWORD = \"your-password\"\r\nBUCKET_NAME = \"your-bucket-name\"\r\nSCOPE_NAME = \"your-scope-name\"\r\nCOLLECTION_NAME = \"your-collection-name\"\r\nSEARCH_INDEX_NAME = \"your-search-index-name\"\r\n\r\n# Connect to the Couchbase cluster\r\nauth = PasswordAuthenticator(CB_USERNAME, CB_PASSWORD)\r\noptions = ClusterOptions(auth)\r\noptions.apply_profile(\"wan_development\")\r\ncluster = Cluster(CB_CONN_STRING, options)\r\ncluster.wait_until_ready(timedelta(seconds=5))\r\n\r\n# Initialize the vector store\r\nembeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers\/all-MiniLM-L12-v2\")\r\nvector_store = CouchbaseSearchVectorStore(\r\n    cluster=cluster,\r\n    bucket_name=BUCKET_NAME,\r\n    scope_name=SCOPE_NAME,\r\n    collection_name=COLLECTION_NAME,\r\n    embedding=embeddings,\r\n    index_name=SEARCH_INDEX_NAME,\r\n)<\/pre>\n<h4 style=\"font-size: 1.25em;\">Document ingestion<\/h4>\n<p>We will create a helper function to load and index documents with configurable chunking parameters:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from langchain_community.document_loaders import WebBaseLoader\r\nfrom langchain_text_splitters import RecursiveCharacterTextSplitter\r\n\r\ndef reset_vector_store(vector_store, chunk_size=1024, chunk_overlap=20):\r\n    try:\r\n        # Delete existing documents\r\n        results = vector_store.similarity_search(\r\n            k=1000,\r\n            query=\"\",\r\n            search_options={\r\n                \"query\": {\"field\": \"metadata.source\", \"match\": \"lilian_weng_blog\"}\r\n            },\r\n        )\r\n        if results:\r\n            deleted_ids = [result.id for result in results]\r\n            vector_store.delete(ids=deleted_ids)\r\n\r\n        # Load documents from URLs\r\n        urls = [\r\n            \"https:\/\/lilianweng.github.io\/posts\/2024-07-07-hallucination\/\",\r\n            \"https:\/\/lilianweng.github.io\/posts\/2023-03-15-prompt-engineering\/\",\r\n            \"https:\/\/lilianweng.github.io\/posts\/2023-10-25-adv-attack-llm\/\",\r\n        ]\r\n        docs = [WebBaseLoader(url).load() for url in urls]\r\n        docs_list = [item for sublist in docs for item in sublist]\r\n\r\n        # Use RecursiveCharacterTextSplitter with configurable parameters\r\n        text_splitter = RecursiveCharacterTextSplitter(\r\n            chunk_size=chunk_size,\r\n            chunk_overlap=chunk_overlap,\r\n            separators=[\"\\n\\n\", \"\\n\", \" \", \"\"],\r\n        )\r\n        doc_splits = text_splitter.split_documents(docs_list)\r\n        # Add metadata to the documents\r\n        for doc in doc_splits:\r\n            doc.metadata[\"source\"] = \"lilian_weng_blog\"\r\n\r\n        # Add documents to the vector store\r\n        vector_store.add_documents(doc_splits)\r\n        return vector_store\r\n    except ValueError as e:\r\n        print(f\"Error: {e}\")\r\n\r\n# Initialize with default settings\r\nreset_vector_store(vector_store)<\/pre>\n<h4 style=\"font-size: 1.25em;\">Retriever tool setup<\/h4>\n<p>Fetch our retriever tool from Agent Catalog using the <code>agentc<\/code> provider. In the future, as more tools (and\/or prompts) are needed and the application grows more complex, the Agent Catalog SDK and CLI can be used to automatically fetch tools based on the use case (semantic search) or by name.<\/p>\n<p>For instructions on how this tool was created and more Agent Catalog features, refer to the documentation <a href=\"https:\/\/couchbaselabs.github.io\/agent-catalog\/index.html\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">import agentc.langchain\r\nimport agentc\r\nfrom langchain_core.tools import tool\r\n\r\nprovider = agentc.Provider(\r\n    decorator=lambda t: tool(t.func),\r\n    secrets={\"CB_USERNAME\": CB_USERNAME,\r\n            \"CB_PASSWORD\": CB_PASSWORD,\r\n            \"CB_CONN_STRING\": CB_CONN_STRING})\r\n\r\n# Get the retriever tool from Agent Catalog\r\nretriever_tool = provider.get_item(name=\"retriever_tool\", item_type=\"tool\")\r\ntools = retriever_tool<\/pre>\n<h4 style=\"font-size: 1.25em;\">Defining the agent state<\/h4>\n<p>We will define a graph of agents to help all involved agents communicate with each other better. 
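Before wiring this up with LangGraph, the state-passing idea itself can be shown in a few lines of plain Python: each node receives the shared state and returns a partial update, and an "append" reducer merges that update into the message list instead of replacing it (the role `add_messages` plays in LangGraph). The node functions and the `append_messages` reducer below are illustrative stand-ins, not LangGraph APIs.

```python
# Minimal sketch (plain Python, no LangGraph) of graph-state message passing:
# nodes return partial updates; a reducer appends them to the shared state.
def append_messages(state, update):
    """Reducer: merge a node's partial update by appending its messages."""
    return {"messages": state["messages"] + update["messages"]}

def retrieve_node(state):
    # A real node would call a retriever tool; here we fake a retrieved chunk.
    return {"messages": [("tool", "retrieved: adversarial attack taxonomy")]}

def generate_node(state):
    # A real node would call an LLM over the accumulated messages.
    last = state["messages"][-1][1]
    return {"messages": [("assistant", f"answer based on '{last}'")]}

state = {"messages": [("user", "What attacks does Lilian Weng describe?")]}
for node in (retrieve_node, generate_node):
    state = append_messages(state, node(state))

print(len(state["messages"]))  # 3: user question + tool result + final answer
```

This append-instead-of-replace behavior is exactly what the `add_messages` annotation on the real `AgentState` (defined next) asks LangGraph to do.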
Agents communicate through a <code>state<\/code> object that is passed around to each node and modified with output from that node.<\/p>\n<p>Our state will be a list of <em>messages<\/em>, and each node in our graph will append to it:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from typing import Annotated, Sequence, TypedDict\r\nfrom langchain_core.messages import BaseMessage\r\nfrom langgraph.graph.message import add_messages\r\n\r\nclass AgentState(TypedDict):\r\n    # The add_messages function defines how an update should be processed\r\n    # The default is to replace; add_messages says \"append\"\r\n    messages: Annotated[Sequence[BaseMessage], add_messages]<\/pre>\n<h4 style=\"font-size: 1.25em;\">Creating the agent nodes<\/h4>\n<p>We will define the main components of our agent pipeline:<\/p>\n<p>Nodes: relevance checking function, query rewriter, main agent, response generation<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from typing import Annotated, Literal, Sequence, TypedDict\r\nfrom langchain import hub\r\nfrom langchain_core.messages import BaseMessage, HumanMessage\r\nfrom langchain_core.output_parsers import StrOutputParser\r\nfrom langchain_core.prompts import PromptTemplate\r\nfrom langchain_core.pydantic_v1 import BaseModel, Field\r\nfrom langchain_openai import ChatOpenAI\r\nfrom langgraph.prebuilt import tools_condition\r\n\r\n### Relevance Checking Function\r\ndef grade_documents(state) -&gt; Literal[\"generate\", \"rewrite\"]:\r\n    \"\"\"\r\n    Determines whether the retrieved documents are relevant to the question.\r\n    \"\"\"\r\n    print(\"---CHECK RELEVANCE---\")\r\n    # Data model\r\n    class grade(BaseModel):\r\n        \"\"\"Binary score for relevance check.\"\"\"\r\n        binary_score: str = Field(description=\"Relevance score 'yes' or 'no'\")\r\n    # LLM\r\n    model = ChatOpenAI(temperature=0, model=\"gpt-4o\", streaming=True)\r\n    llm_with_tool = model.with_structured_output(grade)\r\n    # Fetch a prompt from Agent Catalog\r\n    grade_documents_prompt = PromptTemplate(\r\n        template=provider.get_item(name=\"grade_documents\", item_type=\"prompt\").prompt.render(),\r\n        input_variables=[\"context\", \"question\"],\r\n    )\r\n    # Chain\r\n    chain = grade_documents_prompt | llm_with_tool\r\n    messages = state[\"messages\"]\r\n    last_message = messages[-1]\r\n    question = messages[0].content\r\n    docs = last_message.content\r\n\r\n    scored_result = chain.invoke({\"question\": question, \"context\": docs})\r\n    score = scored_result.binary_score\r\n    if score == \"yes\":\r\n        print(\"---DECISION: DOCS RELEVANT---\")\r\n        return \"generate\"\r\n    else:\r\n        print(\"---DECISION: DOCS NOT RELEVANT---\")\r\n        print(score)\r\n        return \"rewrite\"\r\n\r\n### Main Agent Node\r\ndef agent(state):\r\n    \"\"\"\r\n    Invokes the agent model to generate a response or use tools.\r\n    \"\"\"\r\n    print(\"---CALL AGENT---\")\r\n    messages = state[\"messages\"]\r\n    model = ChatOpenAI(temperature=0, streaming=True, model=\"gpt-4-turbo\")\r\n    model = model.bind_tools(tools)\r\n    response = model.invoke(messages)\r\n    return {\"messages\": [response]}\r\n\r\n### Query Rewriting Node\r\ndef rewrite(state):\r\n    \"\"\"\r\n    Transform the query to produce a better question.\r\n    \"\"\"\r\n    print(\"---TRANSFORM QUERY---\")\r\n    messages = state[\"messages\"]\r\n    question = messages[0].content\r\n    msg = [\r\n        HumanMessage(\r\n            content=f\"\"\" \\n\r\n    Look at the input and try to reason about the underlying semantic intent \/ meaning. \\n\r\n    Here is the initial question:\r\n    \\n ------- \\n\r\n    {question}\r\n    \\n ------- \\n\r\n    Formulate an improved question: \"\"\",\r\n        )\r\n    ]\r\n    model = ChatOpenAI(temperature=0, model=\"gpt-4-0125-preview\", streaming=True)\r\n    response = model.invoke(msg)\r\n    return {\"messages\": [response]}\r\n\r\n### Response Generation Node\r\ndef generate(state):\r\n    \"\"\"\r\n    Generate answer using retrieved documents\r\n    \"\"\"\r\n    print(\"---GENERATE---\")\r\n    messages = state[\"messages\"]\r\n    question = messages[0].content\r\n    last_message = messages[-1]\r\n    docs = last_message.content\r\n\r\n    # Prompt\r\n    prompt = hub.pull(\"rlm\/rag-prompt\")\r\n    llm = ChatOpenAI(model_name=\"gpt-4o-mini\", temperature=0, streaming=True)\r\n    rag_chain = prompt | llm | StrOutputParser()\r\n\r\n    # Run\r\n    response = rag_chain.invoke({\"context\": docs, \"question\": question})\r\n    return {\"messages\": [response]}<\/pre>\n<h4 style=\"font-size: 1.25em;\">Building the agent graph<\/h4>\n<p>Now we will connect the nodes into a coherent workflow:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from langgraph.graph import END, StateGraph, START\r\nfrom langgraph.prebuilt import ToolNode\r\n\r\n# Define a new graph\r\nworkflow = StateGraph(AgentState)\r\n\r\n# Define the nodes\r\nworkflow.add_node(\"agent\", agent)\r\nretrieve = ToolNode(retriever_tool)\r\nworkflow.add_node(\"retrieve\", retrieve)\r\nworkflow.add_node(\"rewrite\", rewrite)\r\nworkflow.add_node(\"generate\", generate)\r\n\r\n# Define edges\r\nworkflow.add_edge(START, \"agent\")\r\n\r\n# Conditional edges based on the agent's decision\r\nworkflow.add_conditional_edges(\r\n    \"agent\",\r\n    tools_condition,\r\n    {\r\n        \"tools\": \"retrieve\",\r\n        END: END,\r\n    },\r\n)\r\n\r\n# Conditional edges after retrieval based on document relevance\r\nworkflow.add_conditional_edges(\r\n    \"retrieve\",\r\n    grade_documents,\r\n)\r\nworkflow.add_edge(\"generate\", END)\r\nworkflow.add_edge(\"rewrite\", \"agent\")\r\n\r\n# Compile the graph\r\ngraph = workflow.compile()<\/pre>\n<h4 style=\"font-size: 1.25em;\">Visualizing the agent graph<\/h4>\n<p>Let's visualize our workflow to understand it better:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from IPython.display import Image, display\r\n\r\ntry:\r\n    display(Image(graph.get_graph(xray=True).draw_mermaid_png()))\r\nexcept Exception:\r\n    # This requires some extra dependencies and is optional\r\n    pass<\/pre>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">2) Trace agent function calls with Arize, capturing retrieval queries, LLM responses, and tool usage<\/h3>\n<p>Arize provides comprehensive observability for our agent system. 
Let's set up tracing:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">from arize.otel import register\r\n# Set up the tracer provider\r\ntracer_provider = register(\r\n    space_id = SPACE_ID,\r\n    api_key = API_KEY,\r\n    project_name = \"langgraph-agentic-rag\",\r\n)\r\n# Import the OpenInference automatic instrumentor\r\nfrom openinference.instrumentation.langchain import LangChainInstrumentor\r\n# Instrument LangChain\r\nLangChainInstrumentor().instrument(tracer_provider=tracer_provider)<\/pre>\n<p>Now let's run the agent to see how it works:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">import pprint\r\ninputs = {\r\n    \"messages\": [\r\n        (\"user\", \"What does Lilian Weng say about the types of adversarial attacks on LLMs?\"),\r\n    ]\r\n}\r\nfor output in graph.stream(inputs):\r\n    for key, value in output.items():\r\n        pprint.pprint(f\"Output from node '{key}':\")\r\n        pprint.pprint(value, indent=2, width=80, depth=None)<\/pre>\n<p>This will run our agent graph and produce detailed output from each node as it processes the query. 
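The shape of what the streaming loop above consumes can be sketched without any LLM calls: the compiled graph yields one `{node_name: update}` mapping per executed node, which is what lets an observer log or trace each step as it happens. Here `fake_stream` is a hypothetical stand-in for the compiled LangGraph graph.

```python
# Toy sketch of consuming graph.stream(...): the graph yields one
# {node_name: update} mapping per executed node, in execution order.
def fake_stream(inputs):
    # Stand-in for the compiled graph; yields node updates in order.
    yield {"agent": {"messages": ["tool call: retriever_tool"]}}
    yield {"retrieve": {"messages": ["<retrieved documents>"]}}
    yield {"generate": {"messages": ["final answer"]}}

seen_nodes = []
for output in fake_stream({"messages": [("user", "question")]}):
    for node_name, update in output.items():
        seen_nodes.append(node_name)
        print(f"Output from node '{node_name}': {update['messages'][-1]}")

print(seen_nodes)  # ['agent', 'retrieve', 'generate']
```

With the OpenInference instrumentation registered above, each of these per-node steps also becomes a span in the Arize trace, which is what the trace visualization below is showing.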
No Arize, voc\u00ea poder\u00e1 ver uma visualiza\u00e7\u00e3o de rastreamento que mostra o fluxo de execu\u00e7\u00e3o, a lat\u00eancia e os detalhes de cada chamada de fun\u00e7\u00e3o.<\/p>\n<div id=\"attachment_17064\" style=\"width: 910px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-17064\" class=\"wp-image-17064 size-large\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-1024x614.png\" alt=\"Tracing visualization from Arize platform\" width=\"900\" height=\"540\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-1024x614.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-300x180.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-768x461.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-1536x921.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1-1320x792.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image3-1.png 1999w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><p id=\"caption-attachment-17064\" class=\"wp-caption-text\">Visualiza\u00e7\u00e3o de rastreamento da plataforma Arize<\/p><\/div>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">3) Avaliar o desempenho gerando um conjunto de dados com consultas e respostas esperadas<\/h3>\n<p>Para avaliar sistematicamente nosso sistema, precisamos de um conjunto de dados de refer\u00eancia:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\">import pandas as pd\r\nfrom langchain_openai import ChatOpenAI\r\n# Definir um modelo para gerar perguntas\r\nGEN_TEMPLATE = \"\"\"\r\nVoc\u00ea \u00e9 um assistente que gera perguntas de Q&amp;A sobre o conte\u00fado abaixo.\r\nAs perguntas devem envolver o conte\u00fado, fatos e n\u00fameros 
espec\u00edficos, nomes e elementos da hist\u00f3ria. N\u00e3o fa\u00e7a perguntas cuja resposta n\u00e3o esteja no conte\u00fado.\r\nResponda com uma pergunta por linha. N\u00e3o inclua nenhuma numera\u00e7\u00e3o no in\u00edcio de cada linha. N\u00e3o inclua nenhum t\u00edtulo de categoria.\r\nGere 10 perguntas. Certifique-se de que n\u00e3o haja perguntas duplicadas.\r\n[START CONTENT]\r\n{content}\r\n[END CONTENT]\r\n\"\"\"\r\n# Carregue o conte\u00fado sobre o qual voc\u00ea deseja gerar perguntas\r\ncontent = \"\"\"\r\nLilian Weng discute v\u00e1rios aspectos de ataques advers\u00e1rios a LLMs e t\u00e9cnicas de engenharia de prompt. Certifique-se de usar o nome de Lilian Weng nas perguntas.\r\n\"\"\"\r\n# Formatar o modelo com o conte\u00fado\r\nformatted_template = GEN_TEMPLATE.format(content=content)\r\n# Inicializar o modelo de idioma\r\nmodel = ChatOpenAI(model=\"gpt-4o\", max_tokens=1300)\r\n# Gerar perguntas\r\nresponse = model.invoke(formatted_template)\r\nquestions_content = response.content\r\nquestions = questions_content.strip().split(\"\\n\")\r\n# Criar um dataframe para armazenar as perguntas\r\nquestions_df = pd.DataFrame(questions, columns=[\"input\"])<\/pre>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">4) Avalie o desempenho usando o LLM como juiz<\/h3>\n<p>Usaremos a avalia\u00e7\u00e3o baseada em LLM para avaliar a qualidade das respostas do nosso agente:<\/p>\n<pre class=\"nums:false lang:python decode:true\">from phoenix.evals import (\r\n    RAG_RELEVANCY_PROMPT_RAILS_MAP,\r\n    RAG_RELEVANCY_PROMPT_TEMPLATE,\r\n    QA_PROMPT_RAILS_MAP,\r\n    QA_PROMPT_TEMPLATE,\r\n    OpenAIModel,\r\n    llm_classify\r\n)\r\n\r\n# Os trilhos s\u00e3o usados para manter a sa\u00edda em valores espec\u00edficos com base no modelo\r\nRELEVANCE_RAILS = lista (RAG_RELEVANCY_PROMPT_RAILS_MAP.values())\r\nQA_RAILS = list(QA_PROMPT_RAILS_MAP.values())\r\n\r\nrelevance_eval_df = llm_classify(\r\n    dataframe=response_df,\r\n    
template=RAG_RELEVANCY_PROMPT_TEMPLATE,\r\n    model=OpenAIModel(model=\"gpt-4o\"),\r\n    rails=RELEVANCE_RAILS,\r\n    provide_explanation=True,\r\n    include_prompt=True,\r\n    concurrency=4,\r\n)\r\n\r\ncorrectness_eval_df = llm_classify(\r\n    dataframe=response_df,\r\n    template=QA_PROMPT_TEMPLATE,\r\n    model=OpenAIModel(model=\"gpt-4o\"),\r\n    rails=QA_RAILS,\r\n    provide_explanation=True,\r\n    include_prompt=True,\r\n    concurrency=4,\r\n)\r\n<\/pre>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">5) Experimente as configura\u00e7\u00f5es de recupera\u00e7\u00e3o<\/h3>\n<p>Agora vamos experimentar diferentes configura\u00e7\u00f5es para otimizar nosso sistema:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\"># Crie um conjunto de dados no Arize para armazenar nossos experimentos\r\nfrom arize.experimental.datasets import ArizeDatasetsClient\r\nfrom uuid import uuid1\r\nfrom arize.experimental.datasets.experiments.types import (\r\n    ExperimentTaskResultColumnNames,\r\n    EvaluationResultColumnNames,\r\n)\r\nfrom arize.experimental.datasets.utils.constants import GENERATIVE\r\n# Configurar o cliente arize\r\narize_client = ArizeDatasetsClient(developer_key=DEVELOPER_KEY, api_key=API_KEY)\r\ndataset_name = \"rag-experiments-\" + str(uuid1())[:3]\r\ndataset_id = arize_client.create_dataset(\r\n    space_id=SPACE_ID,\r\n    dataset_name=nome_do_conjunto_de_dados,\r\n    dataset_type=GENERATIVE,\r\n    data=questions_df,\r\n)\r\ndataset = arize_client.get_dataset(space_id=SPACE_ID, dataset_id=dataset_id)\r\n# Definir mapeamentos de coluna para resultados de tarefas e avalia\u00e7\u00f5es\r\ntask_cols = ExperimentTaskResultColumnNames(\r\n    example_id=\"example_id\", result=\"output\"\r\n)\r\nrelevance_evaluator_cols = EvaluationResultColumnNames(\r\n    label=\"relev\u00e2ncia\",\r\n    explanation=\"relevance_explanation\",\r\n)\r\ncorrectness_evaluator_cols = EvaluationResultColumnNames(\r\n    label=\"correctness\",\r\n    
explanation=\"correctness_explanation\",\r\n)\r\n\r\n# Function to log experiments to Arize\r\ndef log_experiment_to_arize(experiment_df, experiment_name):\r\n    experiment_df[\"example_id\"] = dataset[\"id\"]\r\n    return arize_client.log_experiment(\r\n        space_id=SPACE_ID,\r\n        experiment_name=experiment_name + \"-\" + str(uuid1())[:2],\r\n        experiment_df=experiment_df,\r\n        task_columns=task_cols,\r\n        evaluator_columns={\r\n            \"correctness\": correctness_evaluator_cols,\r\n            \"relevance\": relevance_evaluator_cols,\r\n        },\r\n        dataset_name=dataset_name,\r\n    )<\/pre>\n<p>Now we will run experiments with different configurations:<\/p>\n<pre class=\"nums:false wrap:true lang:python decode:true\"># Experiment 1: 1024-token chunks, k=2\r\nreset_vector_store(vector_store, chunk_size=1024, chunk_overlap=20)\r\nk_2_chunk_1024_overlap_20 = run_rag(questions_df, k_value=2)\r\nk_2_chunk_1024_overlap_20 = run_evaluators(k_2_chunk_1024_overlap_20)\r\n# Experiment 2: 1024-token chunks, k=4\r\nk_4_chunk_1024_overlap_20 = run_rag(questions_df, k_value=4)\r\nk_4_chunk_1024_overlap_20 = run_evaluators(k_4_chunk_1024_overlap_20)\r\n# Experiment 3: smaller chunks (200 tokens), k=2\r\nreset_vector_store(vector_store, chunk_size=200, chunk_overlap=20)\r\nk_2_chunk_200_overlap_20 = run_rag(questions_df, k_value=2)\r\nk_2_chunk_200_overlap_20 = run_evaluators(k_2_chunk_200_overlap_20)\r\n# Experiment 4: medium chunks (500 tokens), k=2\r\nreset_vector_store(vector_store, chunk_size=500, chunk_overlap=20)\r\nk_2_chunk_500_overlap_20 = run_rag(questions_df, k_value=2)\r\nk_2_chunk_500_overlap_20 = run_evaluators(k_2_chunk_500_overlap_20)\r\n# Log all experiments to Arize\r\nlog_experiment_to_arize(k_2_chunk_1024_overlap_20, 
\"k_2_chunk_1024_overlap_20\")\r\nlog_experiment_to_arize(k_4_chunk_1024_overlap_20, \"k_4_chunk_1024_overlap_20\")\r\nlog_experiment_to_arize(k_2_chunk_200_overlap_20, \"k_2_chunk_200_overlap_20\")\r\nlog_experiment_to_arize(k_2_chunk_500_overlap_20, \"k_2_chunk_500_overlap_20\")<\/pre>\n<hr \/>\n<h3 style=\"font-size: 1.5em;\">6) Compare experiments in Arize<\/h3>\n<p>Once all the experiments have run, you can view and compare them in the Arize UI. The experiments should be visible in your Arize workspace under the dataset name we created earlier.<\/p>\n<div id=\"attachment_17065\" style=\"width: 910px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-17065\" class=\"wp-image-17065 size-large\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-1024x657.png\" alt=\"\" width=\"900\" height=\"577\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-1024x657.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-300x193.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-768x493.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-1536x986.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1-1320x847.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/image2-1.png 1999w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><p id=\"caption-attachment-17065\" class=\"wp-caption-text\">Experiment comparison view in the Arize platform<\/p><\/div>\n<p>In Arize, you can:<\/p>\n<ol>\n<li aria-level=\"1\">Compare overall performance metrics across different configurations<\/li>\n<li aria-level=\"1\">Analyze per-question performance to identify patterns<\/li>\n<li aria-level=\"1\">Examine trace details to understand the execution flow<\/li>\n<li aria-level=\"1\">View the relevance and correctness scores for each experiment<\/li>\n<li aria-level=\"1\">View the explanations behind each evaluation decision<\/li>\n<li aria-level=\"1\">Evaluate results using an LLM as a judge to score response relevance and correctness<\/li>\n<li aria-level=\"1\">Optimize retrieval configurations by experimenting with chunk sizes, overlap settings, and document retrieval limits<\/li>\n<li aria-level=\"1\">Compare and analyze experiments in Arize to determine the best-performing configurations<\/li>\n<\/ol>\n<h3 style=\"font-size: 1.5em;\">Innovating with Couchbase and Arize AI<\/h3>\n<p>The integration of Couchbase and Arize enables enterprises to build robust, production-ready GenAI applications with strong observability and optimization capabilities. By leveraging Agentic RAG with monitored retrieval decisions, organizations can improve accuracy, reduce hallucinations, and ensure optimal performance over time.<\/p>\n<p>As enterprises continue to push the boundaries of GenAI, combining high-performance vector storage with AI observability will be key to deploying reliable, scalable applications. 
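<\/p>\n<p>Each of the four experiment cells above follows the same pattern: re-chunk and re-index the vector store when the chunking changes, run RAG, run the evaluators, then log. As a minimal, self-contained sketch of that loop (the bodies of run_rag, run_evaluators, and reset_vector_store are defined earlier in the tutorial and are passed in here as placeholders; the loop itself is an illustrative refactor, not the tutorial's exact code):<\/p>

```python
from uuid import uuid1

# Configuration grid mirroring the four experiments above:
# (chunk_size, chunk_overlap, k)
CONFIGS = [
    (1024, 20, 2),
    (1024, 20, 4),
    (200, 20, 2),
    (500, 20, 2),
]

def run_experiments(vector_store, questions_df, run_rag, run_evaluators, reset_vector_store):
    """Run every configuration and return {experiment_name: evaluated_dataframe}."""
    results = {}
    last_chunking = None
    for chunk_size, chunk_overlap, k in CONFIGS:
        # Re-chunk and re-index only when the chunking actually changes
        # (experiment 2 reuses experiment 1's 1024-token index, as above)
        if (chunk_size, chunk_overlap) != last_chunking:
            reset_vector_store(vector_store, chunk_size=chunk_size, chunk_overlap=chunk_overlap)
            last_chunking = (chunk_size, chunk_overlap)
        df = run_rag(questions_df, k_value=k)
        df = run_evaluators(df)
        # Unique name, matching the short uuid suffix used by log_experiment_to_arize
        name = f"k_{k}_chunk_{chunk_size}_overlap_{chunk_overlap}-{str(uuid1())[:2]}"
        results[name] = df
    return results
```

Structuring the grid this way makes it easy to add a fifth configuration (say, a different overlap) without duplicating the reset/run/evaluate boilerplate.<\/p>\n<p>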
With Couchbase and Arize, organizations have the tools to confidently tackle the challenges of enterprise GenAI deployment.<\/p>\n<h2>Additional resources<\/h2>\n<ul>\n<li aria-level=\"1\">Get started with the <a href=\"https:\/\/cloud.couchbase.com\/\" target=\"_blank\" rel=\"noopener\">Couchbase Capella developer platform<\/a> for free<\/li>\n<li aria-level=\"1\">Contact Arize AI for a demo <a href=\"https:\/\/arize.com\/request-a-demo\/\" target=\"_blank\" rel=\"noopener\">here<\/a><\/li>\n<li aria-level=\"1\">Read more posts and tutorials on <a href=\"https:\/\/www.couchbase.com\/blog\/pt\/category\/generative-ai-genai\/\" target=\"_blank\" rel=\"noopener\">generative AI (GenAI)<\/a><\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>As enterprises look to deploy production-ready AI agent applications, Large Language Model (LLM) observability has emerged as a critical requirement for ensuring both performance and trust. 
Organizations need visibility into how agents interact with data, make decisions, and retrieve information [&hellip;]<\/p>","protected":false},"author":85604,"featured_media":17063,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[10123,10122,1815,3917,2242,9973,9921],"tags":[],"ppma_author":[10107,10106],"class_list":["post-17061","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-agentic-ai-apps","category-artificial-intelligence-ai","category-best-practices-and-tutorials","category-company","category-connectors","category-generative-ai-genai","category-partners"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications<\/title>\n<meta name=\"description\" content=\"Couchbase partners with Arize AI to bring observability to production-ready AI agent apps. Learn to build, monitor, optimize Agentic RAG solutions at scale.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/pt\/couchbase-partners-arize-ai\/\" \/>\n<meta property=\"og:locale\" content=\"pt_BR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications\" \/>\n<meta property=\"og:description\" content=\"Couchbase partners with Arize AI to bring observability to production-ready AI agent apps. 
Learn to build, monitor, optimize Agentic RAG solutions at scale.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/pt\/couchbase-partners-arize-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2025-04-23T11:50:54+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-16T17:43:34+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2400\" \/>\n\t<meta property=\"og:image:height\" content=\"1256\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Richard Young - Dir. Partner Solutions Architecture, Arize AI, Tanvi Johari, Software Engineer, Couchbase\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Richard Young - Dir. Partner Solutions Architecture, Arize AI\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/\"},\"author\":{\"name\":\"Richard Young - Dir. 
Partner Solutions Architecture, Arize AI\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/093cd9f3536b4ab051e854456e8d9573\"},\"headline\":\"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications\",\"datePublished\":\"2025-04-23T11:50:54+00:00\",\"dateModified\":\"2025-06-16T17:43:34+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/\"},\"wordCount\":1428,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png\",\"articleSection\":[\"Agentic AI Applications\",\"Artificial Intelligence (AI)\",\"Best Practices and Tutorials\",\"Company\",\"Connectors\",\"Generative AI (GenAI)\",\"Partners\"],\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/\",\"name\":\"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png\",\"datePublished\":\"2025-04-23T11:50:54+00:00\",\"dateModified\":\"2025-06-16T17:43:34+00:00\",\"description\":\"Couchbase partners 
with Arize AI to bring observability to production-ready AI agent apps. Learn to build, monitor, optimize Agentic RAG solutions at scale.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#breadcrumb\"},\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#primaryimage\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png\",\"width\":2400,\"height\":1256},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.couchbase.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"name\":\"The Couchbase Blog\",\"description\":\"Couchbase, the NoSQL Database\",\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-BR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\",\"name\":\"The Couchbase 
Blog\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"width\":218,\"height\":34,\"caption\":\"The Couchbase Blog\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/093cd9f3536b4ab051e854456e8d9573\",\"name\":\"Richard Young - Dir. Partner Solutions Architecture, Arize AI\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/fe78e2371572797afb80e27c299842d9\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1570866773537.jpeg\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1570866773537.jpeg\",\"caption\":\"Richard Young - Dir. Partner Solutions Architecture, Arize AI\"},\"sameAs\":[\"https:\/\/arize.com\"],\"url\":\"https:\/\/www.couchbase.com\/blog\/pt\/author\/richardyoung\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Couchbase faz parceria com Arize AI para permitir aplicativos de agentes de IA confi\u00e1veis e prontos para produ\u00e7\u00e3o","description":"A Couchbase faz parceria com a Arize AI para levar observabilidade a aplicativos de agentes de IA prontos para produ\u00e7\u00e3o. 
Aprenda a criar, monitorar e otimizar solu\u00e7\u00f5es Agentic RAG em escala.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.couchbase.com\/blog\/pt\/couchbase-partners-arize-ai\/","og_locale":"pt_BR","og_type":"article","og_title":"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications","og_description":"Couchbase partners with Arize AI to bring observability to production-ready AI agent apps. Learn to build, monitor, optimize Agentic RAG solutions at scale.","og_url":"https:\/\/www.couchbase.com\/blog\/pt\/couchbase-partners-arize-ai\/","og_site_name":"The Couchbase Blog","article_published_time":"2025-04-23T11:50:54+00:00","article_modified_time":"2025-06-16T17:43:34+00:00","og_image":[{"width":2400,"height":1256,"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png","type":"image\/png"}],"author":"Richard Young - Dir. Partner Solutions Architecture, Arize AI, Tanvi Johari, Software Engineer, Couchbase","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Richard Young - Dir. Partner Solutions Architecture, Arize AI","Est. reading time":"7 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#article","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/"},"author":{"name":"Richard Young - Dir. 
Partner Solutions Architecture, Arize AI","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/093cd9f3536b4ab051e854456e8d9573"},"headline":"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications","datePublished":"2025-04-23T11:50:54+00:00","dateModified":"2025-06-16T17:43:34+00:00","mainEntityOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/"},"wordCount":1428,"commentCount":0,"publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png","articleSection":["Agentic AI Applications","Artificial Intelligence (AI)","Best Practices and Tutorials","Company","Connectors","Generative AI (GenAI)","Partners"],"inLanguage":"pt-BR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/","url":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/","name":"Couchbase faz parceria com Arize AI para permitir aplicativos de agentes de IA confi\u00e1veis e prontos para produ\u00e7\u00e3o","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#primaryimage"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png","datePublished":"2025-04-23T11:50:54+00:00","dateModified":"2025-06-16T17:43:34+00:00","description":"A Couchbase faz parceria com a Arize AI para levar observabilidade a aplicativos de agentes de IA prontos para 
produ\u00e7\u00e3o. Aprenda a criar, monitorar e otimizar solu\u00e7\u00f5es Agentic RAG em escala.","breadcrumb":{"@id":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#breadcrumb"},"inLanguage":"pt-BR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/"]}]},{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#primaryimage","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/blog_header_images_2025-18.png","width":2400,"height":1256},{"@type":"BreadcrumbList","@id":"https:\/\/www.couchbase.com\/blog\/couchbase-partners-arize-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.couchbase.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Couchbase Partners with Arize AI to Enable Trustworthy, Production-Ready AI Agent Applications"}]},{"@type":"WebSite","@id":"https:\/\/www.couchbase.com\/blog\/#website","url":"https:\/\/www.couchbase.com\/blog\/","name":"Blog do Couchbase","description":"Couchbase, o banco de dados NoSQL","publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-BR"},{"@type":"Organization","@id":"https:\/\/www.couchbase.com\/blog\/#organization","name":"Blog do 
Couchbase","url":"https:\/\/www.couchbase.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","width":218,"height":34,"caption":"The Couchbase Blog"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/093cd9f3536b4ab051e854456e8d9573","name":"Richard Young - Dir. Arquitetura de solu\u00e7\u00f5es para parceiros, Arize AI","image":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/fe78e2371572797afb80e27c299842d9","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1570866773537.jpeg","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1570866773537.jpeg","caption":"Richard Young - Dir. Partner Solutions Architecture, Arize AI"},"sameAs":["https:\/\/arize.com"],"url":"https:\/\/www.couchbase.com\/blog\/pt\/author\/richardyoung\/"}]}},"authors":[{"term_id":10107,"user_id":85604,"is_guest":0,"slug":"richardyoung","display_name":"Richard Young - Dir. Partner Solutions Architecture, Arize AI","avatar_url":{"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1570866773537.jpeg","url2x":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1570866773537.jpeg"},"author_category":"","last_name":"Young - Dir. Partner Solutions Architecture, Arize AI","first_name":"Richard","job_title":"Dir. 
Partner Solutions Architecture, Arize AI","user_url":"https:\/\/arize.com","description":""},{"term_id":10106,"user_id":85603,"is_guest":0,"slug":"tanvijohari","display_name":"Tanvi Johari, Software Engineer, Couchbase","avatar_url":{"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1736197907696-1.jpeg","url2x":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/04\/1736197907696-1.jpeg"},"author_category":"","last_name":"Johari, Software Engineer, Couchbase","first_name":"Tanvi","job_title":"Software Engineer, Couchbase","user_url":"","description":""}],"_links":{"self":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts\/17061","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/users\/85604"}],"replies":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/comments?post=17061"}],"version-history":[{"count":0,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts\/17061\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/media\/17063"}],"wp:attachment":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/media?parent=17061"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/categories?post=17061"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/tags?post=17061"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/ppma_author?post=17061"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}