{"id":16068,"date":"2024-08-01T06:51:02","date_gmt":"2024-08-01T13:51:02","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=16068"},"modified":"2025-06-13T20:19:05","modified_gmt":"2025-06-14T03:19:05","slug":"faster-llm-apps-semantic-cache-langchain-couchbase","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/pt\/faster-llm-apps-semantic-cache-langchain-couchbase\/","title":{"rendered":"Build Faster and Cheaper LLM Apps With Couchbase and LangChain"},"content":{"rendered":"<h2><span style=\"font-weight: 400;\">New standard, semantic, and conversational cache with LangChain integration<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">In the rapidly evolving landscape of AI application development, integrating large language models (LLMs) with enterprise data sources has become a critical focus. The ability to harness the power of LLMs to generate high-quality, contextually relevant responses is transforming a range of industries. However, teams face significant challenges in delivering reliable responses at high speed while keeping costs down, especially as the volume of user requests grows. In addition, because most LLMs have limited memory, there is an opportunity to store LLM conversations for an extended period and keep users from starting over from scratch when an LLM's memory runs out.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Couchbase, a leader in highly scalable, low-latency caching (<\/span><a href=\"https:\/\/www.couchbase.com\/blog\/pt\/customers\/linkedin\/\"><span style=\"font-weight: 400;\">read the LinkedIn story<\/span><\/a><span style=\"font-weight: 400;\">), addresses these challenges with innovative solutions. New enhancements to our vector search and caching offering, along with a dedicated LangChain package for developers, make it easy to raise the performance and reliability of generative AI applications.<\/span><\/p>\n<h2>Couchbase Vector Search and Retrieval-Augmented Generation (RAG)<\/h2>\n<p><span style=\"font-weight: 400;\">Couchbase vector search lets users find similar objects without requiring an exact match. It is an advanced capability that enables efficient search and retrieval of data based on vector embeddings, which are mathematical representations of objects in a very large number of dimensions. For example, searching a product catalog for \"brown\" and \"leather\" shoes would return those results as well as \"suede\" shoes in colors including \"mahogany, chestnut, coffee, bronze, auburn, and cocoa\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Retrieval-Augmented Generation (RAG) combines vector search, retrieving information related to the user's prompt from the Couchbase database, and supplies the prompt together with the relevant related information to a generative model to produce more informed, contextually appropriate LLM responses. This is often faster and less expensive than training a custom model. <\/span><span style=\"font-weight: 400;\">Couchbase's <\/span><a href=\"https:\/\/www.couchbase.com\/blog\/pt\/couchbase-capella-advantages-speed-functionality-tco-over-redis\/\"><span style=\"font-weight: 400;\">highly scalable in-memory architecture<\/span><\/a><span style=\"font-weight: 400;\"> provides fast, efficient access for searching relevant vector embedding data. To make a RAG application more efficient and performant, developers can use semantic and conversational caching capabilities.\u00a0<\/span><\/p>\n<h2>Semantic cache<\/h2>\n<p><span style=\"font-weight: 400;\">Semantic caching is a sophisticated caching technique that uses vector embeddings to understand the context and intent behind queries. Unlike traditional caching methods that rely on exact matches, semantic caching leverages the meaning and relevance of the data. This means that similar questions, which would otherwise receive the same answer from an LLM, do not need to make additional requests to the LLM. 
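The idea behind semantic cache matching can be sketched in plain Python. The toy example below uses hand-made three-dimensional embeddings and cosine similarity purely to illustrate the concept of "close enough queries reuse a cached answer"; it is not Couchbase's implementation, and the threshold value is an arbitrary choice for the demo:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ToySemanticCache:
    """Return a cached answer when a new query's embedding is
    close enough to a previously cached query's embedding."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def lookup(self, embedding):
        best, best_sim = None, -1.0
        for emb, answer in self.entries:
            sim = cosine(embedding, emb)
            if sim > best_sim:
                best, best_sim = answer, sim
        return best if best_sim >= self.threshold else None

    def update(self, embedding, answer):
        self.entries.append((embedding, answer))

# Two paraphrases of the same request map to nearby embeddings,
# so the second one is served from the cache instead of the LLM.
cache = ToySemanticCache(threshold=0.9)
cache.update([0.9, 0.1, 0.0], "Brown leather shoes, size 10")
hit = cache.lookup([0.85, 0.15, 0.05])   # similar phrasing -> cache hit
miss = cache.lookup([0.0, 0.1, 0.9])     # unrelated query -> cache miss
```

In a production system the embeddings would come from an embedding model and the similarity search would run inside the database, but the cache-hit logic is the same.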
Continuing the example above, a user searching for \"I'm looking for brown leather shoes in size 10\" would get the same results as another user asking \"I want to buy size 10 shoes in brown leather\".\u00a0<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-16069\" style=\"border: solid 1px black;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/07\/image1-1024x551.jpg\" alt=\"Couchbase Semantic Cache\" width=\"900\" height=\"484\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1-1024x551.jpg 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1-300x161.jpg 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1-768x413.jpg 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1-1536x826.jpg 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1-1320x710.jpg 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg 1708w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">The benefits of semantic caching, especially at higher volumes, include:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\"><b>Improved efficiency -<\/b><span style=\"font-weight: 400;\"> Faster retrieval times thanks to understanding the query's context<\/span><\/li>\n<li style=\"font-weight: 400;\"><b>Lower costs -<\/b><span style=\"font-weight: 400;\"> Reducing calls to the LLM saves time and money<\/span><\/li>\n<\/ul>\n<h2>Conversational cache<\/h2>\n<p><span style=\"font-weight: 400;\">While semantic caching reduces the number of calls to an LLM across a wide range of users, a conversational cache improves the overall user experience by extending the conversational knowledge of the interactions between the user and the LLM. By drawing on the history of questions and answers, the LLM can provide better context as new requests are submitted.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\"> <img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-16070\" style=\"border: solid 1px black;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/07\/image2-1024x581.jpg\" alt=\"Couchbase conversational cache\" width=\"900\" height=\"511\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2-1024x581.jpg 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2-300x170.jpg 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2-768x436.jpg 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2-1536x872.jpg 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2-1320x749.jpg 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2.jpg 1720w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/span><\/p>\n<p><span style=\"font-weight: 400;\">In addition, the conversational cache can be used to help apply reasoning to AI agent workflows. 
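A conversational cache of this kind can be wired up with the langchain-couchbase package's `CouchbaseChatMessageHistory` class. The sketch below assumes a running Couchbase cluster; the connection string, credentials, and bucket/scope/collection names are illustrative placeholders, and the whole snippet is a hedged outline rather than a drop-in implementation:

```python
def build_chat_history(session_id: str):
    """Sketch: persist a user's conversation turns in Couchbase so the
    LLM can be re-prompted with prior context. All connection details
    and names below are placeholders, not real credentials."""
    # Third-party imports are kept inside the function so this module
    # loads even where couchbase / langchain-couchbase are not installed.
    from datetime import timedelta
    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions
    from langchain_couchbase.chat_message_histories import (
        CouchbaseChatMessageHistory,
    )

    auth = PasswordAuthenticator("app_user", "app_password")  # placeholder
    cluster = Cluster("couchbase://localhost", ClusterOptions(auth))
    cluster.wait_until_ready(timedelta(seconds=5))

    return CouchbaseChatMessageHistory(
        cluster=cluster,
        bucket_name="chat",          # illustrative names
        scope_name="_default",
        collection_name="history",
        session_id=session_id,
    )

# Usage sketch (requires a live cluster):
# history = build_chat_history("user-42")
# history.add_user_message("Will this item work with my past purchases?")
# history.add_ai_message("Yes - it pairs well with the shoes you bought.")
```

Because the history is keyed by `session_id` and stored in the database rather than in the model's context window, a conversation can be resumed long after the LLM's own memory would have been exhausted.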
A user might ask, \"Will this item work well with my previously purchased products?\" This first requires resolving the reference \"this item\", followed by reasoning about how to determine whether it will work well with the previous purchases.<\/span><\/p>\n<h3>Dedicated LangChain-Couchbase package<\/h3>\n<p><span style=\"font-weight: 400;\">Couchbase recently introduced LangChain modules designed for Python developers. This package simplifies the integration of Couchbase's advanced capabilities into generative AI applications via LangChain, making it easy for developers to implement powerful features such as vector search and semantic caching.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The LangChain-Couchbase package seamlessly integrates Couchbase's vector search, semantic cache, and conversational cache capabilities into generative AI workflows. This integration lets developers build smarter, more context-aware applications with minimal effort.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By providing a dedicated package, Couchbase ensures that developers can easily access and implement advanced features without dealing with complex configurations. 
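As a rough sketch of how the package is used, the snippet below enables Couchbase's semantic cache globally for LangChain LLM calls via `set_llm_cache`. It assumes an already-connected cluster, an embedding model instance, and an existing vector search index; the bucket, scope, collection, and index names are illustrative placeholders:

```python
def enable_semantic_cache(cluster, embeddings):
    """Sketch: route LangChain LLM calls through Couchbase's semantic
    cache so that semantically similar prompts reuse cached answers.
    All names below are illustrative placeholders."""
    # Third-party imports kept local so the module loads even where
    # langchain-core / langchain-couchbase are not installed.
    from langchain_core.globals import set_llm_cache
    from langchain_couchbase.cache import CouchbaseSemanticCache

    cache = CouchbaseSemanticCache(
        cluster=cluster,               # an already-connected Cluster
        embedding=embeddings,          # e.g. an OpenAIEmbeddings instance
        bucket_name="llm-cache",       # illustrative names
        scope_name="_default",
        collection_name="semantic",
        index_name="semantic-cache-index",  # an existing vector index
    )
    set_llm_cache(cache)  # applies to all subsequent LLM invocations
    return cache
```

Once the cache is registered, no further changes to chain or model code are needed: LangChain consults the cache before each LLM call.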
The package is designed to be developer-friendly, enabling fast, efficient integration.<\/span><\/p>\n<h3>Key features<\/h3>\n<p><span style=\"font-weight: 400;\">The LangChain-Couchbase package offers several key capabilities, including:<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/providers\/couchbase\/\"><b>Vector search<\/b><\/a> <b>-<\/b><span style=\"font-weight: 400;\"> Efficient data retrieval based on vector embeddings<\/span><\/li>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/llm_caching\/#couchbase-cache\"><b>Standard cache<\/b><\/a><span style=\"font-weight: 400;\"> - For faster exact matches<\/span><\/li>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/llm_caching\/#couchbase-semantic-cache\"><b>Semantic cache<\/b><\/a> <b>-<\/b><span style=\"font-weight: 400;\"> Context-aware caching to improve response relevance<\/span><\/li>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/memory\/couchbase_chat_message_history\/\"><b>Conversational cache<\/b><\/a> <span style=\"font-weight: 400;\">- Managing conversation context to enhance user interactions<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>Use cases and examples<\/h2>\n<p><span style=\"font-weight: 400;\">Couchbase's new enhancements can be applied in a variety of scenarios, such as:<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\"><b>E-commerce chatbots -<\/b><span style=\"font-weight: 400;\"> Delivering personalized shopping recommendations based on user preferences<\/span><\/li>\n<li style=\"font-weight: 400;\"><b>Customer support -<\/b><span style=\"font-weight: 400;\"> Providing accurate, contextually relevant answers to customer queries<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Code snippets and tutorials<\/h3>\n<p><span style=\"font-weight: 400;\">Developers can find code snippets and tutorials for implementing semantic caching and the LangChain-Couchbase package on the <\/span><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/llm_caching\/#couchbase-semantic-cache\"><span style=\"font-weight: 400;\">LangChain site<\/span><\/a><span style=\"font-weight: 400;\">. There are also vector search code examples in the Couchbase <\/span><a href=\"https:\/\/github.com\/couchbase-examples\/\"><span style=\"font-weight: 400;\">GitHub repository<\/span><\/a><span style=\"font-weight: 400;\">. These resources provide guidance to help developers get started quickly.<\/span><\/p>\n<h3>Benefits<\/h3>\n<p><span style=\"font-weight: 400;\">Couchbase's enhancements to its vector search and caching offerings for LLM-based applications deliver several benefits, including greater efficiency, relevance, and personalization of responses. These capabilities are designed to address the challenges of building reliable, scalable, cost-effective generative AI applications.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Couchbase is committed to continuous innovation, ensuring that our platform stays at the forefront of AI application development. 
Future enhancements will further expand Couchbase's capabilities, enabling developers to build even more advanced, intelligent applications.<\/span><\/p>\n<h3>Additional resources<\/h3>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li><span style=\"font-weight: 400;\">Blog: <\/span><a href=\"https:\/\/www.couchbase.com\/blog\/pt\/an-overview-of-retrieval-augmented-generation\/\"><span style=\"font-weight: 400;\">An overview of RAG<\/span><\/a><\/li>\n<li><span style=\"font-weight: 400;\">Docs: <\/span><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/providers\/couchbase\/\"><span style=\"font-weight: 400;\">Install the LangChain-Couchbase integration<\/span><\/a><\/li>\n<li><span style=\"font-weight: 400;\">Docs: <\/span><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/vectorstores\/couchbase\/\"><span style=\"font-weight: 400;\">Couchbase as a vector store with LangChain<\/span><\/a><\/li>\n<li><span style=\"font-weight: 400;\">Video: <\/span><a href=\"https:\/\/www.youtube.com\/watch?v=sYy0ob2GqUo\"><span style=\"font-weight: 400;\">Vector and hybrid search<\/span><\/a><\/li>\n<li><span style=\"font-weight: 400;\">Video: <\/span><a href=\"https:\/\/www.youtube.com\/watch?v=_iveSnEikMQ&amp;t=1s\"><span style=\"font-weight: 400;\">Vector Search for mobile applications<\/span><\/a><\/li>\n<li><span style=\"font-weight: 400;\">Docs: <\/span><a href=\"https:\/\/docs.couchbase.com\/cloud\/vector-search\/vector-search.html\"><span style=\"font-weight: 400;\">Vector search in Capella DBaaS<\/span><\/a><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Models supported via <a href=\"https:\/\/python.langchain.com\/v0.2\/api_reference\/couchbase\/index.html\">LangChain and Couchbase<\/a><\/h3>\n<table>\n<tbody>\n<tr>\n<td><a 
href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/ai21\">AI21<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/azureopenai\">AzureOpenAI<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/bge_huggingface\">BGE on Hugging Face<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/aleph_alpha\">Aleph Alpha<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/baichuan\">Baichuan Text Embeddings<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/bookend\">Bookend AI<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/anyscale\">Anyscale<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/baidu_qianfan_endpoint\">Baidu Qianfan<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/clarifai\">Clarifai<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/ascend\">Ascend<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/bedrock\">Bedrock<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/cloudflare_workersai\">Cloudflare Workers AI<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/awadb\">AwaDB<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/bge_huggingface\">BGE on Hugging Face<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/cohere\">Cohere<\/a><\/td>\n<\/tr>\n<tr>\n<td><a 
href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/google_generative_ai\">Google Generative AI Embeddings<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/google_vertex_ai_palm\">Google Vertex AI PaLM<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/gpt4all\">GPT4All<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/jina\">Jina<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/johnsnowlabs_embedding\">John Snow Labs<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/laser\">LASER<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/llamacpp\">Llama.cpp<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/llamafile\">llamafile<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/localai\">LocalAI<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/mini_max\">MiniMax<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/mistralai\">MistralAI<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/nlp_cloud\">NLP Cloud<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/oci_generative_ai\">Oracle Cloud Infrastructure Generative AI<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/ollama\">Ollama<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/openai\">OpenAI<\/a><\/td>\n<\/tr>\n<tr>\n<td><a 
href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/optimum_intel\">Embedding documents using optimized and quantized embedders<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/oracleai\">Oracle AI Vector Search: Generate Embeddings<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/ovhcloud\">OVHcloud<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/pinecone\">Pinecone Embeddings<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/premai\">PremAI<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/sagemaker-endpoint\">SageMaker<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/sambanova\">SambaNova<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/sentence_transformers\">Sentence Transformers on Hugging Face<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/tensorflowhub\">TensorFlow Hub<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/together\">Together AI<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/upstage\">Upstage<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/xinference\">Xorbits Inference (Xinference)<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/volcengine\">Volc Engine<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/voyageai\">
Voyage<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/yandex\">YandexGPT<\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>","protected":false},"excerpt":{"rendered":"<p>New Standard, Semantic and Conversational Cache With LangChain Integration In the rapidly evolving landscape of AI application development, integrating large language models (LLMs) with enterprise data sources has become a critical focus. The ability to harness the power of LLMs [&hellip;]<\/p>","protected":false},"author":77912,"featured_media":16069,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[1814,10122,2225,1816,9973,9417,9937],"tags":[9963],"ppma_author":[9311],"class_list":["post-16068","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-application-design","category-artificial-intelligence-ai","category-cloud","category-couchbase-server","category-generative-ai-genai","category-performance","category-vector-search","tag-langchain"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.8 (Yoast SEO v25.8) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Build Faster and Cheaper LLM Apps With Couchbase and LangChain - The Couchbase Blog<\/title>\n<meta name=\"description\" content=\"The LangChain-Couchbase package integrates Couchbase&#039;s vector search, semantic cache, conversational cache for generative AI workflows.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/pt\/faster-llm-apps-semantic-cache-langchain-couchbase\/\" \/>\n<meta property=\"og:locale\" content=\"pt_BR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Build Faster and 
Cheaper LLM Apps With Couchbase and LangChain\" \/>\n<meta property=\"og:description\" content=\"The LangChain-Couchbase package integrates Couchbase&#039;s vector search, semantic cache, conversational cache for generative AI workflows.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/pt\/faster-llm-apps-semantic-cache-langchain-couchbase\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2024-08-01T13:51:02+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-14T03:19:05+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1708\" \/>\n\t<meta property=\"og:image:height\" content=\"919\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Tim Rottach, Director of Product Line Marketing\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Tim Rottach, Director of Product Line Marketing\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/\"},\"author\":{\"name\":\"Tim Rottach, Director of Product Line Marketing\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/02596c1f54a5dd8d2094d919487485cc\"},\"headline\":\"Build Faster and Cheaper LLM Apps With Couchbase and LangChain\",\"datePublished\":\"2024-08-01T13:51:02+00:00\",\"dateModified\":\"2025-06-14T03:19:05+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/\"},\"wordCount\":1050,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg\",\"keywords\":[\"langchain\"],\"articleSection\":[\"Application Design\",\"Artificial Intelligence (AI)\",\"Couchbase Capella\",\"Couchbase Server\",\"Generative AI (GenAI)\",\"High Performance\",\"Vector Search\"],\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/\",\"name\":\"Build Faster and Cheaper LLM Apps With Couchbase and LangChain - The Couchbase 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg\",\"datePublished\":\"2024-08-01T13:51:02+00:00\",\"dateModified\":\"2025-06-14T03:19:05+00:00\",\"description\":\"The LangChain-Couchbase package integrates Couchbase's vector search, semantic cache, conversational cache for generative AI workflows.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#breadcrumb\"},\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#primaryimage\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg\",\"width\":1708,\"height\":919,\"caption\":\"Couchbase Semantic Cache\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.couchbase.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Build Faster and Cheaper LLM Apps With Couchbase and LangChain\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"name\":\"The Couchbase 
Build Faster and Cheaper LLM Apps With Couchbase and LangChain
By Tim Rottach, Director of Product Line Marketing · The Couchbase Blog · Published August 1, 2024