{"id":16352,"date":"2024-09-23T14:08:57","date_gmt":"2024-09-23T21:08:57","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=16352"},"modified":"2025-06-13T16:36:40","modified_gmt":"2025-06-13T23:36:40","slug":"ai-powered-recommendation-engine-llm-rag","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/pt\/ai-powered-recommendation-engine-llm-rag\/","title":{"rendered":"Do conceito ao c\u00f3digo: LLM + RAG com Couchbase"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">As tecnologias GenAI s\u00e3o definitivamente um item de tend\u00eancia em 2023 e 2024 e, como eu trabalho para a\u00a0 <\/span><a href=\"https:\/\/tikalk.com\/\"><span style=\"font-weight: 400;\">Tikal<\/span><\/a><span style=\"font-weight: 400;\">que publica seu pr\u00f3prio relat\u00f3rio anual <\/span><a href=\"https:\/\/tikalk.com\/radar\/\"><span style=\"font-weight: 400;\">radar e tend\u00eancias tecnol\u00f3gicas<\/span><\/a><span style=\"font-weight: 400;\"> relat\u00f3rio, o LLM e a genAI n\u00e3o escaparam da minha aten\u00e7\u00e3o. Como desenvolvedor, frequentemente consulto chatbots de IA generativa para me ajudar a resolver todos os tipos de erros de TypeScript e problemas misteriosos de linting, uso ferramentas de assist\u00eancia genAI em meu IDE e para melhorar meus PRs. Essa tecnologia pode mudar sua vida.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Como pessoal t\u00e9cnico e, definitivamente, n\u00f3s, desenvolvedores de software, essa nova tend\u00eancia abre a oportunidade de integrar esses recursos a todos os projetos em que trabalhamos, e vejo meus amigos e colegas explorando essas op\u00e7\u00f5es, o que me levou \u00e0 decis\u00e3o: eu tamb\u00e9m deveria fazer isso!<\/span><\/p>\n<p><span style=\"font-weight: 400;\">E eu tinha exatamente o projeto:<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Sou um dan\u00e7arino amador que dan\u00e7a em uma trupe de dan\u00e7a amadora. 
I often wonder how amateur performers can explore the vast world of local and worldwide cultural events, so they can reach out and perhaps receive the coveted invitation to perform. We don't have the resources, the connections, or the knowledge of everything that is available. Of course there are search engines and dedicated websites, but you need to know what and how to search, so I decided to use genAI to get recommendations.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Step 1 - Can it be done?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Checking the feasibility of a recommendation engine using one of the LLMs involved opening accounts on several genAI chat services and asking the same question:<\/span><\/p>\n<p><i><span style=\"font-weight: 400;\">We are an <\/span><\/i><b><i>amateur<\/i><\/b> <b><i>Israeli folklore dance<\/i><\/b><i><span style=\"font-weight: 400;\"> group, including dancers in <\/span><\/i><b><i>wheelchairs<\/i><\/b><i><span style=\"font-weight: 400;\">. We are looking for <\/span><\/i><b><i>cultural <\/i><\/b><i><span style=\"font-weight: 400;\">and<\/span><\/i><b><i> folklore<\/i><\/b> <b><i>events<\/i><\/b><i><span style=\"font-weight: 400;\"> and <\/span><\/i><b><i>festivals<\/i><\/b><i><span style=\"font-weight: 400;\"> in <\/span><\/i><b><i>Europe<\/i><\/b><i><span style=\"font-weight: 400;\"> to reach out to about the possibility of receiving an invitation to perform, provided that they <\/span><\/i><b><i>cover our expenses<\/i><\/b><i><span style=\"font-weight: 400;\">. 
Could you recommend some?<\/span><\/i><\/p>\n<p><span style=\"font-weight: 400;\">The results in the first half of 2024 varied across the different chat services:<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Some directed me to dedicated websites I could consult for results<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Some gave me actual results<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Of those that returned results, I ranked the quality of the results by relevance and accuracy, and arrived at <\/span><b>OpenAI GPT-3<\/b><span style=\"font-weight: 400;\"> as the option.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Step 2 - Is it enough?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Remembering that even one of the chat assistants in Step 1 suggested I check other websites - what if I could incorporate some of that data into the results?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Considering that I also depend on who trained the model and when it was trained, I wanted my recommendations to be based on more data sources, and I knew this could be done with RAG. First of all, what is RAG?<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">Retrieval-Augmented Generation (RAG)<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">RAG is the process of enriching and optimizing the results you get from the LLM by adding \"external\" data. 
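<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In code, that process boils down to \"retrieve relevant records, then prepend them to the LLM prompt as context\". A minimal, framework-free sketch of that idea - the function name and the sample events are illustrative only, not part of the actual app:<\/span><\/p>\n<pre class=\"nums:false lang:python decode:true\">def build_rag_prompt(question, retrieved_docs):\r\n    # Stuff the retrieved records into the prompt as context for the LLM\r\n    context = \"; \".join(retrieved_docs)\r\n    return \"Context: \" + context + \" Question: \" + question\r\n\r\n# Hypothetical retrieved events, standing in for real vector-search results\r\nevents = [\"Folklore festival in Prague (July)\", \"Folk dance week in Lisbon (May)\"]\r\nprompt = build_rag_prompt(\"Which festivals could host an amateur troupe?\", events)<\/pre>\n<p><span style=\"font-weight: 400;\">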
If I can add results based on the same search over external data sources (from dedicated websites), I can expand the variety of the results my application will provide.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To do this, you will need:<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">External data sources - for my experiment, I created a trial account for the <\/span><a href=\"https:\/\/www.predicthq.com\/events\/upcoming-events\"><span style=\"font-weight: 400;\">predictHQ events API<\/span><\/a><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Somewhere to store my external data - a mechanism that allows searching by similarity rather than exact match<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\">Making the data accessible for RAG<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">After analyzing the data, what it looks like, and the features it contains, it's time to select the data features you would like to use and make them usable for RAG.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To enable a similarity search, we need to transform our data into a format that is searchable and <\/span><b>comparable.<\/b><span style=\"font-weight: 400;\"> Since we are not looking for exact matches but for similar matches, there are two very common techniques for this:<\/span><\/p>\n<table style=\"border: 1px solid black;\">\n<tbody>\n<tr>\n<td><b>RAG technique<\/b><\/td>\n<td><b>Details<\/b><\/td>\n<\/tr>\n<tr>\n<td>Vector search (aka plain RAG)<\/td>\n<td><p>The information and the question are transformed into <b>vectors<\/b> of numbers 
(floating point).<\/p>\n<p>Mathematical calculations are used to determine the similarity between the question and the data<\/p><\/td>\n<\/tr>\n<tr>\n<td>GraphRAG<\/td>\n<td><p>The information and the question are transformed into <b>graph<\/b> vertices and edges.<\/p>\n<p>The graph relationships are compared for similarity<\/p><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span style=\"font-weight: 400;\">The process of creating this representation of the data is called <\/span><b>embedding<\/b><span style=\"font-weight: 400;\">. In this article, we will focus on vector search.<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">Similarity metric<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">There are three common options (in short):<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Dot product: similarity calculated from the product of the values in each vector<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Cosine: based on the angle between the vectors<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">L2_norm: the Euclidean distance between the vectors, which takes into account both the direction and the length of each vector<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Read more about the <a href=\"https:\/\/www.couchbase.com\/blog\/pt\/vector-similarity-search\/\">vector similarity options<\/a>.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Step 3 - How do I do it?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Before we dive into how we're going to do it, with some actual code and screenshots, let's see how this architecture would be built and how 
Couchbase comes into the picture:<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-16356 aligncenter\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image4-1.jpg\" alt=\"\" width=\"673\" height=\"682\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image4-1.jpg 673w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image4-1-296x300.jpg 296w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image4-1-65x65.jpg 65w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image4-1-50x50.jpg 50w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image4-1-300x304.jpg 300w\" sizes=\"auto, (max-width: 673px) 100vw, 673px\" \/><br style=\"font-weight: 400;\" \/><br style=\"font-weight: 400;\" \/><\/p>\n<p><span style=\"font-weight: 400;\">What this means in practice is:<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">An ingestion app to:<\/span>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">Fetch data from the external API<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">Create vector embeddings<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">Load the data into a Couchbase collection<\/span><\/li>\n<\/ol>\n<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Create a vector search index in Couchbase<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A prompting app to:<\/span>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">Request results 
from the Couchbase data<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">Add the vector search results to the LLM prompt as context<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">Return the cohesive results to the users<\/span><\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3><span style=\"font-weight: 400;\">Ingestion application<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">This process was probably the longest, as I spent time creating embeddings over different fields and in different formats. To keep it simple, I ended up using only the geographic information I collected:<\/span><\/p>\n<pre class=\"nums:false lang:python decode:true\">from langchain_openai import OpenAIEmbeddings\r\nembeddings_model = OpenAIEmbeddings(model=\"text-embedding-3-small\")\r\ntext = f\"Geo Info: {row['geo_info']}\"\r\n\r\nembedding = embeddings_model.embed_query(text)<\/pre>\n<p><span style=\"font-weight: 400;\">To create the embedding, I chose to use textual embeddings as a starting point, i.e., \"comparing\" a text representation with a text representation. The embedding itself contains about 1,500 numbers (and that is the smallest model).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The code itself is not extremely complex, but it can be time-consuming. 
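<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As an aside, the similarity metrics mentioned earlier are easy to state in code. A minimal sketch on toy 2-dimensional vectors (instead of the ~1,500-dimensional embeddings), just to illustrate what the vector comparison computes:<\/span><\/p>\n<pre class=\"nums:false lang:python decode:true\">import math\r\n\r\ndef dot(u, v):\r\n    # Dot product: sum of element-wise products\r\n    return sum(a * b for a, b in zip(u, v))\r\n\r\ndef cosine(u, v):\r\n    # Cosine similarity: dot product normalized by the vector lengths\r\n    return dot(u, v) \/ (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))\r\n\r\ndef l2_norm(u, v):\r\n    # Euclidean distance between the two vectors\r\n    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))<\/pre>\n<p><span style=\"font-weight: 400;\">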
Creating embeddings for 5,000 events took approximately one hour on my 16 GB MacBook Pro M1.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The full code, using pandas 2, can be found in <\/span><a href=\"https:\/\/github.com\/sshahar1\/TikalMeetup2024Ingestion\"><span style=\"font-weight: 400;\">this<\/span><\/a><span style=\"font-weight: 400;\"> repository.<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">Couchbase collection and search index<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">To be able to search for similarity between the question and the results we prepared from an external API, we will:<\/span><\/p>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Create a Couchbase collection<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Upload the prepared data into the Couchbase collection, <\/span><b>including the embeddings<\/b><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Create a search index on the embedding fields, choosing the vector similarity algorithm used to compare vectors<\/span><\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<h4><span style=\"font-weight: 400;\">New Couchbase collection<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">For my application, I chose to use Couchbase's hosted service - Capella, which is very easy to set up. 
I signed up, chose the cloud service, and created a new project.<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-16354\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image2-4-1024x118.png\" alt=\"\" width=\"900\" height=\"104\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image2-4-1024x118.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image2-4-300x35.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image2-4-768x89.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image2-4-1536x177.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image2-4-1320x153.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image2-4.png 1999w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">Clicking into my project and navigating to the Data Tools tab, I can now create a new collection for the data I prepared:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-16353\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image1-4-893x1024.png\" alt=\"\" width=\"893\" height=\"1024\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image1-4-893x1024.png 893w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image1-4-262x300.png 262w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image1-4-768x880.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image1-4-300x344.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image1-4.png 1284w\" sizes=\"auto, (max-width: 893px) 100vw, 
893px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">Para carregar os dados que preparei, h\u00e1 v\u00e1rias op\u00e7\u00f5es: como o tamanho do arquivo era bastante grande, optei por usar o <\/span><a href=\"https:\/\/docs.couchbase.com\/server\/current\/tools\/cbimport.html\"><span style=\"font-weight: 400;\">cbimport<\/span><\/a><span style=\"font-weight: 400;\"> utilit\u00e1rio para fazer isso.<\/span><\/p>\n<pre class=\"nums:false wrap:true lang:default decode:true\">.\/cbimport json --cluster couchbases:\/\/ --username  --password  --bucket  --scope-collection-exp \".\" --dataset for_collection.json --generate-key '%id%' --cacert  --format lines<\/pre>\n<p><span style=\"font-weight: 400;\">Observe que escolhi a op\u00e7\u00e3o <em>ID<\/em> dos documentos JSON para ser o documento <em>chave<\/em> na cole\u00e7\u00e3o.<\/span><\/p>\n<p><b>Lembre-se<\/b><span style=\"font-weight: 400;\"> que, antes de fazer isso, voc\u00ea precisa:<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Criar usu\u00e1rio\/senha de acesso ao banco de dados com privil\u00e9gio de grava\u00e7\u00e3o, no m\u00ednimo<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Abra o cluster para chamadas de seu host<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Fa\u00e7a o download de um certificado para o cluster<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">O esquema de documento inferido mostra que o <\/span><b>incorpora\u00e7\u00e3o<\/b><span style=\"font-weight: 400;\"> foi criado com o tipo de matriz de n\u00fameros:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-16355\" style=\"border: 1px solid black;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image3-2.png\" alt=\"\" 
width=\"648\" height=\"296\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image3-2.png 648w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image3-2-300x137.png 300w\" sizes=\"auto, (max-width: 648px) 100vw, 648px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">Para permitir a pesquisa de similaridade de vetores, vamos criar o \u00edndice de pesquisa navegando at\u00e9 a guia Search (Pesquisa).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Obviamente, devemos selecionar o campo de incorpora\u00e7\u00e3o para o \u00edndice de pesquisa, mas observe que h\u00e1 mais par\u00e2metros a serem definidos:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-16357\" style=\"border: 1px solid black;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image5-1-1024x311.png\" alt=\"\" width=\"900\" height=\"273\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image5-1-1024x311.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image5-1-300x91.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image5-1-768x233.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image5-1-1536x467.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image5-1-1320x401.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image5-1.png 1842w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">J\u00e1 discutimos qual \u00e9 a m\u00e9trica de similaridade, basta observar que o Couchbase suporta l2_norm (ou seja, dist\u00e2ncia euclidiana) e produto escalar.<\/span><b>produto de pontos\"<\/b><span style=\"font-weight: 400;\">o que pode ser mais vantajoso para meu sistema de 
recommendation system.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The next step is to choose additional fields from the documents that will be returned whenever a vector is found to be similar to the question:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-16358\" style=\"border: 1px solid black;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image6-1-1024x372.png\" alt=\"\" width=\"900\" height=\"327\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image6-1-1024x372.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image6-1-300x109.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image6-1-768x279.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image6-1-1536x558.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image6-1-1320x479.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image6-1.png 1999w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">If you don't add at least one field, your application will fail because no data will be returned.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here it is, the selection of the index fields:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-16359\" style=\"border: 1px solid black;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image7-1-1024x517.png\" alt=\"\" width=\"900\" height=\"454\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image7-1-1024x517.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image7-1-300x152.png 300w, 
https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image7-1-768x388.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image7-1-1536x776.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image7-1-1320x667.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image7-1.png 1999w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">We've reached a crucial point in our project: we can now start running the similarity search on the data we prepared. But you may not get a working similarity search on the first try, so here are a few tips for getting results from your similarity search, or for checking why you are not getting results:<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Make sure your embedding technique is identical when creating the data and when preparing a search<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Start with a simple, predictable format for the information you want to compare. 
For example, , ,<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Make sure no extra information gets accidentally appended to the data you are creating embeddings for (for example, I had line breaks)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Make sure that an exact-match search works:<\/span>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">Search for the exact data you created embeddings for<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">Compare the embedding vectors to make sure identical embeddings are created in the generation and search parts (debugging will be useful here). If there is any difference, go back to steps 1-3<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Once you have a working similarity search, gradually add more fields, change formats, embeddings, and anything else you think is missing.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Remember that any change to the embeddings means:<\/span><\/p>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Recreating the embeddings<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Loading the changed data into a truncated collection<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Changing the search index, if needed<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 
\u00c9 necess\u00e1rio">
400;\">Changing the code, if needed<\/span><\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">These steps can be time-consuming, especially creating the embeddings, so you may want to start with:<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A small portion of your documents<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A small\/fast embedding technique<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3><span style=\"font-weight: 400;\">LLM and RAG application<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">What our application needs to do is:<\/span><\/p>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Ask Couchbase to find results similar to the user's question<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Add the results to the context of the LLM question<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Ask the LLM a question<\/span><\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">To keep it simple, I created this code in Python as a Jupyter notebook, which you can find in this <\/span><a href=\"https:\/\/github.com\/sshahar1\/TikalMeetup2024\"><span style=\"font-weight: 400;\">repository<\/span><\/a><span style=\"font-weight: 400;\">. 
I used the following libraries to do it:<\/span><\/p>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><a href=\"https:\/\/pypi.org\/project\/couchbase\/\"><span style=\"font-weight: 400;\">Couchbase<\/span><\/a><span style=\"font-weight: 400;\">: connect and authenticate to my Capella cluster<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><a href=\"https:\/\/www.langchain.com\/langchain\"><span style=\"font-weight: 400;\">LangChain<\/span><\/a><span style=\"font-weight: 400;\">: a framework for developing applications powered by large language models (LLMs), used for:<\/span>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">Embeddings<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">Using Couchbase as a vector store<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"2\"><span style=\"font-weight: 400;\">\"Chatting\" with OpenAI<\/span><\/li>\n<\/ol>\n<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><a href=\"https:\/\/www.langchain.com\/langgraph\"><span style=\"font-weight: 400;\">LangGraph<\/span><\/a><span style=\"font-weight: 400;\">: a framework for building stateful, multi-actor LLM applications, used to create the LLM application flow<\/span><\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">If you have been reading about, or even trying to build, your own LLM application, you are probably familiar with LangChain, a set of libraries that lets you write, build, deploy, and monitor an application. 
It has many agents and extensions that let you integrate different parts into your code, such as a third-party API, a database, a web search, and more.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Lately I also became aware of LangGraph, from the house of LangChain, which lets you, as a developer, build more complex LLM application topologies with conditions, loops (the graph doesn't have to be a DAG!), user interaction, and perhaps the most sought-after feature: keeping state.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Before we look at the code, let's look at the environment file (.env) to see which credentials and other sensitive data are needed:<\/span><\/p>\n<pre class=\"nums:false lang:default decode:true\">LANGSMITH_KEY=langsmithkey\r\nOPENAI_API_KEY=openaikey\r\nLANGCHAIN_PROJECT=myproject\r\nCOUCHBASE_CONNECTION_STRING=couchbase:\/\/mycluster.com\r\nCOUCHBASE_USER=myuser\r\nCOUCHBASE_PASS=mypass\r\nCOUCHBASE_BUCKET=mybucket\r\nCOUCHBASE_SCOPE=myscope\r\nCOUCHBASE_COLLECTION=mycollection\r\nCOUCHBASE_SEARCH_INDEX=mysearchindex\r\nLANGCHAIN_API_KEY=langchainapikey<\/pre>\n<p><span style=\"font-weight: 400;\">The state of each node in the graph is:<\/span><\/p>\n<pre class=\"nums:false lang:default decode:true\">from langgraph.graph import add_messages, StateGraph\r\nfrom typing_extensions import TypedDict\r\nfrom typing import Annotated\r\nfrom langgraph.checkpoint.sqlite import SqliteSaver\r\n\r\nclass State(TypedDict):\r\n  # Messages have the type \"list\". 
The `add_messages`\r\n  # function in the annotation defines how this state key should be updated\r\n  # (in this case, it appends messages to the list rather than overwriting them)\r\n  messages: Annotated[list, add_messages]\r\n  event_type: str\r\n  location: str\r\n  labels: str\r\n  \r\ngraph_builder = StateGraph(State)<\/pre>\n<p><span style=\"font-weight: 400;\">It's important to note that, unless you define a reducer, the state will be overwritten between each node of the graph. The messages member of the State class has a reducer that appends new messages to the list.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To connect to Couchbase and use it as a vector store for the LLM application, we authenticate to the cluster and pass the cluster connection to LangChain's vector store object:<\/span><\/p>\n<pre class=\"nums:false lang:default decode:true\">from langchain_openai import OpenAIEmbeddings\r\nimport os\r\nfrom couchbase.cluster import Cluster\r\nfrom couchbase.options import ClusterOptions\r\nfrom couchbase.auth import PasswordAuthenticator\r\nfrom langchain_couchbase import CouchbaseVectorStore\r\n\r\nCOUCHBASE_CONNECTION_STRING = os.environ[\"COUCHBASE_CONNECTION_STRING\"]\r\nCOUCH_USER = os.environ[\"COUCHBASE_USER\"]\r\nCOUCH_PASS = os.environ[\"COUCHBASE_PASS\"]\r\nBUCKET_NAME = os.environ[\"COUCHBASE_BUCKET\"]\r\nSCOPE_NAME = os.environ[\"COUCHBASE_SCOPE\"]\r\nCOLLECTION_NAME = os.environ[\"COUCHBASE_COLLECTION\"]\r\nSEARCH_INDEX_NAME = os.environ[\"COUCHBASE_SEARCH_INDEX\"]\r\n\r\nauth = PasswordAuthenticator(COUCH_USER, COUCH_PASS)\r\noptions = ClusterOptions(auth)\r\ncluster = Cluster(COUCHBASE_CONNECTION_STRING, options)\r\nembedding = OpenAIEmbeddings(model=\"text-embedding-3-small\")\r\n\r\nvector_store = CouchbaseVectorStore(\r\n  cluster=cluster,\r\n  bucket_name=BUCKET_NAME,\r\n  scope_name=SCOPE_NAME,\r\n  
collection_name=COLLECTION_NAME,\r\n  embedding=embedding,\r\n  index_name=SEARCH_INDEX_NAME,\r\n)<\/pre>\n<p><span style=\"font-weight: 400;\">H\u00e1 dois detalhes importantes que devem ser levados em conta:<\/span><\/p>\n<ol>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A incorpora\u00e7\u00e3o no aplicativo <\/span><b>deve<\/b><span style=\"font-weight: 400;\"> ser id\u00eantico ao usado na parte de ingest\u00e3o<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">O nome padr\u00e3o do campo de incorpora\u00e7\u00e3o \u00e9 \"embedding\"; se o nome do respectivo campo for diferente em seu \u00edndice de pesquisa, voc\u00ea precisar\u00e1 defini-lo durante a instancia\u00e7\u00e3o do CouchbaseVectorStore (embedding_key)<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">Neste momento, voc\u00ea est\u00e1 pronto para escrever seu aplicativo LangGraph e usar o Couchbase como armazenamento de vetores. Vamos montar tudo: cada gr\u00e1fico precisa de n\u00f3s, ponto inicial e bordas direcionadas.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Nosso gr\u00e1fico buscar\u00e1 dados do armazenamento de vetores e continuar\u00e1 a adicionar essas informa\u00e7\u00f5es ao contexto do prompt do LLM.<\/span><\/p>\n<pre class=\"nums:false lang:default decode:true\">from langchain_core.prompts import ChatPromptTemplate\r\nfrom langchain_core.output_parsers import StrOutputParser\r\nfrom langchain_openai import ChatOpenAI\r\n\r\nllm = ChatOpenAI(model=\"gpt-3.5-turbo\")\r\ntemplate = \"\"\"Voc\u00ea \u00e9 um bot \u00fatil que tem a finalidade de encontrar eventos para artistas que procuram locais nos EUA. Se voc\u00ea n\u00e3o puder responder com base no contexto fornecido, responda com uma\r\n  resposta gen\u00e9rica. 
Responda \u00e0 pergunta da forma mais verdadeira poss\u00edvel usando o contexto abaixo: {context}\r\nFormate tamb\u00e9m o resultado no formato Markdown.\r\n          Pergunta: {question}\"\"\"\"\r\n\r\nprompt = ChatPromptTemplate.from_template(template)\r\n\r\ngeneration_chain = prompt | llm | StrOutputParser()\r\n\r\n\r\ndef chatbot(state: State):\r\n  response = generation_chain.invoke({\"context\": state['messages'], \"question\": f \"We are a {state['event_type']} amateur group looking for {state['labels']} festivals in {state['location']}, can you please recommend some for us to reach to?\"})\r\n  state['messages'].append(response)\r\n  retornar estado\r\n\r\ndef search_couchbase(state: State):\r\n  query = f \"Geo Info: {state['location']}\"\r\n  \r\n  retriever = vector_store.as_retriever()\r\n  resultados = retriever.invoke(query)\r\n  for result in results:\r\n      text = f \"Title: {result.metadata['title']}\/{result.metadata['alternate_titles_flat']} - {result.metadata['description']} from {result.metadata['start']} to {result.metadata['end']}, location {result.metadata['geo_info']}. 
Labels {result.metadata['labels_flat']}, category {result.metadata['category']}\"\r\n      state['messages'].append(text)\r\n  retornar estado\r\n\r\ngraph_builder.add_node(\"vector_search\", search_couchbase)\r\ngraph_builder.add_node(\"chatbot\", chatbot)\r\ngraph_builder.set_entry_point(\"vector_search\")\r\ngraph_builder.add_edge(\"vector_search\", \"chatbot\")\r\ngraph_builder.set_finish_point(\"chatbot\")\r\n\r\nmem\u00f3ria = SqliteSaver.from_conn_string(\":memory:\")\r\ngraph = graph_builder.compile(checkpointer=memory)<\/pre>\n<p><span style=\"font-weight: 400;\">No c\u00f3digo acima, isso se traduz em dois n\u00f3s:<\/span><\/p>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">vector_search (ponto de entrada)<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">chatbot (ponto de chegada)<\/span><\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">Como uma imagem vale mais que mil palavras, usei o c\u00f3digo a seguir para visualizar o gr\u00e1fico para voc\u00ea:<\/span><\/p>\n<pre class=\"nums:false lang:default decode:true\">from IPython.display import Image, display\r\nfrom langchain_core.runnables.graph import CurveStyle, MermaidDrawMethod, NodeStyles\r\n\r\ndisplay(\r\n  Imagem(\r\n      graph.get_graph().draw_mermaid_png(\r\n          draw_method=MermaidDrawMethod.API,\r\n      )\r\n  )\r\n)<\/pre>\n<p><span style=\"font-weight: 400;\">O resultado foi o seguinte desenho:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-16360 size-medium\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image8-186x300.png\" alt=\"\" width=\"186\" height=\"300\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image8-186x300.png 186w, 
https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image8-300x483.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/image8.png 370w\" sizes=\"auto, (max-width: 186px) 100vw, 186px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">Para obter mais op\u00e7\u00f5es de visualiza\u00e7\u00e3o no langGraph, consulte <\/span><a href=\"https:\/\/github.com\/langchain-ai\/langgraph\/blob\/main\/examples\/visualization.ipynb\"><span style=\"font-weight: 400;\">este<\/span><\/a><span style=\"font-weight: 400;\"> Notebook Jupyter da LangGraph.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Perguntar ao vector store significa pesquisar dados com localiza\u00e7\u00e3o semelhante. Voc\u00ea pode notar que o formato da consulta \u00e9 o mesmo do texto incorporado; os resultados s\u00e3o adicionados ao estado para serem usados no pr\u00f3ximo n\u00f3.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">O n\u00f3 do chatbot obt\u00e9m as informa\u00e7\u00f5es das mensagens e as incorpora \u00e0 pergunta do prompt para o LLM.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Observe que o estado \u00e9 mantido no banco de dados sqlite na mem\u00f3ria. 
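The idea behind a per-thread checkpointer can be illustrated without LangGraph at all. The sketch below is a toy, stdlib-only stand-in (not LangGraph's actual SqliteSaver implementation, and the class name ToyCheckpointer is invented for illustration) showing how the latest state snapshot for each thread_id can be stored in an in-memory sqlite database:

```python
import json
import sqlite3

class ToyCheckpointer:
    """Toy stand-in for a graph checkpointer: keeps the latest
    state snapshot per thread_id in an in-memory sqlite database."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE checkpoints (thread_id TEXT PRIMARY KEY, state TEXT)"
        )

    def save(self, thread_id, state):
        # Serialize the state dict and upsert it for this conversation thread
        self.conn.execute(
            "INSERT INTO checkpoints (thread_id, state) VALUES (?, ?) "
            "ON CONFLICT(thread_id) DO UPDATE SET state = excluded.state",
            (thread_id, json.dumps(state)),
        )

    def load(self, thread_id):
        row = self.conn.execute(
            "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
        ).fetchone()
        return json.loads(row[0]) if row else None

cp = ToyCheckpointer()
cp.save("42", {"messages": ["hello"], "location": "kansas"})
cp.save("42", {"messages": ["hello", "world"], "location": "kansas"})
print(cp.load("42")["messages"])  # the latest snapshot wins
```

Because the database lives in memory, the state survives only for the lifetime of the process, which matches the `:memory:` connection string used above; pointing the connection string at a file would make the checkpoints durable across runs.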
To use the graph, feel free to use the following example:<\/span><\/p>\n<pre class=\"nums:false lang:default decode:true\">from random import randint\r\nfrom IPython.core.display import Markdown\r\n\r\nsession_id = randint(1, 10000)\r\nconfig = {\"configurable\": {\"thread_id\": session_id}}\r\n\r\ninput_location = \"kansas\"\r\ninput_category = \"jaz\"\r\ninput_labels = \"grange\"\r\n\r\n# Stream the graph, each output will be printed when it is ready\r\nfor event in graph.stream({\"event_type\": input_category, \"location\": input_location, \"labels\": input_labels}, config):\r\n  for value in event.values():\r\n      if len(value['messages']) &gt; 0:\r\n          display(Markdown(value['messages'][-1]))<\/pre>\n<p><span style=\"font-weight: 400;\">And there you have it: you have built an LLM application that recommends cultural events for amateur groups to request invitations to.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Summary<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Getting started with LLM applications is, in my humble opinion, a fun, exciting and achievable ramp-up thanks to how quick it is; making our application better and more robust, however, hides more challenges.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In this article I focused on the challenge of augmenting our model's knowledge with external data via the RAG technique, and on how you can leverage Couchbase to do so.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It is important to remember that creating the embeddings that the LLM application will find in vector search may not work on your first attempt. 
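One reason a first attempt often misses is inconsistent formatting between the ingested text and the query. The toy below uses bag-of-words cosine similarity (a stand-in for real embeddings, invented purely for illustration) to show why a query framed the same way as the ingested documents tends to score higher:

```python
import math
from collections import Counter

def bow_vector(text):
    # Toy "embedding": lowercase bag-of-words counts (not a real embedding model)
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Documents were ingested with the "Geo Info: <location>" framing
docs = ["Geo Info: kansas city, usa", "Geo Info: berlin, germany"]
doc_vecs = [bow_vector(d) for d in docs]

# Query using the same framing as the ingested text...
matched = cosine(bow_vector("Geo Info: kansas"), doc_vecs[0])
# ...versus a query phrased differently
unmatched = cosine(bow_vector("events near kansas please"), doc_vecs[0])

assert matched > unmatched  # consistent formatting scores higher
print(f"same format: {matched:.2f}, different format: {unmatched:.2f}")
```

Real embedding models are far more forgiving than word overlap, but the same intuition applies: keeping the query template aligned with the ingestion template, as the search_couchbase node does with its "Geo Info:" prefix, is a cheap way to improve recall.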
Check the formatting, try starting with simple embeddings, and use debugging as much as possible.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">I also demonstrated the capabilities of LangChain's LangGraph, which lets you build complex decisions and flows into the LLM application.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Enjoy your journey with LLM applications.<\/span><\/p>\n<p><br style=\"font-weight: 400;\" \/><br style=\"font-weight: 400;\" \/><\/p>","protected":false},"excerpt":{"rendered":"<p>GenAI technologies are definitely a trending item in 2023 and 2024, and because I work for\u00a0 Tikal, which publishes its own annual technology radar and trends report, LLM and genAI did not escape my attention. As a developer myself, I [&hellip;]<\/p>","protected":false},"author":85526,"featured_media":16398,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[10122,1815,2225,9973,9139,9937],"tags":[9963,9870],"ppma_author":[10032],"class_list":["post-16352","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","category-best-practices-and-tutorials","category-cloud","category-generative-ai-genai","category-python","category-vector-search","tag-langchain","tag-llms"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>From Concept to Code: LLM + RAG with Couchbase<\/title>\n<meta name=\"description\" content=\"Learn how to build a generative AI recommendation engine using LLM, RAG, and Couchbase integration. 
Step-by-step guide for developers.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/pt\/ai-powered-recommendation-engine-llm-rag\/\" \/>\n<meta property=\"og:locale\" content=\"pt_BR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"From Concept to Code: LLM + RAG with Couchbase\" \/>\n<meta property=\"og:description\" content=\"Learn how to build a generative AI recommendation engine using LLM, RAG, and Couchbase integration. Step-by-step guide for developers.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/pt\/ai-powered-recommendation-engine-llm-rag\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2024-09-23T21:08:57+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-13T23:36:40+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/From-Concept-to-Code-LLM-RAG-with-Couchbase_V2-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1340\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sigal Shaharabani - Technical Leader, Tikal\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sigal Shaharabani - Technical Leader, Tikal\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"12 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/\"},\"author\":{\"name\":\"Sigal Shaharabani - Technical Leader, Tikal\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/eaa4f519f1bdafc253d366c93f115114\"},\"headline\":\"From Concept to Code: LLM + RAG with Couchbase\",\"datePublished\":\"2024-09-23T21:08:57+00:00\",\"dateModified\":\"2025-06-13T23:36:40+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/\"},\"wordCount\":2271,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/From-Concept-to-Code-LLM-RAG-with-Couchbase_V2-scaled.jpg\",\"keywords\":[\"langchain\",\"LLMs\"],\"articleSection\":[\"Artificial Intelligence (AI)\",\"Best Practices and Tutorials\",\"Couchbase Capella\",\"Generative AI (GenAI)\",\"Python\",\"Vector Search\"],\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/\",\"name\":\"From Concept to Code: LLM + RAG with 
Couchbase\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/From-Concept-to-Code-LLM-RAG-with-Couchbase_V2-scaled.jpg\",\"datePublished\":\"2024-09-23T21:08:57+00:00\",\"dateModified\":\"2025-06-13T23:36:40+00:00\",\"description\":\"Learn how to build a generative AI recommendation engine using LLM, RAG, and Couchbase integration. Step-by-step guide for developers.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#breadcrumb\"},\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#primaryimage\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/From-Concept-to-Code-LLM-RAG-with-Couchbase_V2-scaled.jpg\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/From-Concept-to-Code-LLM-RAG-with-Couchbase_V2-scaled.jpg\",\"width\":2560,\"height\":1340},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.couchbase.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"From Concept to Code: LLM + RAG with 
Couchbase\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"name\":\"The Couchbase Blog\",\"description\":\"Couchbase, the NoSQL Database\",\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-BR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\",\"name\":\"The Couchbase Blog\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"width\":218,\"height\":34,\"caption\":\"The Couchbase Blog\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/eaa4f519f1bdafc253d366c93f115114\",\"name\":\"Sigal Shaharabani - Technical Leader, Tikal\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/6bf412a23dbadb2c7664e454e6195c40\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/unnamed.jpg\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/unnamed.jpg\",\"caption\":\"Sigal Shaharabani - Technical Leader, Tikal\"},\"description\":\"I am a Technical Leader and a Group Leader in Tikal, with a great passion for backend and data systems. 
In my spare time I enjoy swimming and Israeli folk dancing.\",\"url\":\"https:\/\/www.couchbase.com\/blog\/pt\/author\/sigalshaharabani\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Do conceito ao c\u00f3digo: LLM + RAG com Couchbase","description":"Saiba como criar um mecanismo de recomenda\u00e7\u00e3o de IA generativa usando LLM, RAG e integra\u00e7\u00e3o com o Couchbase. Guia passo a passo para desenvolvedores.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.couchbase.com\/blog\/pt\/ai-powered-recommendation-engine-llm-rag\/","og_locale":"pt_BR","og_type":"article","og_title":"From Concept to Code: LLM + RAG with Couchbase","og_description":"Learn how to build a generative AI recommendation engine using LLM, RAG, and Couchbase integration. Step-by-step guide for developers.","og_url":"https:\/\/www.couchbase.com\/blog\/pt\/ai-powered-recommendation-engine-llm-rag\/","og_site_name":"The Couchbase Blog","article_published_time":"2024-09-23T21:08:57+00:00","article_modified_time":"2025-06-13T23:36:40+00:00","og_image":[{"width":2560,"height":1340,"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/From-Concept-to-Code-LLM-RAG-with-Couchbase_V2-scaled.jpg","type":"image\/jpeg"}],"author":"Sigal Shaharabani - Technical Leader, Tikal","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Sigal Shaharabani - Technical Leader, Tikal","Est. 
reading time":"12 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#article","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/"},"author":{"name":"Sigal Shaharabani - Technical Leader, Tikal","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/eaa4f519f1bdafc253d366c93f115114"},"headline":"From Concept to Code: LLM + RAG with Couchbase","datePublished":"2024-09-23T21:08:57+00:00","dateModified":"2025-06-13T23:36:40+00:00","mainEntityOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/"},"wordCount":2271,"commentCount":0,"publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/From-Concept-to-Code-LLM-RAG-with-Couchbase_V2-scaled.jpg","keywords":["langchain","LLMs"],"articleSection":["Artificial Intelligence (AI)","Best Practices and Tutorials","Couchbase Capella","Generative AI (GenAI)","Python","Vector Search"],"inLanguage":"pt-BR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/","url":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/","name":"Do conceito ao c\u00f3digo: LLM + RAG com 
Couchbase","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#primaryimage"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/From-Concept-to-Code-LLM-RAG-with-Couchbase_V2-scaled.jpg","datePublished":"2024-09-23T21:08:57+00:00","dateModified":"2025-06-13T23:36:40+00:00","description":"Saiba como criar um mecanismo de recomenda\u00e7\u00e3o de IA generativa usando LLM, RAG e integra\u00e7\u00e3o com o Couchbase. Guia passo a passo para desenvolvedores.","breadcrumb":{"@id":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#breadcrumb"},"inLanguage":"pt-BR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/"]}]},{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#primaryimage","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/From-Concept-to-Code-LLM-RAG-with-Couchbase_V2-scaled.jpg","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/From-Concept-to-Code-LLM-RAG-with-Couchbase_V2-scaled.jpg","width":2560,"height":1340},{"@type":"BreadcrumbList","@id":"https:\/\/www.couchbase.com\/blog\/ai-powered-recommendation-engine-llm-rag\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.couchbase.com\/blog\/"},{"@type":"ListItem","position":2,"name":"From Concept to Code: LLM + RAG with Couchbase"}]},{"@type":"WebSite","@id":"https:\/\/www.couchbase.com\/blog\/#website","url":"https:\/\/www.couchbase.com\/blog\/","name":"Blog do Couchbase","description":"Couchbase, o banco de dados 
NoSQL","publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-BR"},{"@type":"Organization","@id":"https:\/\/www.couchbase.com\/blog\/#organization","name":"Blog do Couchbase","url":"https:\/\/www.couchbase.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","width":218,"height":34,"caption":"The Couchbase Blog"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/eaa4f519f1bdafc253d366c93f115114","name":"Sigal Shaharabani - L\u00edder t\u00e9cnico, Tikal","image":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/6bf412a23dbadb2c7664e454e6195c40","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/unnamed.jpg","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/unnamed.jpg","caption":"Sigal Shaharabani - Technical Leader, Tikal"},"description":"Sou l\u00edder t\u00e9cnico e l\u00edder de grupo na Tikal, com uma grande paix\u00e3o por back-end e sistemas de dados. 
Em meu tempo livre, gosto de nadar e dan\u00e7ar folclore israelense.","url":"https:\/\/www.couchbase.com\/blog\/pt\/author\/sigalshaharabani\/"}]}},"authors":[{"term_id":10032,"user_id":85526,"is_guest":0,"slug":"sigalshaharabani","display_name":"Sigal Shaharabani - Technical Leader, Tikal","avatar_url":{"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/unnamed.jpg","url2x":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/09\/unnamed.jpg"},"author_category":"","last_name":"Shaharabani - Technical Leader, Tikal","first_name":"Sigal","job_title":"","user_url":"","description":"Sou l\u00edder t\u00e9cnico e l\u00edder de grupo na Tikal, com uma grande paix\u00e3o por back-end e sistemas de dados. Em meu tempo livre, gosto de nadar e dan\u00e7ar folclore israelense."}],"_links":{"self":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts\/16352","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/users\/85526"}],"replies":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/comments?post=16352"}],"version-history":[{"count":0,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts\/16352\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/media\/16398"}],"wp:attachment":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/media?parent=16352"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/categories?post=16352"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/tags?post=16352"},{"taxonomy":"author","emb
eddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/ppma_author?post=16352"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}