{"id":15671,"date":"2024-05-07T08:05:04","date_gmt":"2024-05-07T15:05:04","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=15671"},"modified":"2025-06-13T20:27:16","modified_gmt":"2025-06-14T03:27:16","slug":"twitter-thread-tldr-with-ai-part-1","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/pt\/twitter-thread-tldr-with-ai-part-1\/","title":{"rendered":"Twitter Thread tl;dr Com IA? Parte 1"},"content":{"rendered":"<div class=\"mb-8 px-4 text-center font-heading md:mb-14 md:px-5 lg:px-8 xl:px-20\">\n<div class=\"mb-8 px-4 text-center font-heading md:mb-14 md:px-5 lg:px-8 xl:px-20\">\n<h2 class=\"text-2xl leading-snug text-slate-700 dark:text-slate-400 md:text-3xl xl:text-3xl\">Porque quem tem tempo? (tamb\u00e9m parte 1 porque me levou mais longe do que eu esperava \ud83d\ude2c)<\/h2>\n<\/div>\n<p>O Couchbase apresentou recentemente <a href=\"https:\/\/www.couchbase.com\/blog\/pt\/products\/vector-search\/\">suporte para pesquisa vetorial<\/a>. E eu estava procurando uma desculpa para brincar com ele. Acontece que recentemente houve um \u00f3timo t\u00f3pico no Twitter sobre marketing para desenvolvedores. Eu me identifico com a maior parte do que est\u00e1 l\u00e1. \u00c9 um t\u00f3pico fant\u00e1stico. Eu poderia resumi-lo para garantir que meus colegas de equipe possam tirar o melhor proveito dele em pouco tempo. Por exemplo, eu poderia escrever esse resumo manualmente. Ou essa pode ser a desculpa que eu estava procurando.<\/p>\n<p>Vamos pedir a um LLM (Large Language Model) que resuma essa brilhante discuss\u00e3o para mim e para o benef\u00edcio de outras pessoas. 
In theory, things should go as follows:<\/p>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li>Get the tweets<\/li>\n<li>Turn them into vectors thanks to an LLM<\/li>\n<li>Store the tweets and vectors in Couchbase<\/li>\n<li>Create an index to query them<\/li>\n<li>Ask the LLM something<\/li>\n<li>Turn that into a vector<\/li>\n<li>Run a vector search to get some context for the LLM<\/li>\n<li>Build the LLM prompt from the question and the context<\/li>\n<li>Get a fantastic answer<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<p>This is basically a RAG workflow. RAG stands for Retrieval Augmented Generation. It lets developers build more accurate, more robust LLM-based applications by providing context.<\/p>\n<h2 id=\"heading-extracting-twitter-data\">Extracting Twitter Data<\/h2>\n<p>The first thing to do is to get data out of Twitter. This is actually the hardest part if you do not subscribe to their API. But with some good old scraping you can still get something decent. Probably not 100% accurate, but decent. So let's get to it.<\/p>\n<p>Firing up my favorite IDE, with the <a href=\"https:\/\/plugins.jetbrains.com\/plugin\/22131-couchbase\" target=\"_blank\" rel=\"noopener\">Couchbase plugin<\/a> installed, I create a new Python script and start playing with <a href=\"https:\/\/github.com\/d60\/twikit\" target=\"_blank\" rel=\"noopener\">twikit<\/a>, a Twitter scraping library. Everything works fine until I quickly get an HTTP 429 error. <em>Too many requests<\/em>. I have been pushing too hard. I got caught. 
A few things can mitigate this.<\/p>\n<\/div>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li>First, make sure you store the authentication cookie in a file and reuse it, instead of frantically logging in again like I did.<\/li>\n<li>Second, switch to an online IDE, as it makes it easier to change your IP.<\/li>\n<li>Third, introduce wait times and make them random. I am not sure the random part helps, but why not, it's easy.<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<p>The final script looks like this:<\/p>\n<pre class=\"lang:python decode:true\" title=\"twikit script to record a tweet and its replies as JSON\">from twikit import Client\r\nfrom random import randint\r\nimport json\r\nimport time\r\n\r\ndef get_json_tweet(t, parentid):\r\n    return {\r\n        'created_at': t.created_at,\r\n        'id': t.id,\r\n        'parent' : parentid,\r\n        'full_text': t.full_text,\r\n        'text': t.text,\r\n        'lang': t.lang,\r\n        'in_reply_to': t.in_reply_to,\r\n        'quote_count': t.quote_count,\r\n        'reply_count': t.reply_count,\r\n        'favorite_count': t.favorite_count,\r\n        'view_count': t.view_count,\r\n        'hashtags': t.hashtags,\r\n        'user' : {\r\n            'id' : t.user.id,\r\n            'name' : t.user.name,\r\n            'screen_name' : t.user.screen_name,\r\n            'url' : t.user.url,\r\n        },\r\n    }\r\n\r\ndef get_replies(id, total_replies, recordTweetid):\r\n    tweet = client.get_tweet_by_id(id)\r\n    if tweet.reply_count == 0:\r\n        return\r\n\r\n    # Get all the replies\r\n    all_replies = []\r\n    tweets = tweet.replies\r\n    all_replies += tweets\r\n\r\n    while len(tweets) != 0:\r\n        try:\r\n            time.sleep(randint(10,20))\r\n            tweets = tweets.next()\r\n            all_replies += tweets\r\n        except IndexError:\r\n            print(\"Array Index error\")\r\n            break\r\n\r\n    print(len(all_replies))\r\n    print(all_replies)\r\n    for t in all_replies:\r\n        jsonTweet = get_json_tweet(t, id)\r\n        if (not t.id in recordTweetid) and ( t.in_reply_to == id):\r\n            time.sleep(randint(10,20))\r\n            get_replies(t.id, total_replies, recordTweetid)\r\n        f.write(',\\n')\r\n        json.dump(jsonTweet, f, ensure_ascii=False, indent=4)\r\n\r\n\r\n\r\nclient = Client('en-US')\r\n\r\n## You can comment out this `login` part after the first run of the script (once you have the `cookies.json` file)\r\nclient.login(\r\n    auth_info_1='username',\r\n    password='secret',\r\n)\r\n\r\nclient.save_cookies('cookies.json')\r\n# client.load_cookies(path='cookies.json')\r\n\r\nreplies = []\r\nrecordTweetid = []\r\nwith open('data2.json', 'a', encoding='utf-8') as f:\r\n    get_replies('1775913633064894669', replies, recordTweetid)\r\n<\/pre>\n<p>It was a bit painful to avoid the 429s, and it took several iterations, but in the end I got something that mostly works. I only had to add the opening and closing brackets to turn the output into a valid JSON array:<\/p>\n<pre class=\"lang:js decode:true\" title=\"Tweet as JSON data\">[\r\n    {\r\n         \"created_at\": \"Thu Apr 04 16:15:02 +0000 2024\",\r\n         \"id\": \"1775920020377502191\",\r\n         \"full_text\": null,\r\n         \"text\": \"@kelseyhightower SOCKS! I will throw millions of dollars at the first company to offer me socks!\\n\\nImportant to note here: I don\u2019t have millions of dollars! 
\\n\\nI think I might have a problem.\",\r\n         \"lang\": \"en\",\r\n         \"in_reply_to\": \"1775913633064894669\",\r\n         \"quote_count\": 1,\r\n         \"reply_count\": 3,\r\n         \"favorite_count\": 23,\r\n         \"view_count\": \"4658\",\r\n         \"hashtags\": [],\r\n         \"user\": {\r\n             \"id\": \"4324751\",\r\n             \"name\": \"Josh Long\",\r\n             \"screen_name\": \"starbuxman\",\r\n             \"url\": \"https:\/\/t.co\/PrSomoWx53\"\r\n         }\r\n    },\r\n...\r\n]\r\n<\/pre>\n<p>Josh is obviously right: socks are at the center of what we do in developer marketing, along with irony.<\/p>\n<p>Now I have a file containing an array of JSON documents, all full of developer marketing tips. What's next?<\/p>\n<h2 id=\"heading-turning-tweets-in-vectors\">Turning Tweets Into Vectors<\/h2>\n<p>To make sure it can be used by an LLM as additional context, the text needs to be transformed into a vector, or <em>embedding<\/em>: basically an array of floating-point values. This is what will enable RAG, Retrieval Augmented Generation. It is not universal; every LLM has its own representation of an object (like text, audio or video data). Being extremely lazy and not on top of everything happening in this space, I picked <a href=\"https:\/\/openai.com\">OpenAI\/ChatGPT<\/a>. It feels like there are more models popping up every week than we had JavaScript frameworks in 2017.<\/p>\n<p>Anyway, I created my OpenAI account, created an API key, and added a couple of dollars because apparently you cannot use their API without doing so, even the free stuff. Then I was ready to turn tweets into vectors. 
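Before calling an API, it helps to see what "similar vectors" means. A toy illustration of the idea in plain Python (made-up 3-dimensional vectors, not real text-embedding-ada-002 output, which has 1,536 dimensions): cosine similarity scores two embeddings by the angle between them.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Dot product divided by the product of magnitudes: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; real ada-002 vectors are 1536-dimensional.
socks_tweet = [0.9, 0.1, 0.2]
socks_query = [0.8, 0.2, 0.1]
unrelated   = [-0.1, 0.9, -0.4]

print(cosine_similarity(socks_tweet, socks_query))  # close to 1.0
print(cosine_similarity(socks_tweet, unrelated))    # much lower
```

The Search service does this kind of comparison at scale against the indexed embeddings; the score it returns is not this raw cosine value, but the intuition is the same: closer vectors mean more semantically similar text.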
The shortest path to getting an embedding through their API is curl. It looks like this:<\/p>\n<pre class=\"lang:sh decode:true\" title=\"Use curl to create an embedding\">curl https:\/\/api.openai.com\/v1\/embeddings -H \"Authorization: Bearer $OPENAI_API_KEY\" \\\r\n -H \"Content-Type: application\/json\" \\\r\n   -d '{\"input\": \"SOCKS! I will throw millions of dollars at the first company to offer me socks!\\n\\nImportant to note here: I don\u2019t have millions of dollars! \\n\\nI think I might have a problem.\", \"model\": \"text-embedding-ada-002\"}'\r\n{\r\n  \"object\": \"list\",\r\n  \"data\": [\r\n{\r\n   \"object\": \"embedding\",\r\n   \"index\": 0,\r\n   \"embedding\": [\r\n     -0.008340064,\r\n     -0.03142008,\r\n     0.01558878,\r\n    ...\r\n    0.0007338819,\r\n     -0.01672055\r\n   ]\r\n}\r\n  ],\r\n  \"model\": \"text-embedding-ada-002\",\r\n  \"usage\": {\r\n\"prompt_tokens\": 40,\r\n\"total_tokens\": 40\r\n  }\r\n}\r\n<\/pre>\n<p>Here you can see the JSON body has an <em>input<\/em> field, which will be turned into a vector, and a <em>model<\/em> field referencing the model used to do so. The output gives the vector, the model used, and the API usage statistics.<\/p>\n<p>Fantastic, now what? Turning this data into vectors is not free. It had better be stored in a database to be reused later. As a bonus, you can easily get some nice extra features, like hybrid search.<\/p>\n<p>There are a couple of ways to go about this. There is the tedious manual way, which is great for learning. And there is using libraries and tools that make your life easier. 
I actually went straight to using <a href=\"https:\/\/www.langchain.com\">Langchain<\/a>, figuring it would make my life easier, and it did, until I got a little lost. So, for our collective learning benefit, let's start with the manual way. I have an array of JSON documents; I need to vectorize their content, store it in Couchbase, and then I will be able to query them with another vector.<\/p>\n<h2 id=\"heading-loading-the-tweets-in-a-vector-store-like-couchbase\">Loading the Tweets in a Vector Store Like Couchbase<\/h2>\n<p>I will use Python because I feel like I need to get better at it, although we could also look at the Langchain implementation in Java or JavaScript. The first thing I want to cover is how to connect to Couchbase:<\/p>\n<pre class=\"lang:python decode:true\" title=\"Connect to Couchbase with Python\">import os\r\n\r\ndef connect_to_couchbase(connection_string, db_username, db_password):\r\n    \"\"\"Connect to couchbase\"\"\"\r\n    from couchbase.cluster import Cluster\r\n    from couchbase.auth import PasswordAuthenticator\r\n    from couchbase.options import ClusterOptions\r\n    from datetime import timedelta\r\n\r\n    auth = PasswordAuthenticator(db_username, db_password)\r\n    options = ClusterOptions(auth)\r\n    connect_string = connection_string\r\n    cluster = Cluster(connect_string, options)\r\n    # Wait until the cluster is ready for use.\r\n    cluster.wait_until_ready(timedelta(seconds=5))\r\n    return cluster\r\n\r\nif __name__ == \"__main__\":\r\n    # Load environment variables\r\n    DB_CONN_STR = os.getenv(\"DB_CONN_STR\")\r\n    DB_USERNAME = os.getenv(\"DB_USERNAME\")\r\n    DB_PASSWORD = os.getenv(\"DB_PASSWORD\")\r\n    DB_BUCKET = os.getenv(\"DB_BUCKET\")\r\n    DB_SCOPE = os.getenv(\"DB_SCOPE\")\r\n    DB_COLLECTION = os.getenv(\"DB_COLLECTION\")\r\n    # Connect to the Couchbase Vector Store\r\n    cluster = connect_to_couchbase(DB_CONN_STR, DB_USERNAME, DB_PASSWORD)\r\n    bucket = cluster.bucket(DB_BUCKET)\r\n    scope = bucket.scope(DB_SCOPE)\r\n    collection = scope.collection(DB_COLLECTION)\r\n<\/pre>\n<p>In this code you can see the <em>connect_to_couchbase<\/em> method, which accepts a <em>connection string<\/em>, a <em>username<\/em> and a <em>password<\/em>. They are all provided through environment variables loaded at the beginning. Once we have the cluster object, we can get the associated bucket, scope and collection. If you are not familiar with Couchbase, collections are similar to an RDBMS table. A bucket can hold many scopes, and a scope many collections. This granularity is useful for a number of reasons (multi-tenancy, faster sync, backup, etc.).<\/p>\n<p>One more thing before getting to the collection. We need some code to turn text into vectors. Using the OpenAI client, it looks like this:<\/p>\n<pre class=\"lang:python decode:true\" title=\"OpenAI setup\">from openai import OpenAI\r\n\r\nclient = OpenAI()\r\n\r\ndef get_embedding(text, model=\"text-embedding-ada-002\"):\r\n    text = text.replace(\"\\n\", \" \")\r\n    return client.embeddings.create(input = [text], model=model).data[0].embedding<\/pre>\n<p>This does a similar job to the earlier curl call. 
Just make sure you have the <strong>OPENAI_API_KEY<\/strong> environment variable set for the client to work.<\/p>\n<p>Now let's see how to create a Couchbase document from a JSON tweet, with the generated embedding.<\/p>\n<pre class=\"lang:python decode:true\" title=\"Parse the JSON file containing the tweets and insert them into Couchbase\">    # Open the JSON file and load the tweets as a JSON array in data\r\n    with open('data.json') as f:\r\n        data = json.load(f)\r\n\r\n    # Loop through it to build the document from the JSON\r\n    for tweet in data:\r\n        text = tweet['text']\r\n        full_text = tweet['full_text']\r\n        id = tweet['id']\r\n        if full_text is not None:\r\n            embedding = get_embedding(full_text)\r\n            textToEmbed = full_text\r\n        else:\r\n            embedding = get_embedding(text)\r\n            textToEmbed = text\r\n        document = {\r\n            \"metadata\": tweet,\r\n            \"text\": textToEmbed,\r\n            \"embedding\": embedding\r\n        }\r\n        collection.upsert(key = id, value = document)\r\n<\/pre>\n<p>The document has three fields: <em>metadata<\/em> contains the whole tweet, <em>text<\/em> is the text turned into a string, and <em>embedding<\/em> is the embedding generated with OpenAI. The key will be the tweet's ID. 
And <em>upsert<\/em> is used to update the document, or insert it if it does not exist.<\/p>\n<p>If I run this and connect to my Couchbase server, I can see documents being created.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/lh7-us.googleusercontent.com\/6RRlzFaKvR5PSLLWhqjRwkpqt7EycrDiyGJkuOLH6WDJfjqoh8gnYK2k-EhamIfEQAhWdhczMByFR0Qfsr_eRjp0YvFlgjfZHs_7wrokA49GzD4meuExljzUiU0biIsyUtYyGvdMH-f4mB_b-aktxqg\" alt=\"A Screenshot of the Couchbase Capella UI showing the list of created Documents\" \/><\/p>\n<p id=\"heading-at-that-point-i-have-extracted-data-from-twitter-uploaded-it-into-couchbase-as-one-tweet-per-document-with-the-openai-embedding-generated-and-inserted-for-each-tweet-i-am-ready-to-ask-questions-query-similar-documents\">At that point, I have extracted data from Twitter and uploaded it into Couchbase as one tweet per document, with the OpenAI embedding generated and inserted for each tweet. I am ready to ask questions and query similar documents.<\/p>\n<h2 id=\"heading-run-vector-search-on-tweets\">Run Vector Search on Tweets<\/h2>\n<p>And now it is time to talk about Vector Search. How do we search for tweets similar to a given text? The first thing to do is turn that text into a vector, or embedding. So let's ask the question:<\/p>\n<pre class=\"lang:python decode:true\" title=\"Create the query embedding\">    query = \"Should we spend millions of dollars buying SOCKS for developer marketing?\"\r\n    queryEmbedding = get_embedding(query)\r\n<\/pre>\n<p>That's it. <em>queryEmbedding<\/em> contains a vector representing the query. 
On to the query:<\/p>\n<pre class=\"lang:python decode:true\" title=\"Run a VectorSearch request\">INDEX_NAME = os.getenv(\"INDEX_NAME\") # Full text index name\r\n# This is the vector search query\r\nsearch_req = search.SearchRequest.create(\r\n    VectorSearch.from_vector_query(\r\n        VectorQuery(\r\n            \"embedding\", # Name of the JSON property containing the embedding to compare to\r\n            queryEmbedding, # our query embedding\r\n            5, # maximum number of results\r\n            )\r\n        )\r\n)\r\n# Execute the vector search query on the selected scope\r\nresult = scope.search(\r\n        INDEX_NAME, # Full text index name\r\n        search_req,\r\n        SearchOptions(\r\n        show_request=True,\r\n        log_request=True\r\n    ),\r\n).rows()\r\n\r\nfor row in result:\r\n    print(\"Found tweet \\\"{}\\\" \".format(row))\r\n<\/pre>\n<p>Since I want to see what I am doing, I am enabling the Couchbase SDK logs by setting this environment variable:<\/p>\n<pre class=\"lang:sh decode:true\" title=\"activate logging\">export PYCBC_LOG_LEVEL=info<\/pre>\n<p>If you are following along and everything goes well, you should get an error message!<\/p>\n<pre class=\"lang:sh decode:true\" title=\"Run the script and get an IndexNotFound error\">@ldoguin \u279c \/workspaces\/rag-demo-x (main) $ python read_vectorize_store_query_json.py\r\nTraceback (most recent call last):\r\n  File \"\/workspaces\/rag-demo-x\/read_vectorize_store_query_json.py\", line 167, in \r\n    for row in result:\r\n  File \"\/home\/vscode\/.local\/lib\/python3.11\/site-packages\/couchbase\/search.py\", line 136, in __next__\r\n    raise ex\r\n  File \"\/home\/vscode\/.local\/lib\/python3.11\/site-packages\/couchbase\/search.py\", line 130, in __next__\r\n    return 
self._get_next_row()\r\n           ^^^^^^^^^^^^^^^^^^^^\r\n  File \"\/home\/vscode\/.local\/lib\/python3.11\/site-packages\/couchbase\/search.py\", line 121, in _get_next_row\r\n    raise ErrorMapper.build_exception(row)\r\ncouchbase.exceptions.QueryIndexNotFoundException: QueryIndexNotFoundException()\r\n<\/pre>\n<p>And that is good news, because we get a <em>QueryIndexNotFoundException<\/em>. It is looking for an index that does not exist yet, so we need to create it. You can log into your cluster on Capella and follow along:<\/p>\n<div class=\"embed-wrapper\">\n<div class=\"webembed-wrapper\" style=\"border: solid 1px;\">\n<div style=\"width: 900px;\" class=\"wp-video\"><!--[if lt IE 9]><script>document.createElement('video');<\/script><![endif]-->\n<video class=\"wp-video-shortcode\" id=\"video-15671-1\" width=\"900\" height=\"506\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/04\/2024-04-24-16-02-17.mp4?_=1\" \/><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/04\/2024-04-24-16-02-17.mp4\">https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/04\/2024-04-24-16-02-17.mp4<\/a><\/video><\/div>\n<\/div>\n<\/div>\n<p>Once you have the index, you can run the script again, and you will get the following result:<\/p>\n<pre class=\"lang:sh decode:true\" title=\"Run the script and get the search results\">@ldoguin \u279c \/workspaces\/rag-demo-x (main) $ python read_vectorize_store_query_json.py\r\nFound tweet \"SearchRow(index='default._default.my_index_6933ea565b622355_4c1c5584', id='1775920020377502191', score=0.6803812980651855, fields=None, sort=[], locations=None, fragments={}, explanation={})\"\r\nFound tweet \"SearchRow(index='default._default.my_index_6933ea565b622355_4c1c5584', id='1775925931791745392', score=0.4303199052810669, fields=None, sort=[], 
locations=None, fragments={}, explanation={})\"\r\nFound tweet \"SearchRow(index='default._default.my_index_6933ea565b622355_4c1c5584', id='1775921934645006471', score=0.3621498942375183, fields=None, sort=[], locations=None, fragments={}, explanation={})\"\r\nFound tweet \"SearchRow(index='default._default.my_index_6933ea565b622355_4c1c5584', id='1776058836278727024', score=0.3274463415145874, fields=None, sort=[], locations=None, fragments={}, explanation={})\"\r\nFound tweet \"SearchRow(index='default._default.my_index_6933ea565b622355_4c1c5584', id='1775979601862307872', score=0.32539570331573486, fields=None, sort=[], locations=None, fragments={}, explanation={}\"\r\n<\/pre>\n<p>We get <em>SearchRow<\/em> objects that contain the index used, the document key, the associated score, and then a number of empty fields. You can see the results are ordered by <em>score<\/em>, with the tweet closest to the query first.<\/p>\n<p>How do we know it worked? The quickest thing to do is to look up the document with our IDE plugin. If you are using <a href=\"https:\/\/www.couchbase.com\/blog\/pt\/couchbase-visual-studio-code\/\">VSCode<\/a> or any <a href=\"https:\/\/www.couchbase.com\/blog\/pt\/couchbase-jetbrains-plugin\/\">JetBrains<\/a> IDE, it should be pretty easy. 
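The `id` in each SearchRow is the document key we upserted, so a hit can also be mapped back to the stored tweet with a key-value get (with the live SDK connection from earlier, that would be `collection.get(row_id).content_as[dict]`). A minimal sketch, using a plain dict as a hypothetical stand-in for the collection so it runs without a cluster:

```python
# Hypothetical stand-in for the Couchbase collection: document key -> stored document.
# With a live cluster this would be collection.get(row_id).content_as[dict] instead.
fake_collection = {
    "1775920020377502191": {
        "metadata": {"user": {"name": "Josh Long"}},
        "text": "@kelseyhightower SOCKS! I will throw millions of dollars "
                "at the first company to offer me socks!",
        "embedding": [-0.008340064, -0.03142008],  # truncated for the example
    }
}

def text_for_hit(store, row_id):
    # Map a SearchRow id back to the stored tweet text, or None if the key is unknown.
    doc = store.get(row_id)
    return doc["text"] if doc else None

print(text_for_hit(fake_collection, "1775920020377502191"))
```
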
You can also log into Couchbase Capella and find it there.<\/p>\n<p>Or we can modify the search index to store the text field and associated metadata, then run the query again:<\/p>\n<div style=\"border: solid 1px;\">\n<div style=\"width: 900px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-15671-2\" width=\"900\" height=\"506\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/04\/2024-04-24-16-03-17.mp4?_=2\" \/><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/04\/2024-04-24-16-03-17.mp4\">https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/04\/2024-04-24-16-03-17.mp4<\/a><\/video><\/div>\n<\/div>\n<div>\n<div>\n<pre class=\"lang:python decode:true\" title=\"Add search options\">result = scope.search(\r\n        INDEX_NAME,\r\n        search_req,\r\n        SearchOptions(\r\n        fields=[\"metadata.text\"],\r\n        show_request=True,\r\n        log_request=True\r\n    ),\r\n).rows()\r\n<\/pre>\n<pre class=\"lang:sh decode:true\" title=\"Get results with the selected fields\">@ldoguin \u279c \/workspaces\/rag-demo-x (main) $ python read_vectorize_store_query_json.py\r\nFound tweet \"SearchRow(index='default._default.my_index_6933ea565b622355_4c1c5584', id='1775920020377502191', score=0.6803812980651855, fields={'metadata.text': '@kelseyhightower SOCKS! I will throw millions of dollars at the first company to offer me socks!\\n\\nImportant to note here: I don\u2019t have millions of dollars! 
\\n\\nI think I might have a problem.'}, sort=[], locations=None, fragments={}, explanation={})\"\r\nFound tweet \"SearchRow(index='default._default.my_index_6933ea565b622355_4c1c5584', id='1775925931791745392', score=0.4303199052810669, fields={'metadata.text': \"@kelseyhightower If your t-shirt has a pleasant abstract design on it where the logo of your company isn't very obvious, I will wear that quite happily (thanks, Twilio)\\n\\nI also really like free socks\"}, sort=[], locations=None, fragments={}, explanation={})\"\r\nFound tweet \"SearchRow(index='default._default.my_index_6933ea565b622355_4c1c5584', id='1775921934645006471', score=0.3621498942375183, fields={'metadata.text': \"@kelseyhightower For some reason, devs think they aren't influenced by marketing even if they are\ud83d\ude05\\n\\nI'm influenced by social media &amp; fomo. If a lot of developers start talking about some framework or tool, I  look into it\\n\\nI also look into things that may benefit my career in the future\"}, sort=[], locations=None, fragments={}, explanation={})\"\r\nFound tweet \"SearchRow(index='default._default.my_index_6933ea565b622355_4c1c5584', id='1776058836278727024', score=0.3274463415145874, fields={'metadata.text': \"@kelseyhightower Have a good product. That's the best marketing there is!\"}, sort=[], locations=None, fragments={}, explanation={})\"\r\nFound tweet \"SearchRow(index='default._default.my_index_6933ea565b622355_4c1c5584', id='1775979601862307872', score=0.32539570331573486, fields={'metadata.text': '@kelseyhightower From a security standpoint, marketing that works on me:\\n\\nShowing strong technical expertise. If you\u2019re of the few shops that consistently puts out good research and quality writeups? When I\u2019m looking at vendors, I\u2019m looking at you. 
When I\u2019m not looking, I\u2019m noting it for later'}, sort=[], locations=None, fragments={}, explanation={})\"\r\n<\/pre>\n<\/div>\n<\/div>\n<h2 id=\"heading-conclusion\">Conclusion<\/h2>\n<p>So it worked: Josh's tweet about socks shows up at the top of the search. You now know how to extract data from Twitter, turn tweets into vectors, and store, index and query them in Couchbase. What does all this have to do with LLMs and AI? We will talk more about that in the next post!<\/p>\n<ul>\n<li><a href=\"https:\/\/www.couchbase.com\/blog\/pt\/twitter-thread-tldr-with-ai-part-2\/\">Continue reading in Part 2<\/a>.<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>Because who has the time ? (also part 1 because it took me further than I expected \ud83d\ude2c) Couchbase recently introduced support for Vector Search. And I have been looking for an excuse to play with it. As it turns [&hellip;]<\/p>","protected":false},"author":49,"featured_media":15703,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[10122,2165,9139,9937],"tags":[1696],"ppma_author":[9023],"class_list":["post-15671","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","category-full-text-search","category-python","category-vector-search","tag-indexing"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.8 (Yoast SEO v25.8) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How to Create a Twitter Summary with AI and Vector Search - The Couchbase Blog<\/title>\n<meta name=\"description\" content=\"Discover how Couchbase Vector Search can enable RAG architecture through a real live example: summarizing a twitter thread with a LLM\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, 
max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/pt\/twitter-thread-tldr-with-ai-part-1\/\" \/>\n<meta property=\"og:locale\" content=\"pt_BR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Twitter Thread tl;dr With AI? Part 1\" \/>\n<meta property=\"og:description\" content=\"Discover how Couchbase Vector Search can enable RAG architecture through a real live example: summarizing a twitter thread with a LLM\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/pt\/twitter-thread-tldr-with-ai-part-1\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2024-05-07T15:05:04+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-14T03:27:16+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/Designer-3.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"721\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Laurent Doguin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@ldoguin\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"unstructured.io\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/\"},\"author\":{\"name\":\"Laurent Doguin\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e\"},\"headline\":\"Twitter Thread tl;dr With AI? Part 1\",\"datePublished\":\"2024-05-07T15:05:04+00:00\",\"dateModified\":\"2025-06-14T03:27:16+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/\"},\"wordCount\":1465,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/Designer-3.png\",\"keywords\":[\"Indexing\"],\"articleSection\":[\"Artificial Intelligence (AI)\",\"Full-Text Search\",\"Python\",\"Vector Search\"],\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/\",\"name\":\"How to Create a Twitter Summary with AI and Vector Search - The Couchbase 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/Designer-3.png\",\"datePublished\":\"2024-05-07T15:05:04+00:00\",\"dateModified\":\"2025-06-14T03:27:16+00:00\",\"description\":\"Discover how Couchbase Vector Search can enable RAG architecture through a real live example: summarizing a twitter thread with a LLM\",\"breadcrumb\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#breadcrumb\"},\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#primaryimage\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/Designer-3.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/Designer-3.png\",\"width\":1200,\"height\":721},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.couchbase.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Twitter Thread tl;dr With AI? 
Part 1\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"name\":\"The Couchbase Blog\",\"description\":\"Couchbase, the NoSQL Database\",\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-BR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\",\"name\":\"The Couchbase Blog\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"width\":218,\"height\":34,\"caption\":\"The Couchbase Blog\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e\",\"name\":\"Laurent Doguin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/12929ce99397769f362b7a90d6b85071\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g\",\"caption\":\"Laurent Doguin\"},\"description\":\"Laurent is a nerdy metal head who lives in Paris. 
He mostly writes code in Java and structured text in AsciiDoc, and often talks about data, reactive programming and other buzzwordy stuff. He is also a former Developer Advocate for Clever Cloud and Nuxeo where he devoted his time and expertise to helping those communities grow bigger and stronger. He now runs Developer Relations at Couchbase.\",\"sameAs\":[\"https:\/\/x.com\/ldoguin\"],\"honorificPrefix\":\"Mr\",\"birthDate\":\"1985-06-07\",\"gender\":\"male\",\"award\":[\"Devoxx Champion\",\"Couchbase Legend\"],\"knowsAbout\":[\"Java\"],\"knowsLanguage\":[\"English\",\"French\"],\"jobTitle\":\"Director Developer Relation & Strategy\",\"worksFor\":\"Couchbase\",\"url\":\"https:\/\/www.couchbase.com\/blog\/pt\/author\/laurent-doguin\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"How to Create a Twitter Summary with AI and Vector Search - The Couchbase Blog","description":"Descubra como o Couchbase Vector Search pode habilitar a arquitetura RAG por meio de um exemplo real: resumir um t\u00f3pico do Twitter com um LLM","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.couchbase.com\/blog\/pt\/twitter-thread-tldr-with-ai-part-1\/","og_locale":"pt_BR","og_type":"article","og_title":"Twitter Thread tl;dr With AI? 
Part 1","og_description":"Discover how Couchbase Vector Search can enable RAG architecture through a real live example: summarizing a twitter thread with a LLM","og_url":"https:\/\/www.couchbase.com\/blog\/pt\/twitter-thread-tldr-with-ai-part-1\/","og_site_name":"The Couchbase Blog","article_published_time":"2024-05-07T15:05:04+00:00","article_modified_time":"2025-06-14T03:27:16+00:00","og_image":[{"width":1200,"height":721,"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/Designer-3.png","type":"image\/png"}],"author":"Laurent Doguin","twitter_card":"summary_large_image","twitter_creator":"@ldoguin","twitter_misc":{"Written by":"unstructured.io","Est. reading time":"7 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#article","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/"},"author":{"name":"Laurent Doguin","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e"},"headline":"Twitter Thread tl;dr With AI? 
Part 1","datePublished":"2024-05-07T15:05:04+00:00","dateModified":"2025-06-14T03:27:16+00:00","mainEntityOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/"},"wordCount":1465,"commentCount":0,"publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/Designer-3.png","keywords":["Indexing"],"articleSection":["Artificial Intelligence (AI)","Full-Text Search","Python","Vector Search"],"inLanguage":"pt-BR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/","url":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/","name":"How to Create a Twitter Summary with AI and Vector Search - The Couchbase Blog","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#primaryimage"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/Designer-3.png","datePublished":"2024-05-07T15:05:04+00:00","dateModified":"2025-06-14T03:27:16+00:00","description":"Descubra como o Couchbase Vector Search pode habilitar a arquitetura RAG por meio de um exemplo real: resumir um t\u00f3pico do Twitter com um 
LLM","breadcrumb":{"@id":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#breadcrumb"},"inLanguage":"pt-BR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/"]}]},{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#primaryimage","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/Designer-3.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/Designer-3.png","width":1200,"height":721},{"@type":"BreadcrumbList","@id":"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.couchbase.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Twitter Thread tl;dr With AI? Part 1"}]},{"@type":"WebSite","@id":"https:\/\/www.couchbase.com\/blog\/#website","url":"https:\/\/www.couchbase.com\/blog\/","name":"Blog do Couchbase","description":"Couchbase, o banco de dados NoSQL","publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-BR"},{"@type":"Organization","@id":"https:\/\/www.couchbase.com\/blog\/#organization","name":"Blog do Couchbase","url":"https:\/\/www.couchbase.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","width":218,"height":34,"caption":"The Couchbase 
Blog"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e","name":"Laurent Doguin","image":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/12929ce99397769f362b7a90d6b85071","url":"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g","caption":"Laurent Doguin"},"description":"Laurent \u00e9 um nerd metaleiro que mora em Paris. Em sua maior parte, ele escreve c\u00f3digo em Java e texto estruturado em AsciiDoc, e frequentemente fala sobre dados, programa\u00e7\u00e3o reativa e outras coisas que est\u00e3o na moda. Ele tamb\u00e9m foi Developer Advocate do Clever Cloud e do Nuxeo, onde dedicou seu tempo e experi\u00eancia para ajudar essas comunidades a crescerem e se fortalecerem. Atualmente, ele dirige as Rela\u00e7\u00f5es com Desenvolvedores na Couchbase.","sameAs":["https:\/\/x.com\/ldoguin"],"honorificPrefix":"Mr","birthDate":"1985-06-07","gender":"male","award":["Devoxx Champion","Couchbase Legend"],"knowsAbout":["Java"],"knowsLanguage":["English","French"],"jobTitle":"Director Developer Relation & Strategy","worksFor":"Couchbase","url":"https:\/\/www.couchbase.com\/blog\/pt\/author\/laurent-doguin\/"}]}},"authors":[{"term_id":9023,"user_id":49,"is_guest":0,"slug":"laurent-doguin","display_name":"Laurent Doguin","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g","author_category":"","last_name":"Doguin","first_name":"Laurent","job_title":"","user_url":"","description":"Laurent \u00e9 um nerd metaleiro que mora em Paris. 
Em sua maior parte, ele escreve c\u00f3digo em Java e texto estruturado em AsciiDoc, e frequentemente fala sobre dados, programa\u00e7\u00e3o reativa e outras coisas que est\u00e3o na moda. Ele tamb\u00e9m foi Developer Advocate do Clever Cloud e do Nuxeo, onde dedicou seu tempo e experi\u00eancia para ajudar essas comunidades a crescerem e se fortalecerem. Atualmente, ele dirige as Rela\u00e7\u00f5es com Desenvolvedores na Couchbase."}],"_links":{"self":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts\/15671","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/users\/49"}],"replies":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/comments?post=15671"}],"version-history":[{"count":0,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts\/15671\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/media\/15703"}],"wp:attachment":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/media?parent=15671"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/categories?post=15671"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/tags?post=15671"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/ppma_author?post=15671"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}