{"id":15711,"date":"2024-05-17T14:48:54","date_gmt":"2024-05-17T21:48:54","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=15711"},"modified":"2025-06-13T21:24:53","modified_gmt":"2025-06-14T04:24:53","slug":"twitter-thread-tldr-with-ai-part-2","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/es\/twitter-thread-tldr-with-ai-part-2\/","title":{"rendered":"Twitter Thread tl;dr \u00bfCon IA? Parte 2"},"content":{"rendered":"<p>In <a href=\"https:\/\/www.couchbase.com\/blog\/twitter-thread-tldr-with-ai-part-1\/\">part 1<\/a> we saw how to scrape Twitter, turn tweets in JSON documents, get an embedding representation of that tweet, store everything in Couchbase and how to run a vector search. These are the first steps of a Retrieval Augmented Generation architecture that could summarize a twitter thread. The next step is to use a Large Language Model. We can prompt it to summarize the thread, and we can enrich the context of the prompt thanks to Vector Search.<\/p>\n<h2 id=\"langchain-and-streamlit\">LangChain and Streamlit<\/h2>\n<p>So how do we make this all work together with an LLM? That&#8217;s where the LangChain project can help. Its goal is to enable developers to build LLM-based applications. We already have some samples available on <a href=\"https:\/\/github.com\/couchbase-examples\/\">GitHub<\/a> that showcase our LangChain module. Like this RAG demo allowing the user to upload a PDF, vectorize it, store it in Couchbase and <a href=\"https:\/\/developer.couchbase.com\/tutorial-nodejs-langchain-pdf-chat\">use it in a chatbot<\/a>. That one is in JavaScript, but there is also a <a href=\"https:\/\/github.com\/couchbase-examples\/rag-demo\">Python version<\/a>.<\/p>\n<p>As it turns out, this is exactly what I want to do, except it&#8217;s using a PDF instead of a list of tweets. So I forked it and <a href=\"https:\/\/github.com\/ldoguin\/rag-demo-x\">started playing with it here<\/a>. 
\u00a0Here, Nithish is using a couple interesting libraries, LangChain of course, and Streamlit. Another cool thing to learn! Streamlit is like a <em>PaaS meet low code meet data science<\/em> service. It allows you to deploy data-based apps very easily, with minimum code, in a very, very opinionated way.<\/p>\n<h2 id=\"configuration\">Configuration<\/h2>\n<p>Let&#8217;s break down the code in smaller chunks. We can start with the configuration. The following method makes sure the right environment variables are set, and stops the application deployment if they are not.<\/p>\n<p>The <em>check_environment_variable<\/em> method is called several time to make sure the needed configuration is set, and if not will stop the app.<code class=\"lang-python\"><\/code><\/p>\n<pre class=\"lang:python decode:true\">def check_environment_variable(variable_name):\r\n\"\"\"Check if environment variable is set\"\"\"\r\nif variable_name not in os.environ:\r\n    st.error(f\"{variable_name} environment variable is not set. 
Please add it to the secrets.toml file\")\r\n    st.stop()<\/pre>\n<pre class=\"lang:python decode:true \"> \tcheck_environment_variable(\"OPENAI_API_KEY\") # The OpenApi API Key I have created and user earlier\r\n   \tcheck_environment_variable(\"DB_CONN_STR\") # A connection string to connect to Couchbase, like couchbase:\/\/localhost or couchbases:\/\/cb.abab-abab.cloud.couchbase.com\r\n   \tcheck_environment_variable(\"DB_USERNAME\") # Username\r\n   \tcheck_environment_variable(\"DB_PASSWORD\") # And password to connect to Couchbase\r\n   \tcheck_environment_variable(\"DB_BUCKET\") # The name of the bucket containing our scopes and collection\r\n   \tcheck_environment_variable(\"DB_SCOPE\") # Scope\r\n   \tcheck_environment_variable(\"DB_COLLECTION\") # and collection name, you can think of a collection as a table in RDBMS\r\n   \tcheck_environment_variable(\"INDEX_NAME\") # The name of the search vector index<\/pre>\n<p>This means everything in there is needed. A connection to <strong>OpenAI<\/strong> and to <strong>Couchbase<\/strong>. Let&#8217;s quickly talk about Couchbase. It&#8217;s a JSON, multi-model distributed database with an integrated cache. You can use it as K\/V, SQL, Full-text Search, Time Series, Analytics, and we added fantastic new features in 7.6: Recursive CTEs to do graph queries, or the one that interests us most today, Vector Search. Fastest way to try is to go to <a href=\"https:\/\/cloud.couchbase.com\">cloud.couchbase.com<\/a>, there is a 30 day trial, no credit card required.<\/p>\n<p>From there you can follow the steps and get your new cluster setup. Setup a bucket, scope, collection and index, a user and make sure your cluster is available from outside and you can move on to the next part. Getting a connection to Couchbase from the app. It can be done with these two functions. You can see they are annotated with <code>@st.cache_resource<\/code>. It&#8217;s used to cache the object from Streamlit&#8217;s perspective. 
It makes it available for other instances or reruns. Here&#8217;s the doc excerpt:<\/p>\n<blockquote><p>Decorator to cache functions that return global resources (e.g. database connections, ML models).<\/p>\n<p>Cached objects are shared across all users, sessions, and reruns. They must be thread-safe because they can be accessed from multiple threads concurrently. If thread safety is an issue, consider using st.session_state to store resources per session instead.<\/p><\/blockquote>\n<pre class=\"lang:python decode:true\">from langchain_community.vectorstores import CouchbaseVectorStore\r\nfrom langchain_openai import OpenAIEmbeddings\r\n\r\n@st.cache_resource(show_spinner=\"Connecting to Vector Store\")\r\ndef get_vector_store(\r\n    _cluster,\r\n    db_bucket,\r\n    db_scope,\r\n    db_collection,\r\n    _embedding,\r\n    index_name,\r\n):\r\n    \"\"\"Return the Couchbase vector store\"\"\"\r\n    vector_store = CouchbaseVectorStore(\r\n        cluster=_cluster,\r\n        bucket_name=db_bucket,\r\n        scope_name=db_scope,\r\n        collection_name=db_collection,\r\n        embedding=_embedding,\r\n        index_name=index_name,\r\n        text_key=\"text\",  # name of the document field holding the text\r\n    )\r\n    return vector_store\r\n\r\n@st.cache_resource(show_spinner=\"Connecting to Couchbase\")\r\ndef connect_to_couchbase(connection_string, db_username, db_password):\r\n    \"\"\"Connect to couchbase\"\"\"\r\n    from couchbase.cluster import Cluster\r\n    from couchbase.auth import PasswordAuthenticator\r\n    from couchbase.options import ClusterOptions\r\n    from datetime import timedelta\r\n\r\n    auth = PasswordAuthenticator(db_username, db_password)\r\n    options = ClusterOptions(auth)\r\n    cluster = Cluster(connection_string, options)\r\n    # Wait until the cluster is ready for use.\r\n    cluster.wait_until_ready(timedelta(seconds=5))\r\n    return cluster<\/pre>\n<p>So with this we have a connection to the Couchbase cluster and a connection to the 
LangChain Couchbase vector store wrapper.<\/p>\n<p><code>connect_to_couchbase(connection_string, db_username, db_password)<\/code> creates the Couchbase cluster connection. <code>get_vector_store(_cluster, db_bucket, db_scope, db_collection, _embedding, index_name)<\/code> creates the <em>CouchbaseVectorStore<\/em> wrapper. It holds a connection to the cluster, the bucket\/scope\/collection information used to store data, the index name that lets us query the vectors, and an embedding property.<\/p>\n<p>Here the embedding property refers to the OpenAIEmbeddings object. It automatically picks up <em>OPENAI_API_KEY<\/em> and allows LangChain to use OpenAI&#8217;s API with that key. Every API call is handled transparently by LangChain, which also means that switching model providers should be fairly painless when it comes to embedding management.<\/p>\n<h2 id=\"writing-langchain-documents-to-couchbase\">Writing LangChain Documents to Couchbase<\/h2>\n<p>Now for where the magic happens: we get the tweets, parse them as JSON, create the embeddings and write the JSON docs to the chosen Couchbase collection. 
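<\/p>
<p>To see why swapping embedding providers stays painless, here is a minimal, self-contained sketch of the idea. Everything in it (the <em>FakeEmbeddings<\/em> class and the toy store) is made up for illustration and is not the LangChain or Couchbase API: the store only needs an object that turns text into a list of floats, and retrieval is a nearest-vector lookup.<\/p>

```python
import math


class FakeEmbeddings:
    """Stand-in for a real provider such as OpenAIEmbeddings (illustration only)."""

    def embed_query(self, text):
        # Map text to a tiny deterministic vector; a real model returns ~1536 floats.
        t = text.lower()
        return [t.count('sock'), t.count('couchbase'), len(t) / 100.0]


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class ToyVectorStore:
    """The store never cares which provider produced the vectors."""

    def __init__(self, embedding):
        self.embedding = embedding
        self.rows = []  # (text, vector) pairs

    def add_texts(self, texts):
        for t in texts:
            self.rows.append((t, self.embedding.embed_query(t)))

    def similarity_search(self, query):
        # Return the stored text whose vector is closest to the query vector.
        qv = self.embedding.embed_query(query)
        return max(self.rows, key=lambda row: cosine(qv, row[1]))[0]


store = ToyVectorStore(FakeEmbeddings())
store.add_texts(['I will pay millions for socks', 'Couchbase 7.6 adds Vector Search'])
print(store.similarity_search('who wants socks?'))  # -> I will pay millions for socks
```

<p>Replacing <em>FakeEmbeddings<\/em> with a real provider changes the vectors, not the store code, which is why the switch is contained to one line.<\/p>
<p>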
Thanks to Streamlit we can set up a file upload widget and execute an associated function:<\/p>\n<pre class=\"lang:python decode:true\">import json\r\n\r\nimport streamlit as st\r\nfrom langchain.docstore.document import Document\r\n\r\ndef save_tweet_to_vector_store(uploaded_file, vector_store):\r\n    if uploaded_file is None:\r\n        return\r\n    data = json.load(uploaded_file)  # Parse the uploaded file as JSON, expecting an array of objects\r\n\r\n    docs = []\r\n    ids = []\r\n    for tweet in data:  # For each JSON tweet\r\n        text = tweet['text']\r\n        full_text = tweet.get('full_text')  # Not every tweet has a full_text value\r\n        id = tweet['id']\r\n        # Create the LangChain Document, with a text field and associated metadata.\r\n        if full_text is not None:\r\n            doc = Document(page_content=full_text, metadata=tweet)\r\n        else:\r\n            doc = Document(page_content=text, metadata=tweet)\r\n        docs.append(doc)\r\n        ids.append(id)  # Matching array of Couchbase doc IDs; if not provided, UUIDs are generated automatically\r\n\r\n    vector_store.add_documents(documents=docs, ids=ids)  # Store all documents and embeddings\r\n    st.info(f\"tweet and replies loaded into vector store in {len(docs)} documents\")<\/pre>\n<p>It looks somewhat similar to the code in part 1, except all the embedding creation is managed transparently by LangChain. The text field is vectorized and the metadata is added to the Couchbase doc, which will look like this:<\/p>\n<pre class=\"lang:js decode:true\">{\r\n\"text\": \"@kelseyhightower SOCKS! I will throw millions of dollars at the first company to offer me socks!\\n\\nImportant to note here: I don\u2019t have millions of dollars! 
\\n\\nI think I might have a problem.\",\r\n\"embedding\": [\r\n    -0.0006439118069540552,\r\n    -0.021693240183757154,\r\n    0.026031888593037636,\r\n    -0.020210755239867904,\r\n    -0.003226784468532888,\r\n    .......\r\n    -0.01691936794757287\r\n],\r\n\"metadata\": {\r\n    \"created_at\": \"Thu Apr 04 16:15:02 +0000 2024\",\r\n    \"id\": \"1775920020377502191\",\r\n    \"full_text\": null,\r\n    \"text\": \"@kelseyhightower SOCKS! I will throw millions of dollars at the first company to offer me socks!\\n\\nImportant to note here: I don\u2019t have millions of dollars! \\n\\nI think I might have a problem.\",\r\n    \"lang\": \"en\",\r\n    \"in_reply_to\": \"1775913633064894669\",\r\n    \"quote_count\": 1,\r\n    \"reply_count\": 3,\r\n    \"favorite_count\": 23,\r\n    \"view_count\": \"4658\",\r\n    \"hashtags\": [],\r\n    \"user\": {\r\n    \"id\": \"4324751\",\r\n    \"name\": \"Josh Long\",\r\n    \"screen_name \": \"starbuxman\",\r\n    \"url \": \"https:\/\/t.co\/PrSomoWx53\"\r\n    }\r\n}<\/pre>\n<p>We now have functions to manage the tweet upload, vectorize the tweets and store them in Couchbase. Time to use Streamlit to build the actual app and manage the chat flow. Let&#8217;s split that work into several chunks.<\/p>\n<h2 id=\"write-a-streamlit-application\">Write a Streamlit Application<\/h2>\n<p>We start with the main declaration and the protection of the app: you don&#8217;t want just anyone using it and burning your OpenAI credits. Thanks to Streamlit it can be done fairly easily. Here we set up password protection using the <em>LOGIN_PASSWORD<\/em> env variable, and we also set up the global page config thanks to the <code>set_page_config<\/code> method. 
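<\/p>
<p>One optional hardening worth considering for this password check (this tweak is my addition, not part of the original app): compare the submitted password with <code>hmac.compare_digest<\/code> instead of <code>==<\/code>, so the comparison takes the same time whether or not an early character differs.<\/p>

```python
import hmac
import os


def password_matches(submitted: str) -> bool:
    """Constant-time comparison against the LOGIN_PASSWORD env variable."""
    expected = os.environ.get('LOGIN_PASSWORD', '')
    # compare_digest avoids leaking password prefixes through timing differences.
    return bool(expected) and hmac.compare_digest(submitted.encode(), expected.encode())


os.environ['LOGIN_PASSWORD'] = 'hunter2'  # demo value; normally set via secrets.toml
print(password_matches('hunter2'))  # True
print(password_matches('wrong'))    # False
```

<p>Such a helper would replace the direct <code>user_pwd == AUTH<\/code> comparison.<\/p>
<p>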
This will give you a simple form to enter the password, and a simple page.<\/p>\n<pre class=\"lang:python decode:true\">if __name__ == \"__main__\":\r\n    # Authorization\r\n    if \"auth\" not in st.session_state:\r\n        st.session_state.auth = False\r\n    st.set_page_config(\r\n        page_title=\"Chat with a tweet export using Langchain, Couchbase &amp; OpenAI\",\r\n        page_icon=\"\ud83e\udd16\",\r\n        layout=\"centered\",\r\n        initial_sidebar_state=\"auto\",\r\n        menu_items=None,\r\n    )\r\n    AUTH = os.getenv(\"LOGIN_PASSWORD\")\r\n    check_environment_variable(\"LOGIN_PASSWORD\")\r\n    # Authentication\r\n    user_pwd = st.text_input(\"Enter password\", type=\"password\")\r\n    pwd_submit = st.button(\"Submit\")\r\n    if pwd_submit and user_pwd == AUTH:\r\n        st.session_state.auth = True\r\n    elif pwd_submit and user_pwd != AUTH:\r\n        st.error(\"Incorrect password\")<\/pre>\n<p>To go a bit further we can add the environment variable checks, OpenAI and Couchbase configuration, and a simple title to start the app flow.<\/p>\n<pre class=\"lang:python decode:true\">if st.session_state.auth:\r\n    # Load environment variables\r\n    DB_CONN_STR = os.getenv(\"DB_CONN_STR\")\r\n    DB_USERNAME = os.getenv(\"DB_USERNAME\")\r\n    DB_PASSWORD = os.getenv(\"DB_PASSWORD\")\r\n    DB_BUCKET = os.getenv(\"DB_BUCKET\")\r\n    DB_SCOPE = os.getenv(\"DB_SCOPE\")\r\n    DB_COLLECTION = os.getenv(\"DB_COLLECTION\")\r\n    INDEX_NAME = os.getenv(\"INDEX_NAME\")\r\n\r\n    # Ensure that all environment variables are set\r\n    check_environment_variable(\"OPENAI_API_KEY\")\r\n    check_environment_variable(\"DB_CONN_STR\")\r\n    check_environment_variable(\"DB_USERNAME\")\r\n    check_environment_variable(\"DB_PASSWORD\")\r\n    check_environment_variable(\"DB_BUCKET\")\r\n    check_environment_variable(\"DB_SCOPE\")\r\n    check_environment_variable(\"DB_COLLECTION\")\r\n    check_environment_variable(\"INDEX_NAME\")\r\n\r\n    # Use OpenAI Embeddings\r\n    embedding = OpenAIEmbeddings()\r\n    # Connect to Couchbase Vector Store\r\n    cluster = connect_to_couchbase(DB_CONN_STR, DB_USERNAME, DB_PASSWORD)\r\n    vector_store = get_vector_store(\r\n        cluster,\r\n        DB_BUCKET,\r\n        DB_SCOPE,\r\n        DB_COLLECTION,\r\n        embedding,\r\n        INDEX_NAME,\r\n    )\r\n    st.title(\"Chat with X\")<\/pre>\n<p>Streamlit has a nice Codespaces integration that I really encourage you to use; it makes development very easy. Our VS Code plugin can also be installed, so you can browse Couchbase and execute queries.<\/p>\n<div id=\"attachment_15714\" style=\"width: 1043px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-15714\" class=\" wp-image-15714\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/05\/e87f1541-3dcf-406a-9e97-711f9a287c90-300x158.png\" alt=\"Run SQL++ Vector Search query from Codespace\" width=\"1033\" height=\"544\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/e87f1541-3dcf-406a-9e97-711f9a287c90-300x158.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/e87f1541-3dcf-406a-9e97-711f9a287c90-1024x540.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/e87f1541-3dcf-406a-9e97-711f9a287c90-768x405.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/e87f1541-3dcf-406a-9e97-711f9a287c90-1536x810.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/e87f1541-3dcf-406a-9e97-711f9a287c90-2048x1080.png 2048w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/e87f1541-3dcf-406a-9e97-711f9a287c90-1320x696.png 1320w\" sizes=\"auto, (max-width: 1033px) 100vw, 1033px\" \/><p id=\"caption-attachment-15714\" class=\"wp-caption-text\">Run SQL++ Vector Search query from Codespace<\/p><\/div>\n<div id=\"attachment_15713\" style=\"width: 1045px\" class=\"wp-caption alignnone\"><img 
loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-15713\" class=\" wp-image-15713\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/05\/basicapp-300x113.png\" alt=\"A Basic streamlit application opened in codespace\" width=\"1035\" height=\"390\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/basicapp-300x113.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/basicapp-1024x385.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/basicapp-768x289.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/basicapp-1536x578.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/basicapp-1320x497.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/basicapp.png 1600w\" sizes=\"auto, (max-width: 1035px) 100vw, 1035px\" \/><p id=\"caption-attachment-15713\" class=\"wp-caption-text\">A Basic Streamlit application opened in Codespace<\/p><\/div>\n<h2 id=\"create-langchain-chains\">Create LangChain Chains<\/h2>\n<p>After that comes the chain setup. That\u2019s really where LangChain shines. This is where we can set up the <em>retriever<\/em>. It\u2019s going to be used by LangChain to query Couchbase for all the vectorized tweets. Then it\u2019s time to build the RAG prompt. You can see the template takes a <em>{context}<\/em> and <em>{question}<\/em> parameter. We create a Chat prompt object from the template.<\/p>\n<p>After that comes the LLM choice, here I chose GPT4. And finally the chain creation.<\/p>\n<pre class=\"lang:python decode:true\"># Use couchbase vector store as a retriever for RAG\r\nretriever = vector_store.as_retriever()\r\n# Build the prompt for the RAG\r\ntemplate = \"\"\"You are a helpful bot. If you cannot answer based on the context provided, respond with a generic answer. 
Answer the question as truthfully as possible using the context below:\r\n{context}\r\nQuestion: {question}\"\"\"\r\nprompt = ChatPromptTemplate.from_template(template)\r\n# Use OpenAI GPT-4 as the LLM for the RAG\r\nllm = ChatOpenAI(temperature=0, model=\"gpt-4-1106-preview\", streaming=True)\r\n# RAG chain\r\nchain = (\r\n    {\"context\": retriever, \"question\": RunnablePassthrough()}\r\n    | prompt\r\n    | llm\r\n    | StrOutputParser()\r\n)<\/pre>\n<p>The chain is built from the chosen model, the context and question parameters, the prompt object and a <em>StrOutputParser<\/em>, whose role is to parse the LLM response and send it back as a streamable\/chunkable string. The <em>RunnablePassthrough<\/em> used for the question parameter makes sure it\u2019s passed to the prompt \u2018as is\u2019, but you can use other runnables to transform\/sanitize the question. That&#8217;s it: a RAG architecture, giving some additional context to an LLM prompt to get a better answer.<\/p>\n<p>We can also build one chain without it to compare the results:<\/p>\n<pre class=\"lang:python decode:true\"># Pure OpenAI output without RAG\r\ntemplate_without_rag = \"\"\"You are a helpful bot. Answer the question as truthfully as possible.\r\nQuestion: {question}\"\"\"\r\nprompt_without_rag = ChatPromptTemplate.from_template(template_without_rag)\r\nllm_without_rag = ChatOpenAI(model=\"gpt-4-1106-preview\")\r\nchain_without_rag = (\r\n    {\"question\": RunnablePassthrough()}\r\n    | prompt_without_rag\r\n    | llm_without_rag\r\n    | StrOutputParser()\r\n)<\/pre>\n<p>No need for context in the prompt template and chain parameters, and no need for a retriever.<\/p>\n<p>Now that we have a couple of chains, we can use them through Streamlit. 
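<\/p>
<p>If the pipe syntax looks like magic, here is roughly what it boils down to in plain Python. The functions below are stand-ins made up for illustration (no LangChain or OpenAI involved): each stage receives the previous stage&#8217;s output, exactly like the runnables composed with <code>|<\/code> above.<\/p>

```python
def retriever(question):
    # Stand-in for vector_store.as_retriever(): returns context relevant to the question.
    return 'Josh Long tweeted that he would pay millions for socks.'


def prompt(inputs):
    # Stand-in for ChatPromptTemplate: fills {context} and {question} into the template.
    return ('You are a helpful bot. Answer the question as truthfully as '
            'possible using the context below:\n' + inputs['context'] +
            '\nQuestion: ' + inputs['question'])


def llm(prompt_text):
    # Stand-in for ChatOpenAI: a real chain would call the model here.
    return 'LLM answer based on: ' + prompt_text.splitlines()[1]


def str_output_parser(message):
    # Stand-in for StrOutputParser: normalizes the model output to a string.
    return str(message)


def chain(question):
    # Same shape as {'context': retriever, 'question': RunnablePassthrough()} | prompt | llm | StrOutputParser()
    inputs = {'context': retriever(question), 'question': question}
    return str_output_parser(llm(prompt(inputs)))


print(chain('Are socks important?'))
```

<p>The real chain behaves the same way, except the retriever does a vector search and the LLM stage streams tokens instead of returning one string.<\/p>
<p>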
This code will add the intro message and the sidebar, allowing for <em>file upload<\/em>:<\/p>\n<pre class=\"lang:python decode:true\"># Frontend\r\ncouchbase_logo = (\r\n    \"https:\/\/emoji.slack-edge.com\/T024FJS4M\/couchbase\/4a361e948b15ed91.png\"\r\n)\r\nst.markdown(\r\n    \"Answers with [Couchbase logo](https:\/\/emoji.slack-edge.com\/T024FJS4M\/couchbase\/4a361e948b15ed91.png) are generated using RAG while \ud83e\udd16 are generated by pure LLM (ChatGPT)\"\r\n)\r\n\r\nwith st.sidebar:\r\n    st.header(\"Upload your X\")\r\n    with st.form(\"upload X\"):\r\n        uploaded_file = st.file_uploader(\r\n            \"Choose an X export.\",\r\n            help=\"The document will be deleted after one hour of inactivity (TTL).\",\r\n            type=\"json\",\r\n        )\r\n        # The submit button must live inside the form block\r\n        submitted = st.form_submit_button(\"Upload\")\r\n    if submitted:\r\n        # store the tweets in the vector store\r\n        save_tweet_to_vector_store(uploaded_file, vector_store)\r\n\r\n    st.subheader(\"How does it work?\")\r\n    st.markdown(\r\n    \"\"\"\r\n    For each question, you will get two answers:\r\n    * one using RAG ([Couchbase logo](https:\/\/emoji.slack-edge.com\/T024FJS4M\/couchbase\/4a361e948b15ed91.png))\r\n    * one using pure LLM - OpenAI (\ud83e\udd16).\r\n    \"\"\"\r\n    )\r\n    st.markdown(\r\n    \"For RAG, we are using [Langchain](https:\/\/langchain.com\/), [Couchbase Vector Search](https:\/\/couchbase.com\/) &amp; [OpenAI](https:\/\/openai.com\/). We fetch tweets relevant to the question using Vector search &amp; add it as the context to the LLM. 
The LLM is instructed to answer based on the context from the Vector Store.\"\r\n    )\r\n    # View Code\r\n    if st.checkbox(\"View Code\"):\r\n        st.write(\r\n        \"View the code here: [Github](https:\/\/github.com\/couchbase-examples\/rag-demo\/blob\/main\/chat_with_x.py)\"\r\n        )<\/pre>\n<p>Then the instructions and input logic:<\/p>\n<pre class=\"lang:python decode:true \"># Look at the message history and append the first message if empty\r\nif \"messages\" not in st.session_state:\r\n    st.session_state.messages = []\r\n    st.session_state.messages.append(\r\n        {\r\n            \"role\": \"assistant\",\r\n            \"content\": \"Hi, I'm a chatbot who can chat with the tweets. How can I help you?\",\r\n            \"avatar\": \"\ud83e\udd16\",\r\n        }\r\n    )\r\n\r\n# Display chat messages from history on app rerun\r\nfor message in st.session_state.messages:\r\n    with st.chat_message(message[\"role\"], avatar=message[\"avatar\"]):\r\n        st.markdown(message[\"content\"])\r\n\r\n# React to user input\r\nif question := st.chat_input(\"Ask a question based on the Tweets\"):\r\n    # Display user message in chat message container\r\n    st.chat_message(\"user\").markdown(question)\r\n\r\n    # Add user message to chat history\r\n    st.session_state.messages.append(\r\n        {\"role\": \"user\", \"content\": question, \"avatar\": \"\ud83d\udc64\"}\r\n    )\r\n\r\n    # Add placeholder for streaming the response\r\n    with st.chat_message(\"assistant\", avatar=couchbase_logo):\r\n        message_placeholder = st.empty()\r\n\r\n    # stream the response from the RAG\r\n    rag_response = \"\"\r\n    for chunk in chain.stream(question):\r\n        rag_response += chunk\r\n        message_placeholder.markdown(rag_response + \"\u258c\")\r\n\r\n    message_placeholder.markdown(rag_response)\r\n    st.session_state.messages.append(\r\n        {\r\n            \"role\": \"assistant\",\r\n            \"content\": 
rag_response,\r\n            \"avatar\": couchbase_logo,\r\n        }\r\n    )\r\n\r\n    # stream the response from the pure LLM\r\n\r\n    # Add placeholder for streaming the response\r\n    with st.chat_message(\"ai\", avatar=\"\ud83e\udd16\"):\r\n        message_placeholder_pure_llm = st.empty()\r\n\r\n    pure_llm_response = \"\"\r\n\r\n    for chunk in chain_without_rag.stream(question):\r\n        pure_llm_response += chunk\r\n        message_placeholder_pure_llm.markdown(pure_llm_response + \"\u258c\")\r\n\r\n    message_placeholder_pure_llm.markdown(pure_llm_response)\r\n    st.session_state.messages.append(\r\n        {\r\n            \"role\": \"assistant\",\r\n            \"content\": pure_llm_response,\r\n            \"avatar\": \"\ud83e\udd16\",\r\n        }\r\n    )<\/pre>\n<p>With that you have everything needed to run the Streamlit app that allows the user to:<\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>Upload a JSON file containing tweets<\/li>\n<li>Transform each tweet into a LangChain Document<\/li>\n<li>Store them in Couchbase along with their embedding representation<\/li>\n<li>Manage two different prompts:\n<ul>\n<li>one with a LangChain retriever to add context<\/li>\n<li>and one without<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>If you run the app you should see something like this:<\/p>\n<div id=\"attachment_15715\" style=\"width: 1011px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-15715\" class=\" wp-image-15715\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/05\/816e1097-91d1-43bd-a130-2e91f3010d7c-300x155.png\" alt=\"The full streamlit application example opened in Codespace\" width=\"1001\" height=\"517\" 
srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/816e1097-91d1-43bd-a130-2e91f3010d7c-300x155.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/816e1097-91d1-43bd-a130-2e91f3010d7c-1024x528.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/816e1097-91d1-43bd-a130-2e91f3010d7c-768x396.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/816e1097-91d1-43bd-a130-2e91f3010d7c-1536x792.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/816e1097-91d1-43bd-a130-2e91f3010d7c-2048x1056.png 2048w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/05\/816e1097-91d1-43bd-a130-2e91f3010d7c-1320x681.png 1320w\" sizes=\"auto, (max-width: 1001px) 100vw, 1001px\" \/><p id=\"caption-attachment-15715\" class=\"wp-caption-text\">The full streamlit application example opened in Codespace<\/p><\/div>\n<h2 id=\"conclusion\">Conclusion<\/h2>\n<p>And when you ask &#8220;are Socks are important to developers ?&#8221;, you get those two very interesting answers:<\/p>\n<blockquote><p>Based on the context provided, it seems that socks are important for some developers, as mentioned by Josh Long and Simon Willison in their tweets. They express a desire for socks and seem to value them.<\/p>\n<p>Socks are important for developers as they provide comfort and support while spending long hours sitting at a computer. Additionally, keeping feet warm can help improve focus and productivity.<\/p><\/blockquote>\n<p>Voil\u00e0, we have a bot that knows about a twitter thread, and can answer accordingly. And the fun thing is it did not use just the text Vector in the context, it also used all the metadata stored like the username, because we also indexed all the LangChain document metadata when creating the Index in part 1.<\/p>\n<p>But is this really summarizing the X thread? Not really. 
Because Vector Search will enrich context with closest documents and not the full thread. So there is a bit of data engineering to do. Let&#8217;s talk about this in the next part!<\/p>\n<h3>Resources<\/h3>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>Get the <a href=\"https:\/\/github.com\/ldoguin\/rag-demo-x\">RAG demo code example<\/a> to follow along<\/li>\n<li>Sign up for a <a href=\"https:\/\/cloud.couchbase.com\">free trial<\/a> of Couchbase Capella DBaaS to try it yourself<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>In part 1 we saw how to scrape Twitter, turn tweets in JSON documents, get an embedding representation of that tweet, store everything in Couchbase and how to run a vector search. These are the first steps of a Retrieval [&hellip;]<\/p>\n","protected":false},"author":49,"featured_media":15715,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[10122,1815,2225,1816,9139,9936,9937],"tags":[9963,9964],"ppma_author":[9023],"class_list":["post-15711","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","category-best-practices-and-tutorials","category-cloud","category-couchbase-server","category-python","category-search","category-vector-search","tag-langchain","tag-openai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.3 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Twitter Thread tl;dr With AI? 
Twitter Thread tl;dr With AI? Part 2 - The Couchbase Blog
By Laurent Doguin. Published 2024-05-17. Estimated reading time: 7 minutes.