{"id":17797,"date":"2026-01-07T11:05:12","date_gmt":"2026-01-07T19:05:12","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=17797"},"modified":"2026-01-07T11:05:12","modified_gmt":"2026-01-07T19:05:12","slug":"codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/pt\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/","title":{"rendered":"CodeLab: Building a RAG Application With Couchbase Capella Model Services and LangChain"},"content":{"rendered":"<p>In this tutorial, you will learn how to build a <strong>gera\u00e7\u00e3o aumentada por recupera\u00e7\u00e3o (RAG)<\/strong> aplicativo usando <strong>Couchbase AI Services<\/strong> to store data, generate embedding using embedding models, and LLM inference. We will create a RAG system that: <\/p>\n<ol>\n<li>Ingests news articles from the <strong>BBC News dataset<\/strong>.<\/li>\n<li>Generates vector embeddings using the <strong>NVIDIA NeMo Retriever<\/strong> model via Capella Model Services.<\/li>\n<li>Stores and indexes these vectors in <strong>Couchbase Capella<\/strong>.<\/li>\n<li>Performs semantic search to retrieve relevant context.<\/li>\n<li>Generates answers using the <strong>Mistral-7B<\/strong> LLM hosted on Capella.<\/li>\n<\/ol>\n<p>You can find the notebook source code for this CodeLab <a href=\"https:\/\/github.com\/couchbase-examples\/vector-search-cookbook\/blob\/main\/capella-model-services\/langchain\/search_based\/RAG_with_Capella_Model_Services_and_LangChain.ipynb\">aqui<\/a>.<\/p>\n<h2>Why Couchbase AI Services?<\/h2>\n<p>Couchbase AI Services provide:<\/p>\n<ul class=\"list-block\">\n<li><strong>LLM inference and embeddings API:<\/strong> Access popular LLMs (i.e., Llama 3) and embedding models directly through Capella, without managing external API keys or infrastructure.<\/li>\n<li><strong>Unified platform:<\/strong> Leverage the 
database, vectorization, search, and models in one place.<\/li>\n<li><strong>Integrated vector search:<\/strong> Perform semantic search directly on your JSON data with millisecond latency.<\/li>\n<\/ul>\n<h2>Setting Up Couchbase AI Services<\/h2>\n<h3>Create a Cluster in Capella<\/h3>\n<ol>\n<li>Log in to <a href=\"https:\/\/cloud.couchbase.com\/\">Couchbase Capella<\/a>.<\/li>\n<li>Create a new cluster or use an existing one. Note that the cluster needs to run the latest version of Couchbase Server 8.0 and include the Data, Query, Index, and Eventing services.<\/li>\n<li>Create a bucket.<\/li>\n<li>Create a scope and collection for the data.<\/li>\n<\/ol>\n<h3>Enable AI Services<\/h3>\n<ol>\n<li>Navigate to the AI Services section in the Capella UI.<\/li>\n<li>Deploy the embedding and LLM models.<\/li>\n<ul class=\"list-block\" style=\"padding:5px !important\">\n<li>You need to launch an embedding model and an LLM for this demo in the same region as the Capella cluster where the data will be stored.<\/li>\n<li>For this demo to work well, you need to deploy an LLM with tool-calling capabilities, such as <code>mistralai\/mistral-7b-instruct-v0.3<\/code>. 
For embeddings, you can choose a model such as <code>nvidia\/llama-3.2-nv-embedqa-1b-v2<\/code>.<\/li>\n<\/ul>\n<li>Write down the endpoint URL and generate API keys.<\/li>\n<\/ol>\n<p>For more details on launching AI models, you can read the <a href=\"https:\/\/docs.couchbase.com\/ai\/build\/model-service\/model-service.html\">official documentation<\/a>.<\/p>\n<h2>Prerequisites<\/h2>\n<p>Before we begin, ensure you have Python 3.10+ installed.<\/p>\n<h2>Step 1: Install Dependencies<\/h2>\n<p>We need the Couchbase SDK, LangChain integrations, and the datasets library.<\/p>\n<pre class=\"lang:default decode:true\">\r\n%pip install --quiet datasets==4.4.1 langchain-couchbase==1.0.0 langchain-openai==1.1.0\r\n<\/pre>\n<h2>Step 2: Configuration &amp; Connection<\/h2>\n<p>We&#8217;ll start by connecting to our Couchbase cluster. We also need to configure the endpoints for Capella Model Services.<\/p>\n<p style=\"margin-left:50px;text-align:justify;padding-bottom:0px !important\"><strong>Note:<\/strong> Capella Model Services are compatible with the OpenAI API format, so we can use the standard <code style=\"color:#78af89\">langchain-openai<\/code> library by pointing it to our Capella endpoint.<\/p>\n<pre class=\"lang:default decode:true\">\r\nimport getpass\r\nfrom couchbase.auth import PasswordAuthenticator\r\nfrom couchbase.cluster import Cluster\r\nfrom couchbase.options import ClusterOptions\r\nfrom datetime import timedelta\r\n\r\n# Configuration\r\nCB_CONNECTION_STRING = getpass.getpass(\"Couchbase Connection String: \")\r\nCB_USERNAME = input(\"Database Username: \")\r\nCB_PASSWORD = getpass.getpass(\"Database Password: \")\r\nCB_BUCKET_NAME = input(\"Bucket Name: \")\r\nSCOPE_NAME = \"rag\"\r\nCOLLECTION_NAME = \"data\"\r\nINDEX_NAME = \"vs-index\"\r\n\r\n# Model Services Config\r\nCAPELLA_MODEL_SERVICES_ENDPOINT = getpass.getpass(\"Capella Model Services Endpoint: \")\r\nLLM_MODEL_NAME = 
\"mistralai\/mistral-7b-instruct-v0.3\"\r\nLLM_API_KEY = getpass.getpass(\"LLM API Key: \")\r\nEMBEDDING_MODEL_NAME = \"nvidia\/llama-3.2-nv-embedqa-1b-v2\"\r\nEMBEDDING_API_KEY = getpass.getpass(\"Embedding API Key: \")\r\n\r\n# Connect to Cluster\r\nauth = PasswordAuthenticator(CB_USERNAME, CB_PASSWORD)\r\ncluster = Cluster(CB_CONNECTION_STRING, ClusterOptions(auth))\r\ncluster.wait_until_ready(timedelta(seconds=5))\r\nprint(\"Successfully connected to Couchbase\")\r\n<\/pre>\n<h2> Step 3: Set Up the Database Structure<\/h2>\n<p>We need to ensure our bucket, scope, and collection exist to store the news data.<\/p>\n<pre class=\"lang:default decode:true\">\r\ndef setup_collection(cluster, bucket_name, scope_name, collection_name):\r\n    bucket = cluster.bucket(bucket_name)\r\n    manager = bucket.collections()\r\n    \r\n    # Create Scope\r\n    if scope_name not in [s.name for s in manager.get_all_scopes()]:\r\n        manager.create_scope(scope_name)\r\n        \r\n    # Create Collection\r\n    bucket_manager = bucket.collections()\r\n    scopes = bucket_manager.get_all_scopes()\r\n    # ... (logic to create collection if missing) ...\r\n    \r\n    # Create Primary Index\r\n    cluster.query(f\"CREATE PRIMARY INDEX IF NOT EXISTS ON `{bucket_name}`.`{scope_name}`.`{collection_name}`\").execute()\r\n\r\nsetup_collection(cluster, CB_BUCKET_NAME, SCOPE_NAME, COLLECTION_NAME)\r\n<\/pre>\n<h2>Step 4: Loading Couchbase Vector Search Index<\/h2>\n<p>Semantic search requires an efficient way to retrieve relevant documents based on a user&#8217;s query. This is where Couchbase Vector Search, formerly known as Full-Text Search (FTS) service, comes into play. In this step, we load the Vector Search Index definition from a JSON file, which specifies how the index should be structured. 
This includes the fields to be indexed, the dimensions of the vectors, and other parameters that determine how the search engine processes queries based on vector similarity.<\/p>\n<pre class=\"lang:default decode:true\">\r\nimport json\r\nimport logging\r\n\r\nfrom couchbase.management.search import SearchIndex\r\n\r\n# Local setup: point this at your index definition file.\r\nindex_definition_path = \"capella_index.json\"\r\n\r\n# If you are running in Google Colab, comment out the line above and\r\n# upload the index definition file instead:\r\n# from google.colab import files\r\n# print(\"Upload your index definition file\")\r\n# uploaded = files.upload()\r\n# index_definition_path = list(uploaded.keys())[0]\r\n\r\ntry:\r\n    with open(index_definition_path, \"r\") as file:\r\n        index_definition = json.load(file)\r\n\r\n    # Update the search index definition with user inputs\r\n    index_definition['name'] = INDEX_NAME\r\n    index_definition['sourceName'] = CB_BUCKET_NAME\r\n    # Re-key the type mapping to the configured scope and collection\r\n    old_type_key = next(iter(index_definition['params']['mapping']['types'].keys()))\r\n    type_obj = index_definition['params']['mapping']['types'].pop(old_type_key)\r\n    index_definition['params']['mapping']['types'][f\"{SCOPE_NAME}.{COLLECTION_NAME}\"] = type_obj\r\n\r\nexcept Exception as e:\r\n    raise ValueError(\r\n        f\"Error loading index definition from {index_definition_path}: {str(e)}\"\r\n    )\r\n\r\n# Create the Vector Search index via the SDK\r\ntry:\r\n    scope_index_manager = (\r\n        cluster.bucket(CB_BUCKET_NAME).scope(SCOPE_NAME).search_indexes()\r\n    )\r\n\r\n    # Check if the index already exists\r\n    existing_indexes = scope_index_manager.get_all_indexes()\r\n    index_name = index_definition[\"name\"]\r\n\r\n    if index_name in [index.name for index in existing_indexes]:\r\n        print(f\"Index '{index_name}' found\")\r\n    else:\r\n        print(f\"Creating new index 
'{index_name}'...\")\r\n\r\n    # Create a SearchIndex object from the JSON definition\r\n    search_index = SearchIndex.from_json(index_definition)\r\n\r\n    # Upsert the index (create if missing, update if present)\r\n    scope_index_manager.upsert_index(search_index)\r\n    print(f\"Index '{index_name}' successfully created\/updated.\")\r\n\r\nexcept Exception as e:\r\n    logging.error(f\"Error creating or updating index: {e}\")\r\n<\/pre>\n<h2>Step 5: Initialize AI Models<\/h2>\n<p>Here is the magic: we initialize the embedding model using <code style=\"color:#78af89\">OpenAIEmbeddings<\/code> but point it at Capella. Couchbase AI Services expose OpenAI-compatible endpoints, so standard OpenAI client libraries work unchanged. For embeddings, we use the LangChain OpenAI package because it plugs directly into the LangChain Couchbase integration.<\/p>\n<pre class=\"lang:default decode:true\">\r\nfrom langchain_openai import OpenAIEmbeddings\r\n\r\nembeddings = OpenAIEmbeddings(\r\n    openai_api_key=EMBEDDING_API_KEY,\r\n    openai_api_base=CAPELLA_MODEL_SERVICES_ENDPOINT, # Capella endpoint\r\n    model=EMBEDDING_MODEL_NAME,\r\n    check_embedding_ctx_length=False,\r\n    tiktoken_enabled=False\r\n)\r\n<\/pre>\n<h2>Step 6: Ingest Data<\/h2>\n<p>We load the BBC News dataset and ingest it into Couchbase. 
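<\/p>\n<p>Under the hood, retrieval ranks stored documents by the similarity of their embedding vectors to the query&#8217;s embedding, typically via cosine similarity or dot product. As a toy, self-contained illustration (the three-dimensional vectors below are invented for readability; real embeddings from the deployed model have far more dimensions):<\/p>\n

```python
import math

def cosine(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented toy "embeddings" for three documents
docs = {
    "football match report": [0.9, 0.1, 0.0],
    "stock market update": [0.1, 0.9, 0.1],
    "weather forecast": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # pretend this embeds a question about sport

# Rank documents by similarity to the query vector
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # prints "football match report"
```

\n<p>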
The <code style=\"color:#78af89\">CouchbaseSearchVectorStore<\/code> automatically handles generating embeddings using our defined model and storing them.<\/p>\n<pre class=\"lang:default decode:true\">\r\nfrom datasets import load_dataset\r\nfrom langchain_core.documents import Document\r\nfrom langchain_couchbase.vectorstores import CouchbaseSearchVectorStore\r\n\r\n# Load Data\r\ndataset = load_dataset('RealTimeData\/bbc_news_alltime', '2024-12', split=\"train\")\r\nunique_articles = list(set(dataset[\"content\"]))[:100] # Limit for demo\r\n\r\n# Initialize Vector Store\r\nvector_store = CouchbaseSearchVectorStore(\r\n    cluster=cluster,\r\n    bucket_name=CB_BUCKET_NAME,\r\n    scope_name=SCOPE_NAME,\r\n    collection_name=COLLECTION_NAME,\r\n    embedding=embeddings,\r\n    index_name=INDEX_NAME,\r\n)\r\n\r\n# Ingest\r\ndocuments = [Document(page_content=article) for article in unique_articles]\r\nvector_store.add_documents(documents)\r\nprint(f\"Ingested {len(documents)} documents\")\r\n<\/pre>\n<h2>Step 7: Build the RAG Chain<\/h2>\n<p>Now we create the RAG pipeline. We initialize the LLM (again pointing to Capella) and connect it to our vector store retriever.<\/p>\n<pre class=\"lang:default decode:true\">\r\nfrom langchain_openai import ChatOpenAI\r\nfrom langchain_core.prompts import ChatPromptTemplate\r\nfrom langchain_core.runnables import RunnablePassthrough\r\nfrom langchain_core.output_parsers import StrOutputParser\r\n\r\n# 1. Initialize LLM\r\nllm = ChatOpenAI(\r\n    openai_api_base=CAPELLA_MODEL_SERVICES_ENDPOINT,\r\n    openai_api_key=LLM_API_KEY,\r\n    model=LLM_MODEL_NAME,\r\n    temperature=0\r\n)\r\n\r\n# 2. Define Prompt\r\ntemplate = \"\"\"Answer the question based only on the following context:\r\n{context}\r\n\r\nQuestion: {question}\r\n\"\"\"\r\nprompt = ChatPromptTemplate.from_template(template)\r\n\r\n# 3. 
Create Chain\r\nrag_chain = (\r\n    {\"context\": vector_store.as_retriever(), \"question\": RunnablePassthrough()}\r\n    | prompt\r\n    | llm\r\n    | StrOutputParser()\r\n)\r\n<\/pre>\n<h2>Step 8: Run Queries<\/h2>\n<p>Let&#8217;s test our RAG pipeline.<\/p>\n<pre class=\"lang:default decode:true\">\r\nquery = \"What was Pep Guardiola's reaction to Manchester City's recent form?\"\r\nresponse = rag_chain.invoke(query)\r\n\r\nprint(f\"Question: {query}\")\r\nprint(f\"Answer: {response}\")\r\n<\/pre>\n<p><strong>Example Output:<\/strong><\/p>\n<p style=\"margin-left:50px\"><strong>Answer:<\/strong> Pep Guardiola has expressed concern and frustration about Manchester City&#8217;s recent form. He stated, &#8220;I am not good enough. I am the boss&#8230; I have to find solutions.&#8221; He acknowledged the team&#8217;s defensive issues and lack of confidence.<\/p>\n<h2>Conclusion<\/h2>\n<p>In this tutorial, you learned how to: <\/p>\n<ol>\n<li>Vectorize data using Couchbase.<\/li>\n<li>Use Couchbase AI Services for embeddings and LLM inference.<\/li>\n<li>Implement RAG with Couchbase Vector Search.<\/li>\n<\/ol>\n<p>Couchbase\u2019s unified database platform makes it straightforward to build AI applications that generate high-quality, contextually aware answers.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, you will learn how to build a retrieval-augmented generation (RAG) application using Couchbase AI Services to store data, generate embeddings with an embedding model, and run LLM inference. 
We will create a RAG system that: Ingests news articles from [&hellip;]<\/p>","protected":false},"author":49,"featured_media":17799,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[9973],"tags":[],"ppma_author":[9023],"class_list":["post-17797","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-generative-ai-genai"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.5 (Yoast SEO v26.5) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>CodeLab: Building a RAG Application With Couchbase Capella Model Services and LangChain - The Couchbase Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/pt\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/\" \/>\n<meta property=\"og:locale\" content=\"pt_BR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"CodeLab: Building a RAG Application With Couchbase Capella Model Services and LangChain\" \/>\n<meta property=\"og:description\" content=\"In this tutorial, you will learn how to build a retrieval-augmented generation (RAG) application using Couchbase AI Services to store data, generate embedding using embedding models, and LLM inference. 
We will create a RAG system that: Ingests news articles from [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/pt\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2026-01-07T19:05:12+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2400\" \/>\n\t<meta property=\"og:image:height\" content=\"1256\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Laurent Doguin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@ldoguin\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"unstructured.io\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/\"},\"author\":{\"name\":\"Laurent Doguin\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e\"},\"headline\":\"CodeLab: Building a RAG Application With Couchbase Capella Model Services and LangChain\",\"datePublished\":\"2026-01-07T19:05:12+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/\"},\"wordCount\":690,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain.png\",\"articleSection\":[\"Generative AI 
(GenAI)\"],\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/\",\"name\":\"CodeLab: Building a RAG Application With Couchbase Capella Model Services and LangChain - The Couchbase Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain.png\",\"datePublished\":\"2026-01-07T19:05:12+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#breadcrumb\"},\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#primaryimage\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Build
ing-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain.png\",\"width\":2400,\"height\":1256},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.couchbase.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"CodeLab: Building a RAG Application With Couchbase Capella Model Services and LangChain\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"name\":\"The Couchbase Blog\",\"description\":\"Couchbase, the NoSQL Database\",\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-BR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\",\"name\":\"The Couchbase Blog\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"width\":218,\"height\":34,\"caption\":\"The Couchbase 
Blog\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e\",\"name\":\"Laurent Doguin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/12929ce99397769f362b7a90d6b85071\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g\",\"caption\":\"Laurent Doguin\"},\"description\":\"Laurent is a nerdy metal head who lives in Paris. He mostly writes code in Java and structured text in AsciiDoc, and often talks about data, reactive programming and other buzzwordy stuff. He is also a former Developer Advocate for Clever Cloud and Nuxeo where he devoted his time and expertise to helping those communities grow bigger and stronger. He now runs Developer Relations at Couchbase.\",\"sameAs\":[\"https:\/\/x.com\/ldoguin\"],\"honorificPrefix\":\"Mr\",\"birthDate\":\"1985-06-07\",\"gender\":\"male\",\"award\":[\"Devoxx Champion\",\"Couchbase Legend\"],\"knowsAbout\":[\"Java\"],\"knowsLanguage\":[\"English\",\"French\"],\"jobTitle\":\"Director Developer Relation & Strategy\",\"worksFor\":\"Couchbase\",\"url\":\"https:\/\/www.couchbase.com\/blog\/pt\/author\/laurent-doguin\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"CodeLab: Building a RAG Application With Couchbase Capella Model Services and LangChain - The Couchbase Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.couchbase.com\/blog\/pt\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/","og_locale":"pt_BR","og_type":"article","og_title":"CodeLab: Building a RAG Application With Couchbase Capella Model Services and LangChain","og_description":"In this tutorial, you will learn how to build a retrieval-augmented generation (RAG) application using Couchbase AI Services to store data, generate embedding using embedding models, and LLM inference. We will create a RAG system that: Ingests news articles from [&hellip;]","og_url":"https:\/\/www.couchbase.com\/blog\/pt\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/","og_site_name":"The Couchbase Blog","article_published_time":"2026-01-07T19:05:12+00:00","og_image":[{"width":2400,"height":1256,"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain.png","type":"image\/png"}],"author":"Laurent Doguin","twitter_card":"summary_large_image","twitter_creator":"@ldoguin","twitter_misc":{"Written by":"unstructured.io","Est. 
reading time":"4 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#article","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/"},"author":{"name":"Laurent Doguin","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e"},"headline":"CodeLab: Building a RAG Application With Couchbase Capella Model Services and LangChain","datePublished":"2026-01-07T19:05:12+00:00","mainEntityOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/"},"wordCount":690,"commentCount":0,"publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain.png","articleSection":["Generative AI (GenAI)"],"inLanguage":"pt-BR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/","url":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/","name":"CodeLab: Building a RAG Application With Couchbase Capella Model Services and LangChain - The Couchbase 
Blog","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#primaryimage"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain.png","datePublished":"2026-01-07T19:05:12+00:00","breadcrumb":{"@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#breadcrumb"},"inLanguage":"pt-BR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/"]}]},{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#primaryimage","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/01\/CodeLab_-Building-a-RAG-Application-With-Couchbase-Capella-Model-Services-and-LangChain.png","width":2400,"height":1256},{"@type":"BreadcrumbList","@id":"https:\/\/www.couchbase.com\/blog\/codelab-building-a-rag-application-with-couchbase-capella-model-services-and-langchain\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.couchbase.com\/blog\/"},{"@type":"ListItem","position":2,"name":"CodeLab: Building a RAG Application With Couchbase Capella Model Services and 
LangChain"}]},{"@type":"WebSite","@id":"https:\/\/www.couchbase.com\/blog\/#website","url":"https:\/\/www.couchbase.com\/blog\/","name":"Blog do Couchbase","description":"Couchbase, o banco de dados NoSQL","publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-BR"},{"@type":"Organization","@id":"https:\/\/www.couchbase.com\/blog\/#organization","name":"Blog do Couchbase","url":"https:\/\/www.couchbase.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","width":218,"height":34,"caption":"The Couchbase Blog"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/c0aa9b8f1ed51b7a9e2f7cb755994a5e","name":"Laurent Doguin","image":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/12929ce99397769f362b7a90d6b85071","url":"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g","caption":"Laurent Doguin"},"description":"Laurent \u00e9 um nerd metaleiro que mora em Paris. Em sua maior parte, ele escreve c\u00f3digo em Java e texto estruturado em AsciiDoc, e frequentemente fala sobre dados, programa\u00e7\u00e3o reativa e outras coisas que est\u00e3o na moda. 
Ele tamb\u00e9m foi Developer Advocate do Clever Cloud e do Nuxeo, onde dedicou seu tempo e experi\u00eancia para ajudar essas comunidades a crescerem e se fortalecerem. Atualmente, ele dirige as Rela\u00e7\u00f5es com Desenvolvedores na Couchbase.","sameAs":["https:\/\/x.com\/ldoguin"],"honorificPrefix":"Mr","birthDate":"1985-06-07","gender":"male","award":["Devoxx Champion","Couchbase Legend"],"knowsAbout":["Java"],"knowsLanguage":["English","French"],"jobTitle":"Director Developer Relation & Strategy","worksFor":"Couchbase","url":"https:\/\/www.couchbase.com\/blog\/pt\/author\/laurent-doguin\/"}]}},"authors":[{"term_id":9023,"user_id":49,"is_guest":0,"slug":"laurent-doguin","display_name":"Laurent Doguin","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/b8c466908092b46634af916b6921f30187a051e4367ded7ac9b1a3f2c5692fd2?s=96&d=mm&r=g","author_category":"","last_name":"Doguin","first_name":"Laurent","job_title":"","user_url":"","description":"Laurent \u00e9 um nerd metaleiro que mora em Paris. Em sua maior parte, ele escreve c\u00f3digo em Java e texto estruturado em AsciiDoc, e frequentemente fala sobre dados, programa\u00e7\u00e3o reativa e outras coisas que est\u00e3o na moda. Ele tamb\u00e9m foi Developer Advocate do Clever Cloud e do Nuxeo, onde dedicou seu tempo e experi\u00eancia para ajudar essas comunidades a crescerem e se fortalecerem. 
Atualmente, ele dirige as Rela\u00e7\u00f5es com Desenvolvedores na Couchbase."}],"_links":{"self":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts\/17797","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/users\/49"}],"replies":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/comments?post=17797"}],"version-history":[{"count":0,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts\/17797\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/media\/17799"}],"wp:attachment":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/media?parent=17797"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/categories?post=17797"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/tags?post=17797"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/ppma_author?post=17797"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}