{"id":18046,"date":"2026-04-28T13:25:21","date_gmt":"2026-04-28T20:25:21","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=18046"},"modified":"2026-04-28T13:33:35","modified_gmt":"2026-04-28T20:33:35","slug":"build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/pt\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/","title":{"rendered":"Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector Index"},"content":{"rendered":"<h2><span style=\"font-weight: 400\">Introduction<\/span><\/h2>\n<p><span style=\"font-weight: 400\">Memory is unreliable. You&#8217;ve read hundreds of articles and saved dozens of PDFs, but when you actually need to find something, you&#8217;re stuck scrolling through folders or guessing the right filename. What if you could ask, in plain English, <\/span><i><span style=\"font-weight: 400\">&#8220;What did that book say about building good habits?&#8221;<\/span><\/i><span style=\"font-weight: 400\"> or <\/span><i><span style=\"font-weight: 400\">&#8220;What does my insurance cover for dental?&#8221;<\/span><\/i><span style=\"font-weight: 400\"> and get a coherent, cited answer in seconds?<\/span><\/p>\n<p><span style=\"font-weight: 400\">That is the problem <\/span><b>Memory Lane<\/b><span style=\"font-weight: 400\"> solves.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Memory Lane is an open source demo application that uses <\/span><b>Couchbase Hyperscale Vector Index (HVI)<\/b><span style=\"font-weight: 400\">, a feature introduced in Couchbase v8.0, together with OpenAI <\/span><span style=\"font-weight: 400\">text-embedding-3-small<\/span><span style=\"font-weight: 400\"> for vector embeddings and a GPT-4o synthesis layer, to build a semantic personal document search assistant. Unlike keyword search, it understands the meaning behind your query. 
Unlike a simple vector search demo, it synthesises retrieved passages into a coherent natural language answer and streams that answer back token by token in real time.<\/span><\/p>\n<p><span style=\"font-weight: 400\">This post walks through how we built it, why we made the technology choices we did, and what patterns from this demo you can carry directly into production systems.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Why This Demo Is Important to Developers<\/span><\/h2>\n<p><span style=\"font-weight: 400\">Vector search has moved beyond academic curiosity to become a production staple in under three years. But most developer-facing demos show only the simplest possible version: embed a string, store a vector, retrieve by cosine similarity. Real applications are messier and that gap is where teams get stuck.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Memory Lane was specifically designed to expose the hard parts:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><b>Ambiguous queries<\/b> <b>\u2013<\/b><span style=\"font-weight: 400\"> &#8220;What does my contract say about notice period?&#8221; needs to retrieve the right passage from potentially hundreds of documents.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Answer synthesis<\/b> <b>\u2013<\/b><span style=\"font-weight: 400\">\u00a0 Returning a ranked list of chunks is not a user experience. The system has to combine retrieved evidence into a coherent, readable response.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Production-grade infrastructure<\/b> <b>\u2013<\/b><span style=\"font-weight: 400\"> Embeddings need to be fast, search needs to be scalable, and the whole stack needs to run without complex tooling.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">Memory Lane addresses all three. 
It is a complete, end-to-end reference implementation \u2013 a FastAPI backend, a React frontend, and streaming SSE \u2013 wired together in a way that is easy to read and straightforward to adapt. Whether you are building a knowledge assistant, a document retrieval tool, or an enterprise memory layer, the patterns transfer directly.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">What the App Does<\/span><\/h2>\n<p><span style=\"font-weight: 400\">Memory Lane is a split-panel web application.<\/span><\/p>\n<p><b>Left panel \u2013 Chat interface:<\/b><span style=\"font-weight: 400\"> The user types a natural-language query. The app embeds the query, searches the Couchbase document collection, and streams a coherent cited answer back in real time. A search trace \u2013 which collection was searched, which model was used, and the relevance scores of the top results \u2013 is shown in a collapsible panel below each response.<\/span><\/p>\n<p><b>Right panel \u2013 Document cards:<\/b><span style=\"font-weight: 400\"> All stored document chunks are displayed as browseable cards in a responsive grid. When a search returns results, the panel switches from browse mode to search mode; only the matching documents are shown, each with a percentage similarity score badge. Clearing the search returns to the full browseable collection.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The app indexes <\/span><b>document chunks<\/b> <b>\u2013<\/b><span style=\"font-weight: 400\"> text extracted from PDFs, books, insurance policies, and travel itineraries, split into overlapping passages by Couchbase AI Services and embedded using OpenAI <\/span><span style=\"font-weight: 400\">text-embedding-3-small<\/span><span style=\"font-weight: 400\"> (1536-dim). 
Chunks live in the <\/span><span style=\"font-weight: 400\">memory_chunks<\/span><span style=\"font-weight: 400\"> collection in Couchbase Capella and are indexed using a Hyperscale Vector Index created automatically by the AI Services workflow.<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-18047\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.16.11-PM.png\" alt=\"\" width=\"1208\" height=\"960\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.16.11-PM.png 1208w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.16.11-PM-300x238.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.16.11-PM-1024x814.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.16.11-PM-768x610.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.16.11-PM-15x12.png 15w\" sizes=\"auto, (max-width: 1208px) 100vw, 1208px\" \/><\/p>\n<h2><span style=\"font-weight: 400\">Current Capabilities<\/span><\/h2>\n<ul>\n<li style=\"font-weight: 400\"><b>Natural-language document search<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">Semantic retrieval from the document collection via a single chat interface<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Single embedding model<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">OpenAI <\/span><span style=\"font-weight: 400\">text-embedding-3-small<\/span><span style=\"font-weight: 400\"> embeds both stored chunks (at ingestion time, via the AI Services workflow) and user queries (at search time, in the backend), delivering a consistent vector space throughout<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Streaming responses<\/b> <b>\u2013 
<\/b><span style=\"font-weight: 400\">Answers stream token by token via Server-Sent Events (SSE), giving users immediate feedback rather than a loading spinner<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Search trace transparency<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">The searched collection, embedding model, and top-k relevance scores are visible to the user in a collapsible panel<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Browse and search modes<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">The right panel handles both paginated browsing of all documents and ranked display of search results, switching seamlessly<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Read-only by design<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">The app is a search interface only; document ingestion is handled externally by Couchbase AI Services, which reads from Amazon S3<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Ships with sample data<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">A dataset of 50 PDFs is included in the repo; you can swap in your own files, upload to S3, and re-run the AI Services workflow to search your own data<\/span><\/li>\n<\/ul>\n<h2><span style=\"font-weight: 400\">How It Works<\/span><\/h2>\n<p><span style=\"font-weight: 400\">When a user types <\/span><i><span style=\"font-weight: 400\">&#8220;What does Atomic Habits say about identity?&#8221;<\/span><\/i><span style=\"font-weight: 400\">, here is what happens step by step.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">1. Convert the query into a vector<\/span><\/h3>\n<p><span style=\"font-weight: 400\">The query is passed to OpenAI <\/span><span style=\"font-weight: 400\">text-embedding-3-small<\/span><span style=\"font-weight: 400\">, which outputs a 1536-dimensional float vector. 
This is the same model used to embed document chunks at ingestion time, so stored vectors and the query vector live in the same space; higher dot product means more similar meaning.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The output is a unit-normalized vector, so DOT product and cosine similarity are mathematically equivalent.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">2. Store embeddings with metadata<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Every memory in the system is a single JSON document in Couchbase. The embedding vector lives inside the same document as the content and source information:<\/span><\/p>\n<pre class=\"lang:default decode:true\">{\r\n  \"element-id\": \"atomic_habits_summary\/paragraph\/1\",\r\n  \"text-to-embed\": \"Document Title: Atomic Habits\\nSection Title: Identity-Based Habits\\nContent: Identity change is the North Star of habit change...\",\r\n  \"meta-data\": {\r\n    \"page-number\": 2,\r\n    \"associated-titles\": [\"Atomic Habits\", \"Identity-Based Habits\"],\r\n    \"type\": \"paragraph\"\r\n  },\r\n  \"xmeta-data\": {\r\n    \"filename\": \"atomic_habits_summary.pdf\",\r\n    \"filetype\": \"pdf\",\r\n    \"languages\": [\"eng\"],\r\n    \"workflow_id\": \"memory-lane-docs\"\r\n  },\r\n  \"text-embedding\": [0.023, -0.104, 0.061, ...]\r\n}\r\n<\/pre>\n<p><span style=\"font-weight: 400\">This schema is generated by Couchbase AI Services and stored directly in Couchbase; the search app queries it without any transformation.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">3. 
Run vector similarity search with Couchbase HVI<\/span><\/h3>\n<p><span style=\"font-weight: 400\">The query vector is submitted to Couchbase via a SQL++ query using <\/span><span style=\"font-weight: 400\">APPROX_VECTOR_DISTANCE<\/span><span style=\"font-weight: 400\">, the function that drives Hyperscale Vector Index lookups:<\/span><\/p>\n<pre class=\"lang:default decode:true\">SELECT META(d).id AS doc_key,\r\n       APPROX_VECTOR_DISTANCE(d.`text-embedding`,$query_vec,\"DOT\",4) AS distance,\r\n       d.`text-to-embed` AS text_content,\r\n       d.`meta-data`     AS metadata,\r\n       d.`xmeta-data`    AS xmetadata\r\nFROM `memory_chunks` AS d\r\nORDER BY APPROX_VECTOR_DISTANCE(d.`text-embedding`,$query_vec,\"DOT\",4)\r\nLIMIT $top_k\r\n<\/pre>\n<p><span style=\"font-weight: 400\">The HVI returns the top-K nearest neighbours by dot product, working through the index rather than scanning every document. Because <\/span><span style=\"font-weight: 400\">APPROX_VECTOR_DISTANCE<\/span><span style=\"font-weight: 400\"> with DOT similarity returns the <\/span><i><span style=\"font-weight: 400\">negated<\/span><\/i><span style=\"font-weight: 400\"> dot product (lower values are more similar), the backend flips the sign to produce a <\/span><span style=\"font-weight: 400\">score<\/span><span style=\"font-weight: 400\"> field where higher means more relevant.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The top results are then passed to GPT-4o, which synthesises a natural-language answer and streams it back to the frontend token by token via SSE.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Architecture Overview<\/span><\/h2>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-18048\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.02-PM.png\" alt=\"\" width=\"806\" height=\"1048\" 
srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.02-PM.png 806w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.02-PM-231x300.png 231w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.02-PM-788x1024.png 788w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.02-PM-768x999.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.02-PM-9x12.png 9w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.02-PM-300x390.png 300w\" sizes=\"auto, (max-width: 806px) 100vw, 806px\" \/><\/p>\n<p><span style=\"font-weight: 400\">Stack<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-18049\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.52-PM.png\" alt=\"\" width=\"1504\" height=\"690\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.52-PM.png 1504w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.52-PM-300x138.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.52-PM-1024x470.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.52-PM-768x352.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.52-PM-18x8.png 18w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.18.52-PM-1320x606.png 1320w\" sizes=\"auto, (max-width: 1504px) 100vw, 1504px\" \/><\/p>\n<h2><span 
style=\"font-weight: 400\">Configuring Couchbase AI Services<\/span><\/h2>\n<p><span style=\"font-weight: 400\">The ingestion side of Memory Lane, which turns raw PDFs into searchable vectors in Couchbase, is handled entirely by <\/span><b>Couchbase AI Services<\/b><span style=\"font-weight: 400\">. It does everything: chunks documents, generates embeddings, upserts JSON documents into Couchbase, and creates the Hyperscale Vector Index automatically. The search app simply queries what the pipeline has already populated.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">Prerequisites<\/span><\/h3>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Amazon S3 bucket with your PDFs (max 10,000 files, max 100 MB per file; supported formats: PDF, DOCX)<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Read-only AWS credentials (Access Key ID + Secret Access Key) for the bucket<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Capella cluster running Couchbase 8.0+ with the Search Service and Eventing Service running on at least one service group<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">A destination bucket in Capella to receive the generated embeddings and HVI<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">An embedding model \u2013 either a deployed Capella model (with its API Key ID and Token) or an OpenAI API key<\/span><\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-18050\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.19.29-PM.png\" alt=\"\" width=\"1178\" height=\"588\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.19.29-PM.png 1178w, 
https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.19.29-PM-300x150.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.19.29-PM-1024x511.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.19.29-PM-768x383.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.19.29-PM-18x9.png 18w\" sizes=\"auto, (max-width: 1178px) 100vw, 1178px\" \/><\/p>\n<h3><span style=\"font-weight: 400\">Creating a workflow<\/span><\/h3>\n<ol>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Go to <\/span><b>AI Services \u2192 Workflows<\/b><span style=\"font-weight: 400\"> and click <\/span><b>Create New Workflow<\/b><span style=\"font-weight: 400\">.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Click <\/span><b>Unstructured Data from External sources<\/b><span style=\"font-weight: 400\">.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">In the <\/span><b>Workflow Name<\/b><span style=\"font-weight: 400\"> field, enter <\/span><span style=\"font-weight: 400\">memory-lane-docs<\/span><span style=\"font-weight: 400\">.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Click <\/span><b>Start Workflow<\/b><span style=\"font-weight: 400\">.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Configure Your Amazon S3 Bucket.<\/b><span style=\"font-weight: 400\"> Click <\/span><b>Add New S3 Bucket Integration<\/b><span style=\"font-weight: 400\">, give it an integration name, then enter your bucket name, region, Access Key ID, and Secret Access Key. 
Select the saved integration to proceed.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Choose HVI timing.<\/b><span style=\"font-weight: 400\"> Select <\/span><b>Create HyperScale Vector Index (now)<\/b><span style=\"font-weight: 400\"> \u2013 the workflow builds and attaches the index automatically when processing completes. No manual index creation is needed.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Under <\/span><b>Destination Cluster<\/b><span style=\"font-weight: 400\">, select your Capella operational cluster.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Set <\/span><b>Destination Bucket<\/b><span style=\"font-weight: 400\">, <\/span><b>Destination Scope<\/b><span style=\"font-weight: 400\">, and <\/span><b>Destination Collection<\/b><span style=\"font-weight: 400\"> to match <\/span><span style=\"font-weight: 400\">CB_BUCKET<\/span><span style=\"font-weight: 400\">, <\/span><span style=\"font-weight: 400\">CB_SCOPE<\/span><span style=\"font-weight: 400\">, and <\/span><span style=\"font-weight: 400\">CB_COLLECTION_DOCS<\/span><span style=\"font-weight: 400\"> in your <\/span><span style=\"font-weight: 400\">.env<\/span><span style=\"font-weight: 400\">.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Configure your data preprocessing settings:<\/b>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Optionally enable <\/span><b>Include Page Range<\/b><span style=\"font-weight: 400\"> to process only a subset of PDF pages.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Optionally choose <\/span><b>Layout Exclusions<\/b><span style=\"font-weight: 400\"> to skip headers, footers, or other page elements.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Enable <\/span><b>OCR<\/b><span style=\"font-weight: 400\"> if your PDFs are scanned.<\/span><\/li>\n<li style=\"font-weight: 400\"><span 
style=\"font-weight: 400\">Choose a <\/span><b>chunking strategy<\/b><span style=\"font-weight: 400\">, maximum chunk size, and chunk overlap.<\/span><\/li>\n<\/ul>\n<\/li>\n<li style=\"font-weight: 400\"><b>Choose your embedding model.<\/b><span style=\"font-weight: 400\"> Select a Capella-hosted model or an OpenAI model.<\/span><\/li>\n<\/ol>\n<p><b>Embedding consistency is critical.<\/b><span style=\"font-weight: 400\"> This app embeds queries with OpenAI <\/span><span style=\"font-weight: 400\">text-embedding-3-small<\/span><span style=\"font-weight: 400\"> (1536-dim). The AI Services workflow must be configured to use the same model. If you choose a different model in the workflow, update <\/span><span style=\"font-weight: 400\">EMBED_MODEL<\/span><span style=\"font-weight: 400\"> and <\/span><span style=\"font-weight: 400\">EMBED_DIMS<\/span><span style=\"font-weight: 400\"> in <\/span><span style=\"font-weight: 400\">backend\/embeddings.py<\/span><span style=\"font-weight: 400\"> to match; a mismatch silently degrades search quality.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Verify your configuration and click <\/span><b>Run Workflow<\/b><span style=\"font-weight: 400\">.<\/span><\/li>\n<\/ol>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-18051\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.05-PM.png\" alt=\"\" width=\"1182\" height=\"590\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.05-PM.png 1182w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.05-PM-300x150.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.05-PM-1024x511.png 1024w, 
https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.05-PM-768x383.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.05-PM-18x9.png 18w\" sizes=\"auto, (max-width: 1182px) 100vw, 1182px\" \/><\/p>\n<p><b>Important:<\/b><span style=\"font-weight: 400\"> Do not delete or modify the metadata scope, collections, or Eventing functions that the workflow creates in Couchbase. Modifying them requires deleting the workflow and creating a new one from scratch.<\/span><\/p>\n<p><b>Re-ingestion:<\/b><span style=\"font-weight: 400\"> AI Services does not watch S3 continuously. To process new or changed files, re-trigger the workflow manually from the Capella UI.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">How the Workflow Handles Different Document Types<\/span><\/h2>\n<p><span style=\"font-weight: 400\">This is where most DIY ingestion pipelines fall apart. A personal document collection or an enterprise content library is never uniform. Memory Lane&#8217;s sample corpus alone spans books, insurance policies, email threads, journal entries, and employee benefit guides. 
Each has a completely different layout, density, and information structure:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-18052\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.41-PM.png\" alt=\"\" width=\"1478\" height=\"654\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.41-PM.png 1478w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.41-PM-300x133.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.41-PM-1024x453.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.41-PM-768x340.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.41-PM-18x8.png 18w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.20.41-PM-1320x584.png 1320w\" sizes=\"auto, (max-width: 1478px) 100vw, 1478px\" \/><\/p>\n<p><span style=\"font-weight: 400\">Couchbase AI Services solves this with <\/span><b>layout-aware document parsing<\/b><span style=\"font-weight: 400\">, not a generic text splitter. Rather than cut the document at a fixed token count, it first analyses the visual and logical structure of each page,\u00a0 identifying the type of every content element before any chunking decision is made.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">Layout detection: what the workflow actually sees<\/span><\/h3>\n<p><span style=\"font-weight: 400\">The workflow runs an unstructured document parser over every PDF page. 
It identifies discrete elements by their layout role:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><b>paragraph<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">body text, a self-contained prose unit<\/span><\/li>\n<li style=\"font-weight: 400\"><b>table<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">structured rows and columns; serialised into text preserving the row\u2013column relationship<\/span><\/li>\n<li style=\"font-weight: 400\"><b>title<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">document or section heading<\/span><\/li>\n<li style=\"font-weight: 400\"><b>list_item<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">individual item within a bulleted or numbered list<\/span><\/li>\n<li style=\"font-weight: 400\"><b>narrative_text<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">longer, flowing prose (e.g., a journal page)<\/span><\/li>\n<li style=\"font-weight: 400\"><b>header \/ footer<\/b> <b>\u2013 <\/b><span style=\"font-weight: 400\">page-level metadata, which you can exclude via Layout Exclusions<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">Each detected element becomes its own chunk with its type recorded in <\/span><span style=\"font-weight: 400\">meta-data.type<\/span><span style=\"font-weight: 400\">. A table never gets split mid-row. A paragraph never gets merged with an unrelated one from the next section. Semantic boundaries are respected automatically, regardless of how complex the page layout is.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">How tables become searchable text<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Tables are the hardest content type for embedding-based search. You cannot embed a two-dimensional grid directly \u2013 an embedding model only sees a string. The workflow serialises each table into a readable text representation that preserves the row-to-column relationships. 
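<\/span><\/p>
<p><span style=\"font-weight: 400\">The exact serialisation format is internal to AI Services, but the idea can be sketched in a few lines: flatten each row into &#8220;header: value&#8221; pairs so that a treatment and its coverage end up in the same embeddable string. The function below is purely illustrative:<\/span><\/p>

```python
# Illustrative sketch only: the real AI Services table serialisation is
# internal. The idea: flatten each row into "header: value" pairs so
# related cells stay together in one embeddable string.
def serialise_table(headers: list[str], rows: list[list[str]]) -> str:
    lines = []
    for row in rows:
        lines.append(", ".join(f"{h}: {c}" for h, c in zip(headers, row)))
    return "\n".join(lines)

text = serialise_table(
    ["Treatment", "Coverage"],
    [["Implants", "50%"], ["Cleanings", "100%"]],
)
# Each line, e.g. "Treatment: Implants, Coverage: 50%", now pairs the
# treatment with its coverage value in a single retrievable string.
```

<p><span style=\"font-weight: 400\">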
A dental coverage table like:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-18053\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.21.25-PM.png\" alt=\"\" width=\"1496\" height=\"314\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.21.25-PM.png 1496w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.21.25-PM-300x63.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.21.25-PM-1024x215.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.21.25-PM-768x161.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.21.25-PM-18x4.png 18w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.21.25-PM-1320x277.png 1320w\" sizes=\"auto, (max-width: 1496px) 100vw, 1496px\" \/><\/p>\n<p><span style=\"font-weight: 400\">\u2026becomes a text chunk that an embedding model can meaningfully represent and that a query like <\/span><i><span style=\"font-weight: 400\">&#8220;What is the dental coverage for implants?&#8221;<\/span><\/i><span style=\"font-weight: 400\"> will reliably retrieve, because the serialised text contains both the treatment name and its coverage value in the same chunk.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">Section context travels with every chunk<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Real documents have hierarchy. 
A paragraph about &#8220;quarterly equity refresh grants&#8221; means something different when it appears in <\/span><i><span style=\"font-weight: 400\">TechFlow Employee Benefits Guide \u2192 Compensation<\/span><\/i><span style=\"font-weight: 400\"> versus a separate unrelated PDF. The workflow captures this by injecting the document title and section title into every chunk&#8217;s <\/span><span style=\"font-weight: 400\">meta-data.associated-titles<\/span><span style=\"font-weight: 400\"> field:<\/span><\/p>\n<pre class=\"lang:default decode:true\">\"meta-data\": {\r\n  \"associated-titles\": [\"TechFlow Employee Benefits Guide\", \"Compensation &amp; Time Off\"],\r\n  \"page-number\": 1,\r\n  \"type\": \"paragraph\"\r\n}\r\n<\/pre>\n<p><span style=\"font-weight: 400\">When that chunk is embedded, the embedding reflects both the content <\/span><i><span style=\"font-weight: 400\">and<\/span><\/i><span style=\"font-weight: 400\"> its context in the document hierarchy. A user asking about equity vesting gets back chunks that are specifically about compensation, not generic text that happens to mention vesting from a different source.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">The chunking strategy choice<\/span><\/h3>\n<p><span style=\"font-weight: 400\">The workflow offers three chunking strategies. The right choice depends on your content:<\/span><\/p>\n<p><b>By element (recommended for mixed corpora):<\/b><span style=\"font-weight: 400\"> Each layout element is its own chunk: one paragraph, one table, one list item. Chunk boundaries follow the document&#8217;s natural semantic structure. This is what Memory Lane uses, and it works well for a mixed corpus because each document type gets appropriate boundaries without any manual configuration.<\/span><\/p>\n<p><b>By page:<\/b><span style=\"font-weight: 400\"> All elements on a page are concatenated into a single chunk. 
This is useful when your documents have very dense, tightly coupled content where cross-element context on the same page matters. It produces fewer, larger chunks. One trade-off is that a query may retrieve a chunk where only one paragraph is relevant, but the whole page was stored together.<\/span><\/p>\n<p><b>By token size with overlap:<\/b><span style=\"font-weight: 400\"> Fixed-size windows, with a configurable overlap between adjacent chunks. The overlap ensures that a sentence or clause that falls near a chunk boundary is still represented in both adjacent chunks. This is useful for very long, uniform prose where page or element boundaries are not meaningful. The overlap value (typically 10\u201320% of chunk size) controls how much context is shared between adjacent chunks; more overlap improves recall at the cost of storing more data.<\/span><\/p>\n<p><span style=\"font-weight: 400\">For the Memory Lane corpus, where documents range from tightly structured insurance tables to freeform journal prose, chunking by element gives the best retrieval quality because each chunk represents one coherent unit of meaning.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">OCR for scanned and image-heavy PDFs<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Not every PDF has selectable text. Annual reports, older scanned contracts, and photocopied forms are stored as images. The workflow&#8217;s <\/span><b>OCR<\/b><span style=\"font-weight: 400\"> option runs optical character recognition over each page image before layout analysis, extracting the text that the visual scan contains. The rest of the pipeline, including layout detection, chunking, and embedding, then runs identically on the extracted text. 
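<\/span><\/p>
<p><span style=\"font-weight: 400\">Returning to the token-size-with-overlap strategy above: the windowing can be sketched as a simple sliding window. This is illustrative only \u2013 the AI Services chunker is internal \u2013 but it shows why boundary text survives in two chunks:<\/span><\/p>

```python
# Illustrative sliding-window chunker for the "by token size with
# overlap" strategy: fixed-size windows, each sharing `overlap` tokens
# with its neighbour. AI Services' real implementation is internal.
def window_chunks(tokens: list[str], size: int, overlap: int) -> list[list[str]]:
    assert 0 <= overlap < size
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break
    return chunks

toks = [f"t{i}" for i in range(10)]
chunks = window_chunks(toks, size=4, overlap=1)
# Boundary tokens t3 and t6 appear in two adjacent chunks, so a clause
# falling near a chunk boundary is still represented on both sides.
```

<p><span style=\"font-weight: 400\">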
You can enable OCR whenever your corpus may include PDFs that were created by scanning physical documents.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">What the app inherits for free<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Because the ingestion pipeline handles all of this, the Memory Lane search app itself contains zero document-type-specific logic. It does not know or care whether a retrieved chunk came from a table, a paragraph, a journal entry, or a scanned form. Every chunk arrives as a <\/span><span style=\"font-weight: 400\">text-to-embed<\/span><span style=\"font-weight: 400\"> string with structured metadata. The app embeds the query, runs the vector search, and synthesises an answer using the same three steps regardless of what the original document looked like.<\/span><\/p>\n<p><span style=\"font-weight: 400\">This is the right way to build a retrieval application. Document intelligence belongs in the ingestion pipeline, not scattered across the search layer.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Why Couchbase<\/span><\/h2>\n<h3><span style=\"font-weight: 400\">1. Vector and metadata live together<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Many vector search architectures require two systems running in parallel: a vector store (such as Pinecone or Weaviate) and a traditional database (such as PostgreSQL) to hold the metadata. You have to keep them in sync, which is a classic dual-write problem that introduces consistency risk and operational overhead.<\/span><\/p>\n<p><span style=\"font-weight: 400\">With Couchbase, the embedding vector, the document content, the source name, and the page metadata all live in the <\/span><b>same JSON document in the same collection<\/b><span style=\"font-weight: 400\">. When the HVI returns a result, the full document is already there with no secondary lookup, no join, no synchronisation logic. 
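<\/span><\/p>\n<p><span style=\"font-weight: 400\">As a minimal sketch, a stored chunk document might look like the following. The <\/span><span style=\"font-weight: 400\">text-to-embed<\/span><span style=\"font-weight: 400\">, <\/span><span style=\"font-weight: 400\">text-embedding<\/span><span style=\"font-weight: 400\">, and <\/span><span style=\"font-weight: 400\">meta-data<\/span><span style=\"font-weight: 400\"> field names follow the conventions shown earlier in this post; the <\/span><span style=\"font-weight: 400\">source<\/span><span style=\"font-weight: 400\"> field name and the truncated vector values are illustrative placeholders:<\/span><\/p>\n<pre class=\"lang:default decode:true\">{\r\n  \"text-to-embed\": \"Compensation &amp; Time Off: Equity refresh grants are reviewed quarterly ...\",\r\n  \"text-embedding\": [0.0132, -0.0271, 0.0094, ...],\r\n  \"source\": \"TechFlow Employee Benefits Guide.pdf\",\r\n  \"meta-data\": {\r\n    \"associated-titles\": [\"TechFlow Employee Benefits Guide\", \"Compensation &amp; Time Off\"],\r\n    \"page-number\": 1,\r\n    \"type\": \"paragraph\"\r\n  }\r\n}\r\n<\/pre>\n<p><span style=\"font-weight: 400\">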
The codebase is simpler, the query is faster, and there is one fewer moving part to break in production.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">2. Hyperscale Vector Index is queried with standard SQL++<\/span><\/h3>\n<p><span style=\"font-weight: 400\">The Couchbase Hyperscale Vector Index is architecturally distinct from older FTS-based vector search approaches. It is a <\/span><b>GSI index<\/b><span style=\"font-weight: 400\">, part of the Global Secondary Index service, queried via the standard Query service using SQL++. This has meaningful practical consequences:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><b>No separate Search service<\/b><span style=\"font-weight: 400\"> to provision, tune, or operate<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Standard SQL++<\/b> <b>\u2013<\/b><span style=\"font-weight: 400\"> The <\/span><span style=\"font-weight: 400\">APPROX_VECTOR_DISTANCE<\/span><span style=\"font-weight: 400\"> function is just another expression; you can add <\/span><span style=\"font-weight: 400\">WHERE<\/span><span style=\"font-weight: 400\"> clauses, <\/span><span style=\"font-weight: 400\">JOIN<\/span><span style=\"font-weight: 400\"> other collections, or combine vector ranking with any other GSI-indexed filter in the same query<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Disk-based storage<\/b> <b>\u2013<\/b><span style=\"font-weight: 400\"> The HVI stores index data on disk rather than in RAM, making billion-document scale viable without proportionally large memory requirements<\/span><\/li>\n<li style=\"font-weight: 400\"><b>One DDL statement to create<\/b> <b>\u2013<\/b><span style=\"font-weight: 400\"> No schema registration, no API call to a separate service, no YAML configuration file<\/span><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">CREATE VECTOR INDEX IF NOT EXISTS `memory_lane_docs_hvi`\r\nON `memory-lane-test`.`live`.`memory_chunks`(`text-embedding` VECTOR)\r\nWITH {\"dimension\": 1536, \"similarity\": \"DOT\", \"description\": \"IVF,SQ8\", \"scan_nprobes\": 4};\r\n<\/pre>\n<p><span style=\"font-weight: 400\">In production, this index is created automatically by the Couchbase AI Services workflow; no manual DDL is needed.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">3. Managed infrastructure reduces operational friction<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Couchbase Capella is a fully managed cloud database. For an application like Memory Lane this means:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><b>TLS by default:<\/b><span style=\"font-weight: 400\"> The connection string uses <\/span><span style=\"font-weight: 400\">couchbases:\/\/<\/span><span style=\"font-weight: 400\"> (with an <\/span><span style=\"font-weight: 400\">s<\/span><span style=\"font-weight: 400\">) and certificate validation is enforced out of the box \u2013 no certificate management required<\/span><\/li>\n<li style=\"font-weight: 400\"><b>No cluster operations:<\/b><span style=\"font-weight: 400\"> No replication topology to configure, no disk management, no backup scheduling<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Scale without application changes:<\/b><span style=\"font-weight: 400\"> If you take this demo to production and data volume grows, Capella scales horizontally; the SQL++ queries and the application code do not change<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">A free Capella trial cluster can be provisioned in under five minutes, which is how quickly you can go from cloning this repository to a running application against a live database.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Index Configuration<\/span><\/h2>\n<p><span style=\"font-weight: 400\">Memory Lane uses a single HVI index on the <span 
style=\"font-weight: 400\">memory_chunks<\/span><span style=\"font-weight: 400\"> collection:<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-18054\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.22.50-PM.png\" alt=\"\" width=\"1504\" height=\"222\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.22.50-PM.png 1504w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.22.50-PM-300x44.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.22.50-PM-1024x151.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.22.50-PM-768x113.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.22.50-PM-18x3.png 18w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Screenshot-2026-04-28-at-1.22.50-PM-1320x195.png 1320w\" sizes=\"auto, (max-width: 1504px) 100vw, 1504px\" \/><\/p>\n<p><b>Why DOT similarity?<\/b> <span style=\"font-weight: 400\">text-embedding-3-small<\/span><span style=\"font-weight: 400\"> produces unit-normalised vectors whose L2 norm is always 1. For unit vectors, dot product and cosine similarity are mathematically identical. DOT is marginally cheaper to compute, so we use it.<\/span><\/p>\n<p><b>Why IVF,SQ8?<\/b><span style=\"font-weight: 400\"> The <\/span><span style=\"font-weight: 400\">description<\/span><span style=\"font-weight: 400\"> field selects the underlying ANN algorithm:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><b>IVF<\/b><span style=\"font-weight: 400\"> (Inverted File Index) partitions the vector space into clusters at index build time. 
At query time, only the nearest clusters are examined rather than every vector, reducing search to a small fraction of the full scan cost.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>SQ8<\/b><span style=\"font-weight: 400\"> applies 8-bit scalar quantisation, compressing each 32-bit float to 8 bits. This reduces the in-memory and on-disk footprint of vector data by approximately 4x with minimal accuracy loss.<\/span><\/li>\n<\/ul>\n<p><b>scan_nprobes: 4<\/b><span style=\"font-weight: 400\"> controls how many IVF clusters are examined per query. A value of 4 gives high recall for small datasets. For a corpus of millions of documents, tune this upward based on your recall\/latency SLA; each additional probe improves recall at the cost of proportionally higher query latency.<\/span><\/p>\n<p><span style=\"font-weight: 400\">In the dev stub path, <\/span><span style=\"font-weight: 400\">ingest_stubs.py<\/span><span style=\"font-weight: 400\"> creates the index automatically. The SQL++ DDL is also provided in <\/span><span style=\"font-weight: 400\">backend\/cb_vector_index.n1ql<\/span><span style=\"font-weight: 400\"> for reference.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">More Than Just a Fun App: Real Business Patterns<\/span><\/h2>\n<p><span style=\"font-weight: 400\">Every architectural choice in Memory Lane maps to a repeatable production pattern.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">Enterprise knowledge base search<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Replace personal PDFs with internal documentation: Confluence exports, SharePoint archives, engineering runbooks. The same single-collection architecture works at enterprise scale. Add per-user scoping or metadata filtering by department, and you have a production internal search system. 
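<\/span><\/p>\n<p><span style=\"font-weight: 400\">As a sketch of what such a filtered semantic query could look like in SQL++ \u2013 the keyspace and <\/span><span style=\"font-weight: 400\">text-embedding<\/span><span style=\"font-weight: 400\"> field follow the index DDL shown earlier, while the <\/span><span style=\"font-weight: 400\">department<\/span><span style=\"font-weight: 400\"> field and the <\/span><span style=\"font-weight: 400\">$qvec<\/span><span style=\"font-weight: 400\"> query-vector parameter are illustrative:<\/span><\/p>\n<pre class=\"lang:default decode:true\">-- Rank chunks by vector similarity, restricted to one department\r\nSELECT m.`text-to-embed`, m.`meta-data`\r\nFROM `memory-lane-test`.`live`.`memory_chunks` AS m\r\nWHERE m.department = \"engineering\"\r\nORDER BY APPROX_VECTOR_DISTANCE(m.`text-embedding`, $qvec)\r\nLIMIT 5;\r\n<\/pre>\n<p><span style=\"font-weight: 400\">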
Companies routinely lose institutional knowledge when documents are scattered across tools; semantic search over a centralised Couchbase collection addresses this directly.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">Customer support automation<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Index product manuals, past support tickets, and knowledge base articles. When a customer submits a ticket, the system retrieves the most relevant documentation passages and synthesises a suggested response, reducing the load on human agents and improving first-contact resolution rates. The same confidence-scored retrieval that surfaces <\/span><i><span style=\"font-weight: 400\">Atomic Habits<\/span><\/i><span style=\"font-weight: 400\"> passages in the demo surfaces the right troubleshooting steps in production.<\/span><\/p>\n<h3><span style=\"font-weight: 400\">Legal and compliance document retrieval<\/span><\/h3>\n<p><span style=\"font-weight: 400\">Legal teams and compliance departments deal with thousands of contracts, regulations, and precedents. Semantic search over embedded document chunks lets an analyst ask <\/span><i><span style=\"font-weight: 400\">&#8220;What does our standard MSA say about liability in case of a data breach?&#8221;<\/span><\/i><span style=\"font-weight: 400\"> and get a cited, passage-level answer in seconds instead of manually searching across hundreds of PDFs. The read-only architecture of Memory Lane is a natural fit for audit-sensitive environments where the search interface should not be able to modify source documents.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Takeaway for Developers<\/span><\/h2>\n<p><span style=\"font-weight: 400\">If you are building an AI-powered search or retrieval system, Memory Lane demonstrates that the hard parts \u2013 streaming responses, accurate semantic retrieval, production-grade operation \u2013 do not require exotic infrastructure. 
They require clear architecture.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The patterns worth carrying from this demo into your next project:<\/span><\/p>\n<p><b>Let the ingestion pipeline normalise your data.<\/b><span style=\"font-weight: 400\"> Couchbase AI Services converts every content type \u2013 paragraphs, tables, and even text extracted via OCR \u2013 into a uniform <\/span><span style=\"font-weight: 400\">text-to-embed<\/span><span style=\"font-weight: 400\"> field. When everything arrives at the embedding step as text, you can use a single model for all content types. Simpler code, consistent vector space, no per-type embedding logic at query time.<\/span><\/p>\n<p><b>Match your ingestion and query-time models exactly.<\/b><span style=\"font-weight: 400\"> The embedding model used by AI Services at ingestion must be the same model your app calls at query time. A mismatch silently puts stored vectors and query vectors in different spaces. Pin the model name explicitly in both the AI Services workflow configuration and your app&#8217;s embeddings module.<\/span><\/p>\n<p><b>Keep vectors and metadata together.<\/b><span style=\"font-weight: 400\"> Couchbase HVI lets you store the embedding vector alongside all document content in one JSON document. Avoid architectures that force a join between a vector store and a separate metadata database; consistency is harder than it looks at scale.<\/span><\/p>\n<p><b>Stream everything.<\/b><span style=\"font-weight: 400\"> Users tolerate a three-second wait better when they see the first words of the answer appear within 200 ms. Server-Sent Events are cheap to implement and dramatically improve perceived responsiveness. 
The ReadableStream pattern used in Memory Lane works in every modern browser without a library.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Final Thoughts<\/span><\/h2>\n<p><span style=\"font-weight: 400\">Memory Lane started from a straightforward question: What does a genuinely complete, end-to-end AI document search demo look like when built on modern Couchbase infrastructure?<\/span><\/p>\n<p><span style=\"font-weight: 400\">The answer turned out to be surprisingly clean. A FastAPI backend with four routes. A single OpenAI embedding model <\/span><span style=\"font-weight: 400\">text-embedding-3-small<\/span><span style=\"font-weight: 400\">. A three-step flow: embed, search, synthesise. A React frontend with two panels. And at the centre a Couchbase Hyperscale Vector Index, storing embedding vectors alongside document content in a single JSON document, queryable with a standard SQL++ expression and a single new function name.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The code is intentionally approachable. Every design choice was made to be readable, not clever. The goal is not to demonstrate the maximum possible complexity, but to give you a starting point with real architecture decisions already made, so you can adapt it to your use case rather than rebuild from first principles.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Vector search and managed cloud databases are no longer research topics. They are the building blocks of the next generation of enterprise software. Memory Lane shows one way to put them together \u2013 clearly, practically, and with enough detail that the next step is cloning the repository rather than reading another white paper.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The full source code is available on GitHub. 
Try it, fork it, and build something better.<\/span><\/p>\n<p><a href=\"https:\/\/github.com\/cb-ankush92\/memory-lane\"><span style=\"font-weight: 400\">https:\/\/github.com\/cb-ankush92\/memory-lane<\/span><\/a><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<p><i><span style=\"font-weight: 400\">Built with Couchbase Capella, Couchbase Hyperscale Vector Index (HVI), FastAPI, React, and OpenAI GPT-4o.<\/span><\/i><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Memory is unreliable. You&#8217;ve read hundreds of articles and saved dozens of PDFs but when you actually need to find something, you&#8217;re stuck scrolling through folders or guessing the right filename. What if you could ask, in plain English, [&hellip;]<\/p>\n","protected":false},"author":85715,"featured_media":18055,"comment_status":"open","ping_status":"open","sticky":true,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[10133],"tags":[],"ppma_author":[10178],"class_list":["post-18046","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-engineering"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.3 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector Index - The Couchbase Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/pt\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/\" \/>\n<meta property=\"og:locale\" content=\"pt_BR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector Index\" 
\/>\n<meta property=\"og:description\" content=\"Introduction Memory is unreliable. You&#8217;ve read hundreds of articles and saved dozens of PDFs but when you actually need to find something, you&#8217;re stuck scrolling through folders or guessing the right filename. What if you could ask, in plain English, [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/pt\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2026-04-28T20:25:21+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-28T20:33:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Build-a-Personal-Memory-Search-with-CBAI.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2400\" \/>\n\t<meta property=\"og:image:height\" content=\"1256\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Ankush Shankar, Principal Product Manager\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Ankush Shankar, Principal Product Manager\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"18 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/\"},\"author\":{\"name\":\"Ankush Shankar, Principal Product Manager\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#\\\/schema\\\/person\\\/787cb9807cf0885d26a66078fa8dc1ff\"},\"headline\":\"Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector Index\",\"datePublished\":\"2026-04-28T20:25:21+00:00\",\"dateModified\":\"2026-04-28T20:33:35+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/\"},\"wordCount\":3589,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2026\\\/04\\\/Build-a-Personal-Memory-Search-with-CBAI.png\",\"articleSection\":[\"Engineering\"],\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/\"
,\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/\",\"name\":\"Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector Index - The Couchbase Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2026\\\/04\\\/Build-a-Personal-Memory-Search-with-CBAI.png\",\"datePublished\":\"2026-04-28T20:25:21+00:00\",\"dateModified\":\"2026-04-28T20:33:35+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/#breadcrumb\"},\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2026\\\/04\\\/Build-a-Personal-Memory-Search-with-CBAI.png\",\"contentUrl\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2026\\\/04\\\/Build-a-Personal-Memory-Search-with-CBAI.png\",\"width\":2400,\"height\":1256},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/build-a-p
ersonal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector Index\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/\",\"name\":\"The Couchbase Blog\",\"description\":\"Couchbase, the NoSQL Database\",\"publisher\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-BR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#organization\",\"name\":\"The Couchbase Blog\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/admin-logo.png\",\"contentUrl\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/admin-logo.png\",\"width\":218,\"height\":34,\"caption\":\"The Couchbase Blog\"},\"image\":{\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/#\\\/schema\\\/person\\\/787cb9807cf0885d26a66078fa8dc1ff\",\"name\":\"Ankush Shankar, Principal Product 
Manager\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2026\\\/04\\\/ankush-shankar.jpeg94ed44909c15a1c9ef7d79650f9763f1\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2026\\\/04\\\/ankush-shankar.jpeg\",\"contentUrl\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/1\\\/2026\\\/04\\\/ankush-shankar.jpeg\",\"caption\":\"Ankush Shankar, Principal Product Manager\"},\"jobTitle\":\"Principal Product Manager\",\"url\":\"https:\\\/\\\/www.couchbase.com\\\/blog\\\/pt\\\/author\\\/ankushshankar\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector Index - The Couchbase Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.couchbase.com\/blog\/pt\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/","og_locale":"pt_BR","og_type":"article","og_title":"Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector Index","og_description":"Introduction Memory is unreliable. You&#8217;ve read hundreds of articles and saved dozens of PDFs but when you actually need to find something, you&#8217;re stuck scrolling through folders or guessing the right filename. 
What if you could ask, in plain English, [&hellip;]","og_url":"https:\/\/www.couchbase.com\/blog\/pt\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/","og_site_name":"The Couchbase Blog","article_published_time":"2026-04-28T20:25:21+00:00","article_modified_time":"2026-04-28T20:33:35+00:00","og_image":[{"width":2400,"height":1256,"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Build-a-Personal-Memory-Search-with-CBAI.png","type":"image\/png"}],"author":"Ankush Shankar, Principal Product Manager","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Ankush Shankar, Principal Product Manager","Est. reading time":"18 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/#article","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/"},"author":{"name":"Ankush Shankar, Principal Product Manager","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/787cb9807cf0885d26a66078fa8dc1ff"},"headline":"Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector 
Index","datePublished":"2026-04-28T20:25:21+00:00","dateModified":"2026-04-28T20:33:35+00:00","mainEntityOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/"},"wordCount":3589,"commentCount":0,"publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Build-a-Personal-Memory-Search-with-CBAI.png","articleSection":["Engineering"],"inLanguage":"pt-BR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/","url":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/","name":"Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector Index - The Couchbase 
Blog","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/#primaryimage"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Build-a-Personal-Memory-Search-with-CBAI.png","datePublished":"2026-04-28T20:25:21+00:00","dateModified":"2026-04-28T20:33:35+00:00","breadcrumb":{"@id":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/#breadcrumb"},"inLanguage":"pt-BR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/"]}]},{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/#primaryimage","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Build-a-Personal-Memory-Search-with-CBAI.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/Build-a-Personal-Memory-Search-with-CBAI.png","width":2400,"height":1256},{"@type":"BreadcrumbList","@id":"https:\/\/www.couchbase.com\/blog\/build-a-personal-memory-search-with-couchbase-ai-services-and-hyperscale-vector-index\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.couchbase.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Build a Personal Memory Search With Couchbase AI Services and Hyperscale Vector Index"}]},{"@type":"WebSite","@id":"https:\/\/www.couchbase.com\/blog\/#website","url":"https:\/\/www.couchbase.com\/blog\/","name":"Blog do 
Couchbase","description":"Couchbase, o banco de dados NoSQL","publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-BR"},{"@type":"Organization","@id":"https:\/\/www.couchbase.com\/blog\/#organization","name":"Blog do Couchbase","url":"https:\/\/www.couchbase.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","width":218,"height":34,"caption":"The Couchbase Blog"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/787cb9807cf0885d26a66078fa8dc1ff","name":"Ankush Shankar, Principal Product Manager","image":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/ankush-shankar.jpeg94ed44909c15a1c9ef7d79650f9763f1","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/ankush-shankar.jpeg","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/ankush-shankar.jpeg","caption":"Ankush Shankar, Principal Product Manager"},"jobTitle":"Principal Product Manager","url":"https:\/\/www.couchbase.com\/blog\/pt\/author\/ankushshankar\/"}]}},"acf":[],"authors":[{"term_id":10178,"user_id":85715,"is_guest":0,"slug":"ankushshankar","display_name":"Ankush Shankar, Principal Product 
Manager","avatar_url":{"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/ankush-shankar.jpeg","url2x":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2026\/04\/ankush-shankar.jpeg"},"0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts\/18046","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/users\/85715"}],"replies":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/comments?post=18046"}],"version-history":[{"count":0,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/posts\/18046\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/media\/18055"}],"wp:attachment":[{"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/media?parent=18046"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/categories?post=18046"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/tags?post=18046"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/pt\/wp-json\/wp\/v2\/ppma_author?post=18046"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}