{"id":16838,"date":"2025-02-05T10:53:31","date_gmt":"2025-02-05T18:53:31","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=16838"},"modified":"2025-06-13T16:36:20","modified_gmt":"2025-06-13T23:36:20","slug":"groq-llm-integration-couchbase-vector-search","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/","title":{"rendered":"Integrate Groq&#8217;s Fast LLM Inferencing With Couchbase Vector Search"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">With so many LLMs coming out, a lot of companies are focusing on enhancing the inference speeds for large language models with specialized hardware and optimizations to be able to scale the inference capabilities of these models. One such company making huge strides in this space is <a href=\"https:\/\/groq.com\">Groq<\/a>.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In this blog post we will explore Groq and how you integrate Groq&#8217;s fast LLM inferencing capabilities with Couchbase Vector Search to create fast and efficient RAG applications. We will also compare the performance of different LLM solutions like OpenAI, Gemini and how they compare with Groq&#8217;s inference speeds.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">What is Groq?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Groq, Inc. is an American technology company specializing in artificial intelligence particularly known for its development of the Language Processing Unit (LPU), an application-specific integrated circuit (ASIC) designed to accelerate AI inference tasks. It is specifically designed to enhance <a href=\"https:\/\/www.couchbase.com\/blog\/large-language-models-explained\/\">Large Language Models (LLMs)<\/a> with ultra-low latency inference capabilities. 
Groq Cloud APIs enable developers to integrate state-of-the-art LLMs like Llama3 and Mixtral 8x7B into their applications.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">What does this mean for developers? It means that Groq APIs can be seamlessly integrated into applications that demand real-time AI processing and fast inference.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">How to Get Started with Groq APIs<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">To tap into the power of Groq APIs, the first step is to generate an API key. This is a straightforward process that begins with signing up on the Groq Cloud console.<\/span><\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image3.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16839\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image3-1024x572.png\" alt=\"\" width=\"900\" height=\"503\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image3-1024x572.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image3-300x168.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image3-768x429.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image3-1536x858.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image3-1320x738.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image3.png 1999w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><br style=\"font-weight: 400;\" \/><span style=\"font-weight: 400;\">Once you&#8217;re signed up, navigate to the <strong>API Keys<\/strong>\u00a0section. 
Here, you&#8217;ll have the option to <a href=\"https:\/\/console.groq.com\/keys\">create a new API key<\/a>.<\/span><\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16840\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image2-1024x413.png\" alt=\"\" width=\"900\" height=\"363\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image2-1024x413.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image2-300x121.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image2-768x310.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image2-1536x619.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image2-1320x532.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image2.png 1999w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<p><span style=\"font-weight: 400;\">The API Key will allow you to integrate state-of-the-art large language models like <a href=\"https:\/\/www.llama.com\">Llama3<\/a> and <a href=\"https:\/\/huggingface.co\/docs\/transformers\/model_doc\/mixtral\">Mixtral<\/a> into your applications. 
Next, we will integrate the Groq chat model with <a href=\"https:\/\/www.langchain.com\">LangChain<\/a> in our application.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Using Groq as the LLM<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">You can leverage the Groq API as one of the LLM providers in LangChain:<\/span><\/p>\n<pre class=\"nums:false lang:python decode:true\">from langchain_groq import ChatGroq\r\n\r\nllm = ChatGroq(\r\n    temperature=0.3,\r\n    model_name=\"mixtral-8x7b-32768\",\r\n)\r\n<\/pre>\n<p><span style=\"font-weight: 400;\">When you instantiate the <em>ChatGroq<\/em> object, you can pass the temperature and the model name. You can take a look at the <a href=\"https:\/\/console.groq.com\/docs\/models\">currently supported models in Groq<\/a>.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Building a RAG application with Couchbase and Groq<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">The goal is to create a chat application that allows users to upload PDFs and chat with them. We&#8217;ll be using the Couchbase Python SDK and Streamlit to facilitate PDF uploads into the Couchbase vector store. Additionally, we&#8217;ll explore how to use RAG for context-based question-answering from PDFs, all powered by Groq.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">You can follow the steps mentioned in <a href=\"https:\/\/developer.couchbase.com\/tutorial-gemini-langchain-pdf-chat\">this tutorial<\/a> on how to set up a Streamlit RAG application powered by Couchbase Vector Search. That tutorial uses Gemini as the LLM; we will replace the Gemini implementation with Groq.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Comparing Groq\u2019s performance<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">In this blog we also compare the performance of different LLM providers. 
For this, we built a dropdown that lets the user select which LLM provider to use for the RAG application. In this example we use Gemini, OpenAI, Ollama and Groq as the different LLM providers. There is a <a href=\"https:\/\/python.langchain.com\/v0.1\/docs\/modules\/model_io\/llms\/\">large list of LLM providers supported by LangChain<\/a>.<\/span><\/p>\n<pre class=\"nums:false lang:python decode:true\">st.sidebar.subheader(\"Select LLM\")\r\nllm_choice = st.sidebar.selectbox(\r\n    \"Select LLM\",\r\n    [\r\n        \"OpenAI\",\r\n        \"Groq\",\r\n        \"Gemini\",\r\n        \"Ollama\",\r\n    ],\r\n)\r\n\r\nif llm_choice == \"Gemini\":\r\n    check_environment_variable(\"GOOGLE_API_KEY\")\r\n    llm = GoogleGenerativeAI(\r\n        temperature=0.3,\r\n        model=\"models\/gemini-1.5-pro\",\r\n    )\r\n    llm_without_rag = GoogleGenerativeAI(\r\n        temperature=0,\r\n        model=\"models\/gemini-1.5-pro\",\r\n    )\r\nelif llm_choice == \"Groq\":\r\n    check_environment_variable(\"GROQ_API_KEY\")\r\n    llm = ChatGroq(\r\n        temperature=0.3,\r\n        model_name=\"mixtral-8x7b-32768\",\r\n    )\r\n    llm_without_rag = ChatGroq(\r\n        temperature=0,\r\n        model_name=\"mixtral-8x7b-32768\",\r\n    )\r\nelif llm_choice == \"OpenAI\":\r\n    check_environment_variable(\"OPENAI_API_KEY\")\r\n    llm = ChatOpenAI(\r\n        temperature=0.3,\r\n        model=\"gpt-3.5-turbo\",\r\n    )\r\n    llm_without_rag = ChatOpenAI(\r\n        temperature=0,\r\n        model=\"gpt-3.5-turbo\",\r\n    )\r\nelif llm_choice == \"Ollama\":\r\n    llm = Ollama(\r\n        temperature=0.3,\r\n        model=ollama_model,\r\n        base_url=ollama_url,\r\n    )\r\n    llm_without_rag = Ollama(\r\n        temperature=0,\r\n        model=ollama_model,\r\n        base_url=ollama_url,\r\n    )<\/pre>\n<p><span style=\"font-weight: 400;\">To highlight Groq\u2019s quick inference speed, we also measure and record the time taken to generate each response. The results are displayed in a sidebar table showing the model used and the time taken for each query, comparing LLM providers such as OpenAI, Ollama, Gemini and Groq. Across these comparisons, Groq consistently delivered the quickest inference times. 
This performance benchmark allows users to see the efficiency of various models in real time.<\/span><\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16841\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image1-1024x587.png\" alt=\"\" width=\"900\" height=\"516\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image1-1024x587.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image1-300x172.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image1-768x440.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image1-1536x881.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image1-1320x757.png 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image1.png 1999w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<pre class=\"nums:false lang:python decode:true\">if question := st.chat_input(\"Ask a question based on the PDF\"):\r\n    # Start timing\r\n    start_time = time.time()\r\n\r\n    # Display user message in chat message container\r\n    st.chat_message(\"user\").markdown(question)\r\n\r\n    # Add user message to chat history\r\n    st.session_state.messages.append(\r\n        {\"role\": \"user\", \"content\": question, \"avatar\": \"\ud83d\udc64\"}\r\n    )\r\n\r\n    # Add placeholder for streaming the response\r\n    with st.chat_message(\"assistant\", avatar=couchbase_logo):\r\n        message_placeholder = st.empty()\r\n\r\n    # Stream the response from the RAG chain\r\n    rag_response = \"\"\r\n    for chunk in chain.stream(question):\r\n        rag_response += chunk\r\n        message_placeholder.markdown(rag_response + \"\u258c\")\r\n\r\n    message_placeholder.markdown(rag_response)\r\n    st.session_state.messages.append(\r\n        {\r\n            \"role\": \"assistant\",\r\n            \"content\": rag_response,\r\n            \"avatar\": couchbase_logo,\r\n        }\r\n    )\r\n\r\n    # Stream the response from the pure LLM (no RAG context)\r\n    with st.chat_message(\"ai\", avatar=\"\ud83e\udd16\"):\r\n        message_placeholder_pure_llm = st.empty()\r\n\r\n    pure_llm_response = \"\"\r\n\r\n    for chunk in chain_without_rag.stream(question):\r\n        pure_llm_response += chunk\r\n        message_placeholder_pure_llm.markdown(pure_llm_response + \"\u258c\")\r\n\r\n    message_placeholder_pure_llm.markdown(pure_llm_response)\r\n    st.session_state.messages.append(\r\n        {\r\n            \"role\": \"assistant\",\r\n            \"content\": pure_llm_response,\r\n            \"avatar\": \"\ud83e\udd16\",\r\n        }\r\n    )\r\n\r\n    # End timing and calculate duration\r\n    end_time = time.time()\r\n    duration = end_time - start_time\r\n\r\n    # Display the time taken\r\n    st.write(f\"Time taken to generate complete output: {duration:.2f} seconds\")\r\n\r\n    st.session_state.query_results.append({\r\n        \"model\": llm_choice,\r\n        \"time\": duration,\r\n    })\r\n\r\nst.sidebar.subheader(\"Query Results\")\r\n\r\ntable_header = \"| Model | Time (s) |\\n| --- | --- |\\n\"\r\n\r\n# Create the table rows\r\ntable_rows = \"\"\r\n\r\nfor result in st.session_state.query_results:\r\n    table_rows += f\"| {result['model']} | {result['time']:.2f} |\\n\"\r\n\r\ntable = table_header + table_rows\r\n\r\nst.sidebar.markdown(table, unsafe_allow_html=True)\r\n\r\nif st.sidebar.button(\"Clear Results\"):\r\n    st.session_state.query_results = []\r\n    st.experimental_rerun()<\/pre>\n<p><span style=\"font-weight: 400;\">As the results show, Groq\u2019s inference speed is the quickest of the LLM providers compared.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Conclusion<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">LangChain is a great open-source framework that offers a wide choice of vector stores and LLMs for building AI-powered applications. Groq is one of the quickest LLM inference engines available, and it pairs well with AI-powered applications that need quick, real-time inference. 
With Groq\u2019s fast inference and Couchbase Vector Search, you can build production-ready, scalable RAG applications.<\/span><\/p>\n<ul>\n<li><a href=\"https:\/\/cloud.couchbase.com?ref=blog\">Start using Capella today<\/a>, for free<\/li>\n<li>Learn more about <a href=\"https:\/\/www.couchbase.com\/products\/vector-search\/\">vector search<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>With so many LLMs coming out, a lot of companies are focusing on enhancing the inference speeds for large language models with specialized hardware and optimizations to be able to scale the inference capabilities of these models. One such company [&hellip;]<\/p>\n","protected":false},"author":85559,"featured_media":16843,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[10122,1815,2225,1816,9973,9937],"tags":[9974,10088,9963,9870],"ppma_author":[10069],"class_list":["post-16838","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","category-best-practices-and-tutorials","category-cloud","category-couchbase-server","category-generative-ai-genai","category-vector-search","tag-genai","tag-groq","tag-langchain","tag-llms"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.8 (Yoast SEO v25.8) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Integrate Groq&#039;s Fast LLM Inferencing With Couchbase Vector Search<\/title>\n<meta name=\"description\" content=\"Integrate Groq\u2019s fast LLM inference with Couchbase Vector Search for efficient RAG apps. 
Compare its speed with OpenAI, Gemini, and Ollama.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Integrate Groq&#039;s Fast LLM Inferencing With Couchbase Vector Search\" \/>\n<meta property=\"og:description\" content=\"Integrate Groq\u2019s fast LLM inference with Couchbase Vector Search for efficient RAG apps. Compare its speed with OpenAI, Gemini, and Ollama.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2025-02-05T18:53:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-13T23:36:20+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-groq-couchbase-integration.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2400\" \/>\n\t<meta property=\"og:image:height\" content=\"1256\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Shivay Lamba, Developer Evangelist\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Shivay Lamba, Developer Evangelist\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/\"},\"author\":{\"name\":\"Shivay Lamba, Developer Evangelist\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/377d9b772c90439916236da79c02c418\"},\"headline\":\"Integrate Groq&#8217;s Fast LLM Inferencing With Couchbase Vector Search\",\"datePublished\":\"2025-02-05T18:53:31+00:00\",\"dateModified\":\"2025-06-13T23:36:20+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/\"},\"wordCount\":718,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-groq-couchbase-integration.png\",\"keywords\":[\"GenAI\",\"groq\",\"langchain\",\"LLMs\"],\"articleSection\":[\"Artificial Intelligence (AI)\",\"Best Practices and Tutorials\",\"Couchbase Capella\",\"Couchbase Server\",\"Generative AI (GenAI)\",\"Vector Search\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/\",\"name\":\"Integrate Groq's Fast LLM Inferencing With Couchbase Vector 
Search\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-groq-couchbase-integration.png\",\"datePublished\":\"2025-02-05T18:53:31+00:00\",\"dateModified\":\"2025-06-13T23:36:20+00:00\",\"description\":\"Integrate Groq\u2019s fast LLM inference with Couchbase Vector Search for efficient RAG apps. Compare its speed with OpenAI, Gemini, and Ollama.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/#primaryimage\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-groq-couchbase-integration.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-groq-couchbase-integration.png\",\"width\":2400,\"height\":1256},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/groq-llm-integration-couchbase-vector-search\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.couchbase.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Integrate Groq&#8217;s Fast LLM Inferencing With Couchbase Vector Search\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"name\":\"The 
Couchbase Blog\",\"description\":\"Couchbase, the NoSQL Database\",\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\",\"name\":\"The Couchbase Blog\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"width\":218,\"height\":34,\"caption\":\"The Couchbase Blog\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/377d9b772c90439916236da79c02c418\",\"name\":\"Shivay Lamba, Developer Evangelist\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/7b5e7cd8007bd40de81c1ef6a9e0266f\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/01\/shivay-lambda-couchbase.jpeg\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/01\/shivay-lambda-couchbase.jpeg\",\"caption\":\"Shivay Lamba, Developer Evangelist\"},\"url\":\"https:\/\/www.couchbase.com\/blog\/author\/shivaylambda\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->"}}