{"id":16068,"date":"2024-08-01T06:51:02","date_gmt":"2024-08-01T13:51:02","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=16068"},"modified":"2025-06-13T20:19:05","modified_gmt":"2025-06-14T03:19:05","slug":"faster-llm-apps-semantic-cache-langchain-couchbase","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/es\/faster-llm-apps-semantic-cache-langchain-couchbase\/","title":{"rendered":"Cree aplicaciones LLM m\u00e1s r\u00e1pidas y baratas con Couchbase y LangChain"},"content":{"rendered":"<h2><span style=\"font-weight: 400;\">Nueva cach\u00e9 est\u00e1ndar, sem\u00e1ntica y conversacional con integraci\u00f3n de LangChain<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">En el panorama en r\u00e1pida evoluci\u00f3n del desarrollo de aplicaciones de IA, la integraci\u00f3n de grandes modelos ling\u00fc\u00edsticos (LLM) con fuentes de datos empresariales se ha convertido en un aspecto fundamental. La capacidad de aprovechar la potencia de los LLM para generar respuestas de alta calidad y contextualmente relevantes est\u00e1 transformando diversos sectores. Sin embargo, los equipos se enfrentan a importantes retos a la hora de ofrecer respuestas fiables a gran velocidad y reducir costes, especialmente a medida que aumenta el volumen de peticiones de los usuarios. Adem\u00e1s, como la mayor\u00eda de los LLM tienen una memoria limitada, existe la oportunidad de almacenar las conversaciones de los LLM durante un periodo de tiempo prolongado y evitar que los usuarios tengan que volver a empezar desde cero cuando se agote la memoria de un LLM.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Couchbase, l\u00edder en almacenamiento en cach\u00e9 altamente escalable y de baja latencia (<\/span><a href=\"https:\/\/www.couchbase.com\/blog\/es\/customers\/linkedin\/\"><span style=\"font-weight: 400;\">Lea el art\u00edculo de LinkedIn<\/span><\/a><span style=\"font-weight: 400;\">), aborda estos retos con soluciones innovadoras. 
New enhancements to our vector search and caching offerings, along with a dedicated LangChain package for developers, make it easy to elevate the performance and reliability of generative AI applications.<\/span><\/p>\n<h2>Vector Search in Couchbase and Retrieval-Augmented Generation (RAG)<\/h2>\n<p><span style=\"font-weight: 400;\">Couchbase vector search lets users find similar objects without needing an exact match. It is an advanced capability for efficiently searching and retrieving data based on vector embeddings, which are mathematical representations of objects across a large number of dimensions. For example, searching a product catalog for \"brown\" and \"leather\" shoes would return those results as well as \"suede\" shoes, in colors such as \"mahogany, chestnut, coffee, bronze, auburn, and cocoa.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Retrieval-augmented generation (RAG) combines vector search, retrieving information from the Couchbase database related to the user's prompt, and delivers both the prompt and the relevant related information <\/span><span style=\"font-weight: 400;\">to a generative model to produce better-informed, context-appropriate LLM responses. This is usually faster and less expensive than training a custom model. <\/span><span style=\"font-weight: 400;\">Couchbase's <\/span><a href=\"https:\/\/www.couchbase.com\/blog\/es\/couchbase-capella-advantages-speed-functionality-tco-over-redis\/\"><span style=\"font-weight: 400;\">highly scalable in-memory architecture<\/span><\/a><span style=\"font-weight: 400;\"> provides fast, efficient access when searching for relevant vector embedding data. 
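<\/span><\/p>
<p><span style=\"font-weight: 400;\">The retrieval step just described - embed the documents and the prompt, rank by similarity, and hand the best matches to the model alongside the prompt - can be sketched without any external services. The word-count embedding below is a toy assumption for illustration only, a stand-in for a real embedding model and for Couchbase's vector index:<\/span><\/p>

```python
# Toy sketch of the retrieval step in RAG: embed documents and the
# query, rank by cosine similarity, and prepend the top matches to the
# prompt sent to the LLM. The word-count 'embedding' is only a stand-in
# for a real embedding model and a Couchbase vector index.
import math
from collections import Counter

def embed(text):
    # Stand-in embedding: lowercase word counts, punctuation stripped.
    return Counter(text.lower().replace(',', ' ').replace('.', ' ').split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

catalog = [
    'Brown leather shoes, sizes 8 to 12, in mahogany and chestnut.',
    'Suede loafers in cocoa and bronze tones.',
    'Waterproof hiking boots with rubber soles.',
]
context = retrieve('brown leather shoes size 10', catalog)
prompt = 'Context: ' + ' | '.join(context) + ' Question: brown leather shoes size 10'
```

<p><span style=\"font-weight: 400;\">In production, embed() would call a real embedding model, and the ranking would be served by a Couchbase vector index rather than an in-memory loop.<\/span><\/p>
<p><span style=\"font-weight: 400;\">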
To boost the performance and efficiency of a RAG application, developers can use semantic and conversational caching capabilities.<\/span><\/p>\n<h2>Semantic Cache<\/h2>\n<p><span style=\"font-weight: 400;\">Semantic caching is a sophisticated caching technique that uses vector embeddings to understand the context and intent of queries. Unlike traditional caching methods that rely on exact matches, semantic caching leverages the meaning and relevance of the data. This means that similar questions, which would otherwise receive the same answer from an LLM, do not need to make additional requests to the LLM. Continuing the example above, a user searching \"I'm looking for size 10 brown leather shoes\" would get the same results as another user asking \"I want to buy brown leather shoes in size 10.\"<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-16069\" style=\"border: solid 1px black;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/07\/image1-1024x551.jpg\" alt=\"Couchbase Semantic Cache\" width=\"900\" height=\"484\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1-1024x551.jpg 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1-300x161.jpg 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1-768x413.jpg 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1-1536x826.jpg 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1-1320x710.jpg 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg 1708w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" 
\/><\/p>\n<p><span style=\"font-weight: 400;\">Benefits of semantic caching, especially at higher volumes, include:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\"><b>Greater efficiency -<\/b><span style=\"font-weight: 400;\"> Faster retrieval times thanks to understanding the context of the query<\/span><\/li>\n<li style=\"font-weight: 400;\"><b>Lower costs -<\/b><span style=\"font-weight: 400;\"> Fewer calls to the LLM save time and money<\/span><\/li>\n<\/ul>\n<h2>Conversational Cache<\/h2>\n<p><span style=\"font-weight: 400;\">While a semantic cache reduces the number of LLM calls across a wide range of users, a conversational cache improves the overall user experience by extending the lifespan of the conversational knowledge from interactions between the user and the LLM. By leveraging the history of questions and answers, the LLM can provide better context as new requests are submitted.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"> <img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-16070\" style=\"border: solid 1px black;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2024\/07\/image2-1024x581.jpg\" alt=\"Couchbase conversational cache\" width=\"900\" height=\"511\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2-1024x581.jpg 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2-300x170.jpg 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2-768x436.jpg 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2-1536x872.jpg 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2-1320x749.jpg 1320w, 
https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image2.jpg 1720w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/span><\/p>\n<p><span style=\"font-weight: 400;\">In addition, a conversational cache can be used to help apply reasoning in AI agent workflows. A user might ask, \"How well will this item work with the products I bought previously?\" First, this requires resolving the reference \"this item,\" followed by reasoning about how to determine how well it will work with past purchases.<\/span><\/p>\n<h3>Dedicated LangChain-Couchbase Packages<\/h3>\n<p><span style=\"font-weight: 400;\">Couchbase recently introduced LangChain modules designed for Python developers. This package simplifies the integration of Couchbase's advanced capabilities into generative AI applications through LangChain, making it easier for developers to implement powerful features such as vector search and semantic caching.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The LangChain-Couchbase package seamlessly integrates Couchbase's vector search, semantic cache, and conversational cache capabilities into generative AI workflows. This integration lets developers build smarter, more context-aware applications with minimal effort.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By providing a dedicated package, Couchbase ensures that developers can easily access and implement advanced features without wrestling with complex configurations. 
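<\/span><\/p>
<p><span style=\"font-weight: 400;\">To make the semantic cache behavior concrete, here is a minimal, dependency-free sketch of the idea: a lookup succeeds when a new prompt's embedding is close enough to an already-cached prompt, so near-duplicate questions skip the LLM call entirely. The toy class and its word-count embedding below are illustrative assumptions, not the package's actual API:<\/span><\/p>

```python
# Toy semantic cache: a lookup hits when the new prompt's embedding is
# sufficiently similar to a cached prompt, not only on an exact match.
# The word-count 'embedding' is a stand-in for a real embedding model.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToySemanticCache:
    def __init__(self, threshold=0.6):
        self.entries = []          # list of (prompt_embedding, response)
        self.threshold = threshold

    def lookup(self, prompt):
        # Return the cached response for the most similar prompt, if any.
        q = embed(prompt)
        scored = [(cosine(q, e), r) for e, r in self.entries]
        if scored:
            best_score, best_response = max(scored)
            if best_score >= self.threshold:
                return best_response   # cache hit: no LLM call needed
        return None

    def store(self, prompt, response):
        self.entries.append((embed(prompt), response))

cache = ToySemanticCache()
cache.store('I am looking for size 10 brown leather shoes', 'Here are three options in stock.')
hit = cache.lookup('I want to buy brown leather shoes in size 10')   # semantically similar
miss = cache.lookup('What is your return policy')                    # unrelated
```

<p><span style=\"font-weight: 400;\">In the LangChain integration, the semantic cache class handles embedding, storage, and similarity matching against Couchbase for you; the similarity threshold shown here is the main tuning knob in either case.<\/span><\/p>
<p><span style=\"font-weight: 400;\">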
The package is designed to be developer-friendly, enabling fast, efficient integration.<\/span><\/p>\n<h3>Key Features<\/h3>\n<p><span style=\"font-weight: 400;\">The LangChain-Couchbase package offers several key features, including:<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/providers\/couchbase\/\"><b>Vector search<\/b><\/a> <b>-<\/b><span style=\"font-weight: 400;\"> Efficient data retrieval based on vector embeddings<\/span><\/li>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/llm_caching\/#couchbase-cache\"><b>Standard cache<\/b><\/a><span style=\"font-weight: 400;\"> - For faster exact matches<\/span><\/li>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/llm_caching\/#couchbase-semantic-cache\"><b>Semantic cache<\/b><\/a> <b>-<\/b><span style=\"font-weight: 400;\"> Context-aware caching to improve response relevance<\/span><\/li>\n<li style=\"font-weight: 400;\"><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/memory\/couchbase_chat_message_history\/\"><b>Conversational cache<\/b><\/a> <span style=\"font-weight: 400;\">- Conversation context management to improve user interactions<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>Use Cases and Examples<\/h2>\n<p><span style=\"font-weight: 400;\">Couchbase's new enhancements can be applied in a variety of scenarios, such as:<\/span><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\"><b>E-commerce chatbots -<\/b><span style=\"font-weight: 400;\"> Delivering personalized shopping recommendations 
based on user preferences<\/span><\/li>\n<li style=\"font-weight: 400;\"><b>Customer support -<\/b><span style=\"font-weight: 400;\"> Accurate, context-aware responses to customer inquiries<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Code Snippets and Tutorials<\/h3>\n<p><span style=\"font-weight: 400;\">Developers can find code snippets and tutorials for implementing semantic caching and the LangChain-Couchbase package on the <\/span><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/llm_caching\/#couchbase-semantic-cache\"><span style=\"font-weight: 400;\">LangChain website<\/span><\/a><span style=\"font-weight: 400;\">. There are also vector search code examples in Couchbase's <\/span><a href=\"https:\/\/github.com\/couchbase-examples\/\"><span style=\"font-weight: 400;\">GitHub repository<\/span><\/a><span style=\"font-weight: 400;\">. These resources provide guidance to help developers get started quickly.<\/span><\/p>\n<h3>Benefits<\/h3>\n<p><span style=\"font-weight: 400;\">Couchbase's enhancements to vector search and caching offerings for LLM-based applications provide numerous benefits, including greater efficiency, relevance, and personalization of responses. These capabilities are designed to address the challenges of building reliable, scalable, and cost-effective generative AI applications.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Couchbase is committed to continuous innovation, ensuring that our platform remains at the forefront of AI application development. 
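<\/span><\/p>
<p><span style=\"font-weight: 400;\">As a complement to those resources, the conversational cache idea itself can be sketched in a few lines: persist each session's turns so they can be replayed to the LLM as context, letting it resolve references such as 'this item' from earlier in the conversation. The class below is an illustrative, dependency-free toy, not the package's API:<\/span><\/p>

```python
# Toy conversational cache: keep the running message history per
# session so earlier turns can be replayed to the LLM as context.
class ToyConversationalCache:
    def __init__(self):
        self.sessions = {}         # session_id -> list of (role, text)

    def add(self, session_id, role, text):
        self.sessions.setdefault(session_id, []).append((role, text))

    def history(self, session_id, last_n=None):
        # Return the stored turns, optionally only the most recent ones.
        turns = self.sessions.get(session_id, [])
        return turns[-last_n:] if last_n else turns

cache = ToyConversationalCache()
cache.add('session-42', 'human', 'I bought a blue rain jacket last month.')
cache.add('session-42', 'ai', 'Noted: one blue rain jacket purchase.')
cache.add('session-42', 'human', 'How well will this item work with my past purchases?')
# Replaying the stored history lets the LLM resolve 'this item'
# against earlier turns instead of starting from scratch.
context = cache.history('session-42')
```

<p><span style=\"font-weight: 400;\">In the LangChain integration, the chat message history class plays this role against a Couchbase collection, so the history survives process restarts and can outlive any single model context window.<\/span><\/p>
<p><span style=\"font-weight: 400;\">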
Future enhancements will further expand Couchbase's capabilities, enabling developers to build even more advanced, intelligent applications.<\/span><\/p>\n<h3>Additional Resources<\/h3>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li><span style=\"font-weight: 400;\">Blog: <\/span><a href=\"https:\/\/www.couchbase.com\/blog\/es\/an-overview-of-retrieval-augmented-generation\/\"><span style=\"font-weight: 400;\">An overview of RAG<\/span><\/a><\/li>\n<li><span style=\"font-weight: 400;\">Docs: <\/span><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/providers\/couchbase\/\"><span style=\"font-weight: 400;\">Install the LangChain-Couchbase integration<\/span><\/a><\/li>\n<li><span style=\"font-weight: 400;\">Docs: <\/span><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/vectorstores\/couchbase\/\"><span style=\"font-weight: 400;\">Couchbase as a vector store with LangChain<\/span><\/a><\/li>\n<li><span style=\"font-weight: 400;\">Video: <\/span><a href=\"https:\/\/www.youtube.com\/watch?v=sYy0ob2GqUo\"><span style=\"font-weight: 400;\">Vector and hybrid search<\/span><\/a><\/li>\n<li><span style=\"font-weight: 400;\">Video: <\/span><a href=\"https:\/\/www.youtube.com\/watch?v=_iveSnEikMQ&amp;t=1s\"><span style=\"font-weight: 400;\">Vector search for mobile applications<\/span><\/a><\/li>\n<li><span style=\"font-weight: 400;\">Docs: <\/span><a href=\"https:\/\/docs.couchbase.com\/cloud\/vector-search\/vector-search.html\"><span style=\"font-weight: 400;\">Vector search in Capella DBaaS<\/span><\/a><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Models Supported via <a href=\"https:\/\/python.langchain.com\/v0.2\/api_reference\/couchbase\/index.html\">LangChain and Couchbase<\/a><\/h3>\n<table>\n<tbody>\n<tr>\n<td><a 
href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/ai21\">AI21<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/azureopenai\">AzureOpenAI<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/bge_huggingface\">BGE en Cara de abrazo<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/aleph_alpha\">Aleph Alfa<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/baichuan\">Incrustaci\u00f3n de textos Baichuan<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/bookend\">AI Bookend<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/anyscale\">Anyscale<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/baidu_qianfan_endpoint\">Baidu Qianfan<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/clarifai\">Clarifai<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/ascend\">asciende<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/bedrock\">Bedrock<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/cloudflare_workersai\">Trabajadores de Cloudflare AI<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/awadb\">AwaDB<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/bge_huggingface\">BGE en Cara de abrazo<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/cohere\">Cohere<\/a><\/td>\n<\/tr>\n<tr>\n<td><a 
href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/google_generative_ai\">Google Generative AI Embeddings<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/google_vertex_ai_palm\">Google Vertex AI PaLM<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/gpt4all\">GPT4All<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/jina\">Jina<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/johnsnowlabs_embedding\">Laboratorios John Snow<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/laser\">L\u00c1SER<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/llamacpp\">Llama.cpp<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/llamafile\">llamafile<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/localai\">LocalAI<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/mini_max\">MiniMax<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/mistralai\">MistralAI<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/nlp_cloud\">Nube PNL<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/oci_generative_ai\">Infraestructura en la nube de Oracle Generative AI<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/ollama\">Ollama<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/openai\">OpenAI<\/a><\/td>\n<\/tr>\n<tr>\n<td><a 
href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/optimum_intel\">Incrustaci\u00f3n de documentos mediante incrustadores optimizados y cuantificados<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/oracleai\">Oracle AI B\u00fasqueda vectorial: Generar incrustaciones<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/ovhcloud\">OVHcloud<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/pinecone\">Incrustaciones de pi\u00f1as<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/premai\">PremAI<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/sagemaker-endpoint\">SageMaker<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/sambanova\">SambaNova<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/sentence_transformers\">Sentencia Transformers en Cara de Abrazo<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/tensorflowhub\">Centro TensorFlow<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/together\">Juntos IA<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/upstage\">Arriba<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/xinference\">Inferencia de Xorbits (Xinferencia)<\/a><\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/volcengine\">Motor Volc<\/a><\/td>\n<td><a href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/voyageai\">Viaje AI<\/a><\/td>\n<td><a 
href=\"https:\/\/python.langchain.com\/v0.2\/docs\/integrations\/text_embedding\/yandex\">YandexGPT<\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>","protected":false},"excerpt":{"rendered":"<p>New Standard, Semantic and Conversational Cache With LangChain Integration In the rapidly evolving landscape of AI application development, integrating large language models (LLMs) with enterprise data sources has become a critical focus. The ability to harness the power of LLMs [&hellip;]<\/p>","protected":false},"author":77912,"featured_media":16069,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[1814,10122,2225,1816,9973,9417,9937],"tags":[9963],"ppma_author":[9311],"class_list":["post-16068","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-application-design","category-artificial-intelligence-ai","category-cloud","category-couchbase-server","category-generative-ai-genai","category-performance","category-vector-search","tag-langchain"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Build Faster and Cheaper LLM Apps With Couchbase and LangChain - The Couchbase Blog<\/title>\n<meta name=\"description\" content=\"The LangChain-Couchbase package integrates Couchbase&#039;s vector search, semantic cache, conversational cache for generative AI workflows.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/es\/faster-llm-apps-semantic-cache-langchain-couchbase\/\" \/>\n<meta property=\"og:locale\" content=\"es_MX\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Build Faster and Cheaper LLM Apps With 
Couchbase and LangChain\" \/>\n<meta property=\"og:description\" content=\"The LangChain-Couchbase package integrates Couchbase&#039;s vector search, semantic cache, conversational cache for generative AI workflows.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/es\/faster-llm-apps-semantic-cache-langchain-couchbase\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2024-08-01T13:51:02+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-14T03:19:05+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1708\" \/>\n\t<meta property=\"og:image:height\" content=\"919\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Tim Rottach, Director of Product Line Marketing\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Tim Rottach, Director of Product Line Marketing\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/\"},\"author\":{\"name\":\"Tim Rottach, Director of Product Line Marketing\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/02596c1f54a5dd8d2094d919487485cc\"},\"headline\":\"Build Faster and Cheaper LLM Apps With Couchbase and LangChain\",\"datePublished\":\"2024-08-01T13:51:02+00:00\",\"dateModified\":\"2025-06-14T03:19:05+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/\"},\"wordCount\":1050,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg\",\"keywords\":[\"langchain\"],\"articleSection\":[\"Application Design\",\"Artificial Intelligence (AI)\",\"Couchbase Capella\",\"Couchbase Server\",\"Generative AI (GenAI)\",\"High Performance\",\"Vector Search\"],\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/\",\"name\":\"Build Faster and Cheaper LLM Apps With Couchbase and LangChain - The Couchbase 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg\",\"datePublished\":\"2024-08-01T13:51:02+00:00\",\"dateModified\":\"2025-06-14T03:19:05+00:00\",\"description\":\"The LangChain-Couchbase package integrates Couchbase's vector search, semantic cache, conversational cache for generative AI workflows.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#primaryimage\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/07\/image1.jpg\",\"width\":1708,\"height\":919,\"caption\":\"Couchbase Semantic Cache\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/faster-llm-apps-semantic-cache-langchain-couchbase\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.couchbase.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Build Faster and Cheaper LLM Apps With Couchbase and LangChain\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"name\":\"The Couchbase 
Build Faster and Cheaper LLM Apps With Couchbase and LangChain
By Tim Rottach, Director of Product Line Marketing at Couchbase
Published: August 1, 2024 · Last updated: June 14, 2025 · Estimated reading time: 6 minutes
Topics: Application Design, Artificial Intelligence (AI), Couchbase Capella, Couchbase Server, Generative AI (GenAI), High Performance, Vector Search