{"id":16826,"date":"2025-02-03T17:35:16","date_gmt":"2025-02-04T01:35:16","guid":{"rendered":"https:\/\/www.couchbase.com\/blog\/?p=16826"},"modified":"2025-06-13T16:36:21","modified_gmt":"2025-06-13T23:36:21","slug":"capella-model-service-ai","status":"publish","type":"post","link":"https:\/\/www.couchbase.com\/blog\/es\/capella-model-service-ai\/","title":{"rendered":"Capella Model Service: Secure, Scalable, and OpenAI-Compatible"},"content":{"rendered":"<p>Couchbase Capella has launched a <b>Private Preview of AI Services<\/b>! Check out <a href=\"https:\/\/www.couchbase.com\/blog\/es\/ai-services-expedite-agent-development\/\">this blog<\/a> for an overview of how these services simplify the process of building cloud-native, scalable AI applications and agents.<\/p>\n<p>In this blog we\u2019ll explore the <b>Model Service<\/b> - a Capella capability that lets you deploy private language models and embedding models securely and at scale. The service runs inference close to your data to improve both performance and compliance.<\/p>\n<h2>Why use the Capella Model Service?<\/h2>\n<p>Many enterprises face security and compliance hurdles when building AI agents. Because of regulations such as <b>GDPR and personal data protection laws<\/b>, they often cannot use public language models or store data outside their internal network. This limits their ability to explore AI-driven solutions.<\/p>\n<p>The <b>Capella Model Service<\/b> addresses this by abstracting away the operational complexity of deploying a private language model inside the same internal network as the customer\u2019s cluster.<\/p>\n<p>This ensures:<\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\">Data used for inference <b>never leaves the virtual network boundary of the operational cluster<\/b><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Low-latency inference<\/b> thanks to minimal network overhead<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Compliance<\/b> with corporate data security policies<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>Key features of the Capella Model Service<\/h2>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Secure model deployment<\/b> - Run models in a secure, isolated environment<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>OpenAI-compatible APIs and SDKs<\/b> - Easily invoke Capella-hosted models with OpenAI-compatible libraries and frameworks such as LangChain<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Performance enhancements<\/b> - Includes value-added caching and batching for greater efficiency<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Moderation tooling<\/b> - Provides content moderation and keyword filtering<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<hr \/>\n<h2>Getting started: deploying and using a model in Capella<\/h2>\n<p>Let\u2019s walk through a short tutorial on deploying a model in Capella and using it for basic AI tasks.<\/p>\n<h3>What you\u2019ll learn:<\/h3>\n<ol>\n<li style=\"list-style-type: 
none;\">\n<ol>\n<li style=\"font-weight: 300;\" aria-level=\"1\">Deploying a language model in Capella<\/li>\n<li style=\"font-weight: 300;\" aria-level=\"1\">Using the model for <b>chat completions<\/b><\/li>\n<li style=\"font-weight: 300;\" aria-level=\"1\">Exploring the <b>value-added features<\/b><\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<hr \/>\n<h3>Prerequisites<\/h3>\n<p>Before you begin, make sure you have:<\/p>\n<ul>\n<li style=\"font-weight: 300;\" aria-level=\"1\">Signed up for the Private Preview and enabled AI services for your organization. <a href=\"https:\/\/info.couchbase.com\/capella-ai-services-signup?_gl=1*s7h2vl*_gcl_au*MTc2ODAwNzM2NC4xNzM4MjkyNjY3\">Sign up here<\/a><\/li>\n<li style=\"font-weight: 300;\" aria-level=\"1\"><b>Organization Owner role<\/b> permissions to manage language models<\/li>\n<li style=\"font-weight: 300;\" aria-level=\"1\"><b>A multi-AZ operational cluster<\/b> (recommended for better performance)<\/li>\n<li style=\"font-weight: 300;\" aria-level=\"1\"><b>Sample buckets<\/b> to take advantage of value-added features such as caching and batching<\/li>\n<\/ul>\n<hr \/>\n<h3>Step 1: Deploy the language model<\/h3>\n<p><i>Learning goal: deploy a private language model in Capella and configure its key settings.<\/i><\/p>\n<p>Go to <b>AI Services<\/b> on the Capella home page and click <b>Model Service<\/b> to proceed.<\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/2-scaled.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16827\" style=\"border: 1px black solid;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/2-1024x501.jpg\" 
alt=\"Selecting the AI model service for Capella \" width=\"900\" height=\"440\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/2-1024x501.jpg 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/2-300x147.jpg 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/2-768x376.jpg 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/2-1536x752.jpg 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/2-2048x1002.jpg 2048w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/2-1320x646.jpg 1320w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<h4>Select the model configuration<\/h4>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\">Choose an <b>operational cluster<\/b> for your model<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\">Set the <b>compute size<\/b> and select a <b>foundation model<\/b><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/3-scaled.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16828\" style=\"border: 1px black solid;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/3-1024x700.jpg\" alt=\"Compute sizing and select an AI foundation model\" width=\"900\" height=\"615\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/3-1024x700.jpg 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/3-300x205.jpg 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/3-768x525.jpg 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/3-1536x1049.jpg 1536w, 
https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/3-2048x1399.jpg 2048w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/3-235x160.jpg 235w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/3-1320x902.jpg 1320w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<p>Scrolling down, you\u2019ll see an option to select from the pool of <b>value-added services<\/b> that Capella offers.<\/p>\n<p>Let\u2019s look at what each section means.<\/p>\n<h4>Caching<\/h4>\n<p>Caching lets you store and retrieve LLM responses efficiently, reducing costs and improving response times. You can choose between conversational, standard, and semantic caching.<\/p>\n<p>Caching cuts costs and speeds up retrieval by reducing calls to the LLM. You can also use it to persist conversations within a chatbot session, providing context for better conversational experiences.<\/p>\n<h4>Select cache storage and caching strategy<\/h4>\n<p>In the Bucket, Scope, and Collection fields, select a designated bucket in your cluster where inference responses will be cached for fast retrieval.<\/p>\n<p>Next, select the caching strategy: \u201cConversational\u201d, \u201cStandard\u201d, or \u201cSemantic\u201d.<\/p>\n<p>Note that for semantic caching the Model Service <b>leverages an embedding model<\/b> - it is useful to create an embedding model up front on the same cluster, or you can create one on the fly from this screen.<\/p>\n<p>Here, I selected a pre-deployed embedding model for the semantic cache.<\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/4.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16829\" style=\"border: 1px black solid;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/4-1024x535.jpg\" alt=\"AI caching for embedding models\" width=\"900\" height=\"470\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/4-1024x535.jpg 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/4-300x157.jpg 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/4-768x401.jpg 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/4-1536x802.jpg 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/4-1320x689.jpg 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/4.jpg 1724w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<h4>Guardrails<\/h4>\n<p>Guardrails provide content moderation for both user prompts and model responses, leveraging the Llama-3 Guard model. 
There is a customizable moderation template that can be adapted to the needs of different AI applications.<\/p>\n<p>For now, we\u2019ll keep the default settings and move on.<\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/5.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16830\" style=\"border: 1px black solid;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/5-1024x313.png\" alt=\"Setting AI model guardrails\" width=\"900\" height=\"275\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/5-1024x313.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/5-300x92.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/5-768x235.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/5-1536x470.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/5-2048x626.png 2048w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/5-1320x404.png 1320w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<h4>Keyword filtering<\/h4>\n<p>Keyword filtering lets you specify up to ten keywords to be removed from prompts and responses. For example, filtering terms such as \u201cclassified\u201d or \u201cconfidential\u201d can keep sensitive information out of responses.<\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/6.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16831\" style=\"border: 1px black solid;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/6-1024x154.png\" alt=\"Setting Keyword filtering for AI model usage\" width=\"900\" height=\"135\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/6-1024x154.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/6-300x45.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/6-768x116.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/6-1536x232.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/6-2048x309.png 2048w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/6-1320x199.png 1320w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<h4>Batching<\/h4>\n<p>Batching lets you handle requests more efficiently by processing multiple API requests asynchronously.<\/p>\n<p>Under Bucket, Scope, and Collection, select the bucket in your operational cluster where batching metadata will be stored.<\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/7.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16832\" style=\"border: 1px black solid;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/7-1024x201.png\" alt=\"Batching API requests for AI model access\" width=\"900\" height=\"177\" 
srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/7-1024x201.png 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/7-300x59.png 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/7-768x151.png 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/7-1536x302.png 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/7-2048x402.png 2048w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/7-1320x259.png 1320w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<h4>Deploy the model<\/h4>\n<p>Click the Deploy Model button to spin up the required GPU-based compute. Deployment can take 15-20 minutes. Once ready, deployed models can be tracked on the Model List page of the AI product hub.<\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image6.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16833\" style=\"border: 1px black solid;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image6-1024x272.jpg\" alt=\"List of AI models configured in AI Services\" width=\"900\" height=\"239\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image6-1024x272.jpg 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image6-300x80.jpg 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image6-768x204.jpg 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image6-1536x408.jpg 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image6-1320x351.jpg 1320w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/image6.jpg 1600w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<hr \/>\n<h3>Step 2: Use the model endpoint<\/h3>\n<p><i>Learning goal: understand how to access the model securely and send inference requests.<\/i><\/p>\n<p>Now let\u2019s look at how to consume the model for inference and how to take advantage of the value-added services.<\/p>\n<h4>Grant access to the model<\/h4>\n<p>To allow access, add your IP address to the allowed list and create database credentials to authenticate model inference requests.<\/p>\n<p>Go to the cluster, click the <b>Connect<\/b> button, add your IP to the list of allowed IPs, and create new database credentials for the cluster. We will use these credentials to authenticate model inference requests.<\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/9-scaled.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16834\" style=\"border: 1px black solid;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/9-1024x591.jpg\" alt=\"SDKs to connect to Capella AI services\" width=\"900\" height=\"519\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/9-1024x591.jpg 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/9-300x173.jpg 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/9-768x443.jpg 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/9-1536x886.jpg 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/9-2048x1181.jpg 2048w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/9-1320x761.jpg 1320w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<h4>Model endpoint URL<\/h4>\n<p>On the Model List page, locate the model URL. For example, a URL might look like this:\u00a0<a href=\"https:\/\/ai123.apps.cloud.couchbase.com\/\">https:\/\/ai123.apps.cloud.couchbase.com<\/a>.<\/p>\n<p><a href=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/8-scaled.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-16835\" style=\"border: 1px black solid;\" src=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/8-1024x348.jpg\" alt=\"AI model listing with status and model URL\" width=\"900\" height=\"306\" srcset=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/8-1024x348.jpg 1024w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/8-300x102.jpg 300w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/8-768x261.jpg 768w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/8-1536x523.jpg 1536w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/8-2048x697.jpg 2048w, https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/8-1320x449.jpg 1320w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<h4>Run a chat completion<\/h4>\n<p>To use the OpenAI-compatible API, you can send a chat request using curl:<\/p>\n<pre class=\"nums:false lang:default decode:true\">curl --request POST \\\r\n--url https:\/\/ai123.apps.cloud.couchbase.com\/v1\/chat\/completions \\\r\n--header 'Authorization: Basic change-me' \\\r\n--header 'Content-Type: application\/json' \\\r\n--data '{\r\n  \"model\": \"meta-llama\/Llama-3.1-8B-Instruct\",\r\n  \"reasoning_effort\": \"high\",\r\n  \"messages\": [\r\n    {\"role\": \"system\", \"content\": \"You are a helpful travel assistant\"},\r\n    {\"role\": \"user\", \"content\": \"What are some fun things to do in San Francisco?\"}\r\n 
 ],\r\n  \"stream\": false,\r\n  \"max_tokens\": 500\r\n}'<\/pre>\n<p>All of the <a href=\"https:\/\/platform.openai.com\/docs\/api-reference\/introduction\">OpenAI APIs listed here<\/a> are supported by the Capella Model Service.<\/p>\n<h4>Generate embeddings<\/h4>\n<p>To generate embeddings for text input, use the following curl command:<\/p>\n<pre class=\"nums:false lang:default decode:true\">curl --request POST \\\r\n--url https:\/\/ai123.apps.cloud.couchbase.com\/v1\/embeddings \\\r\n--header 'Authorization: Basic change-me' \\\r\n--header 'Content-Type: application\/json' \\\r\n--data '{\r\n  \"input\": \"This is my input string\",\r\n  \"model\": \"intfloat\/e5-mistral-7b-instruct\"\r\n}'<\/pre>\n<hr \/>\n<h3>Step 3: Use the value-added features<\/h3>\n<p><i>Learning goal: optimize AI performance with caching, batching, moderation, and keyword filtering.<\/i><\/p>\n<p>In this section, we\u2019ll learn how to optimize your AI application\u2019s performance and run faster inference with the built-in enhancements.<\/p>\n<p>Caching reduces redundant computation. Batching improves request efficiency. Content moderation ensures that AI-generated responses are appropriate. 
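Since the endpoint is OpenAI-compatible, every curl call in this walkthrough can also be issued from code. Here is a minimal Python sketch using only the standard library; the `ai123` endpoint URL and `change-me` credentials are the placeholder values used throughout this post, so substitute your own.

```python
# Build a chat completion request for the Capella Model Service endpoint.
# NOTE: the URL and credentials below are placeholders from this post.
import base64
import json
import urllib.request

CAPELLA_URL = "https://ai123.apps.cloud.couchbase.com/v1/chat/completions"

def build_chat_request(user, password, prompt,
                       model="meta-llama/Llama-3.1-8B-Instruct",
                       cache=None):
    """Assemble an OpenAI-style chat request with Basic auth and an
    optional X-cb-cache header ("standard", "semantic", ...)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {"Authorization": f"Basic {token}",
               "Content-Type": "application/json"}
    if cache:
        headers["X-cb-cache"] = cache  # opt into a caching strategy
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful travel assistant"},
            {"role": "user", "content": prompt},
        ],
        "stream": False,
        "max_tokens": 500,
    }
    return urllib.request.Request(CAPELLA_URL,
                                  data=json.dumps(body).encode(),
                                  headers=headers, method="POST")

req = build_chat_request("change-me", "change-me",
                         "What are some fun things to do in San Francisco?",
                         cache="standard")
# To actually send it:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request can of course be sent with the official OpenAI Python SDK or LangChain by pointing the client's base URL at the Capella endpoint.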
Keyword filtering helps restrict specific terms from appearing in the output.<\/p>\n<h5><b>Caching<\/b> - Reduce redundant computation<\/h5>\n<p><span style=\"text-decoration: underline;\">Standard cache<\/span><\/p>\n<p>Pass a header named `X-cb-cache` with the value \u201cstandard\u201d:<\/p>\n<pre class=\"nums:false lang:default decode:true\">curl --request POST \\\r\n--url https:\/\/ai123.apps.cloud.couchbase.com\/v1\/chat\/completions \\\r\n--header 'Authorization: Basic change-me' \\\r\n--header 'Content-Type: application\/json' \\\r\n--header 'User-Agent: insomnia\/10.3.0' \\\r\n--header 'X-cb-cache: standard' \\\r\n--data '{\r\n  \"model\": \"meta-llama\/Llama-3.1-8B-Instruct\",\r\n  \"reasoning_effort\": \"high\",\r\n  \"messages\": [\r\n    {\r\n      \"role\": \"system\",\r\n      \"content\": \"You are a helpful travel assistant\"\r\n    },\r\n    {\r\n      \"role\": \"user\",\r\n      \"content\": \"What are some fun things to do in San Francisco? Give me the names of 3 top tourist attractions\"\r\n    }\r\n  ],\r\n  \"stream\": false,\r\n  \"max_tokens\": 500\r\n}'<\/pre>\n<p><span style=\"text-decoration: underline;\">Response<\/span><\/p>\n<p>(Time taken &lt; 500 ms)<\/p>\n<pre class=\"nums:false wrap:true lang:js decode:true\">date: Wed, 29 Jan 2025 23:50:37 GMT\r\ncontent-type: application\/json\r\ncontent-length: 1316\r\nx-cache: HIT\r\n\r\n{\r\n  \"choices\": [\r\n    {\r\n      \"finish_reason\": \"stop\",\r\n      \"index\": 0,\r\n      \"logprobs\": null,\r\n      \"message\": {\r\n         \"content\": \"San Francisco is a fantastic destination with plenty of exciting activities to enjoy. Here are 3 top tourist attractions to consider:\\n\\n1. **The Golden Gate Bridge**: An iconic symbol of San Francisco, the Golden Gate Bridge is a must-visit attraction. 
You can walk or bike across the bridge for spectacular views of the San Francisco Bay and the city skyline.\\n\\n2. **Alcatraz Island**: Take a ferry to Alcatraz Island, the former maximum-security prison that once held notorious inmates like Al Capone. Learn about the island's rich history and take in the stunning views of the city and the Bay.\\n\\n3. **Fisherman's Wharf**: This bustling waterfront district offers a taste of San Francisco's seafood culture, street performers, and stunning views of the Bay. You can also catch a cruise to the Golden Gate Bridge or take a stroll along the Pier 39 sea lions.\\n\\nThese are just a few of the many amazing experiences San Francisco has to offer. Let me know if you'd like more recommendations!\",\r\n         \"role\": \"assistant\"\r\n      }\r\n    }\r\n  ],\r\n  \"created\": 1738193254,\r\n  \"id\": \"\",\r\n  \"model\": \"meta-llama\/Llama-3.1-8B-Instruct\",\r\n  \"object\": \"chat.completion\",\r\n  \"system_fingerprint\": \"3.0.0-sha-8f326c9\",\r\n  \"usage\": {\r\n    \"completion_tokens\": 206,\r\n    \"prompt_tokens\": 62,\r\n    \"total_tokens\": 268\r\n  }\r\n}<\/pre>\n<p><span style=\"text-decoration: underline;\">Semantic cache<\/span><\/p>\n<p>To see how the semantic cache works, we can send the model a series of inference requests asking about the same entity - for example, \"San Francisco\". 
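Conceptually, a semantic cache can be pictured with a short sketch: embed each prompt, compare incoming prompts to stored embeddings by cosine similarity, and serve a cached response when the similarity clears a threshold. The toy embedding function, threshold, and in-memory store below are illustrative stand-ins only, not Capella's implementation (which persists embeddings in the caching bucket configured earlier):

```python
# Illustrative sketch of semantic caching (NOT Capella's implementation):
# cache LLM responses keyed by prompt embeddings, and serve a cached answer
# when a new prompt is semantically close enough to a stored one.
import math

def cosine(a, b):
    """Cosine similarity of two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class SemanticCache:
    def __init__(self, embed, threshold=0.95):
        self.embed = embed          # embedding function (e.g., an e5-mistral call)
        self.threshold = threshold  # minimum similarity to count as a hit
        self.entries = []           # list of (embedding, response) pairs

    def lookup(self, prompt):
        v = self.embed(prompt)
        best = max(self.entries, key=lambda e: cosine(e[0], v), default=None)
        if best and cosine(best[0], v) >= self.threshold:
            return best[1]  # cache HIT: the LLM call is skipped
        return None         # cache MISS: call the model, then store()

    def store(self, prompt, response):
        self.entries.append((self.embed(prompt), response))

# Toy embeddings for demonstration only
toy = {"sf fun": [1.0, 0.1], "sf tourist spots": [0.95, 0.2], "tokyo": [0.0, 1.0]}
cache = SemanticCache(lambda p: toy[p], threshold=0.95)
cache.store("sf fun", "Golden Gate Bridge, Alcatraz, Fisherman's Wharf")
```

With these toy vectors, `lookup("sf tourist spots")` hits the cached answer while `lookup("tokyo")` misses, mirroring how a paraphrased San Francisco question below is served from the cache.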
This series of requests creates embeddings of the input in the caching bucket and uses them to serve the best-matching cached result when the relevance score is high.<\/p>\n<p>We slightly modify the input request from the previous example to read:<\/p>\n<p><i>\u201cCan you suggest three must-visit tourist spots in San Francisco for a fun experience?\u201d<\/i><\/p>\n<p>For this similar question the response is served from the cache (note the x-cache: HIT header below), showing that the Model Service leverages semantic search for caching.<\/p>\n<pre class=\"nums:false lang:default decode:true\">curl --request POST \\\r\n--url https:\/\/ai123.apps.cloud.couchbase.com\/v1\/chat\/completions \\\r\n--header 'Authorization: Basic change-me' \\\r\n--header 'Content-Type: application\/json' \\\r\n--header 'User-Agent: insomnia\/10.3.0' \\\r\n--header 'X-cb-cache: semantic' \\\r\n--data '{\r\n  \"model\": \"meta-llama\/Llama-3.1-8B-Instruct\",\r\n  \"reasoning_effort\": \"high\",\r\n  \"messages\": [\r\n    {\r\n      \"role\": \"system\",\r\n      \"content\": \"You are a helpful travel assistant\"\r\n    },\r\n    {\r\n      \"role\": \"user\",\r\n      \"content\": \"Can you suggest three must-visit tourist spots in San Francisco for a fun experience?\"\r\n    }\r\n  ],\r\n  \"stream\": false,\r\n  \"max_tokens\": 500\r\n}'<\/pre>\n<p><span style=\"text-decoration: underline;\">Response<\/span><\/p>\n<pre class=\"nums:false wrap:true lang:default decode:true\">date: Wed, 29 Jan 2025 23:55:05 GMT\r\ncontent-type: application\/json\r\ncontent-length: 1575\r\nx-cache: HIT\r\n\r\n{\r\n  \"choices\": [\r\n    {\r\n      \"finish_reason\": \"stop\",\r\n      \"index\": 0,\r\n      \"logprobs\": null,\r\n      \"message\": {\r\n        \"content\": \"Sure! 
Here are three must-visit tourist spots in San Francisco for a fun experience:\\nGolden Gate Park is a massive urban park featuring gardens, museums, scenic trails, and even a paddock with bison. Lombard Street, known as the \\\"crookedest street in the world,\\\" is famous for its sharp turns, beautiful landscaping, and stunning city views. The Painted Ladies at Alamo Square offer a picturesque row of colorful Victorian houses with a park that provides breathtaking views of the San Francisco skyline.\",\r\n        \"role\": \"assistant\"\r\n      }\r\n    }\r\n  ],\r\n  \"created\": 1738194872,\r\n  \"id\": \"\",\r\n  \"model\": \"meta-llama\/Llama-3.1-8B-Instruct\",\r\n  \"object\": \"chat.completion\",\r\n  \"system_fingerprint\": \"3.0.0-sha-8f326c9\",\r\n  \"usage\": {\r\n    \"completion_tokens\": 282,\r\n    \"prompt_tokens\": 61,\r\n    \"total_tokens\": 343\r\n  }\r\n}<\/pre>\n<h5><b>Batching<\/b> - Improve throughput for multiple requests<\/h5>\n<p>If you\u2019re working on an application that queries the Capella Model Service API frequently, batching is an effective way to speed up responses and optimize API usage.<\/p>\n<p>You can batch multiple requests using the same OpenAI batch APIs - <a href=\"https:\/\/platform.openai.com\/docs\/api-reference\/batch\">https:\/\/platform.openai.com\/docs\/api-reference\/batch<\/a> - to run many inferences at once.<\/p>\n<p>Here is an example curl flow:<\/p>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\">Prepare a sample batch file - <em>batch_requests.jsonl<\/em> - and upload it <a href=\"https:\/\/platform.openai.com\/docs\/api-reference\/files\/create\">using the \/v1\/files API<\/a>\n<pre class=\"nums:false lang:js decode:true\">{\"messages\": [{\"role\": \"user\", \"content\": \"What is the capital of Japan?\"}], \"model\": \"meta-llama\/Llama-3.1-8B-Instruct\", \"max_tokens\": 50}\r\n{\"messages\": [{\"role\": \"user\", \"content\": \"Who painted the Mona Lisa?\"}], \"model\": \"meta-llama\/Llama-3.1-8B-Instruct\", \"max_tokens\": 50}\r\n{\"messages\": [{\"role\": \"user\", \"content\": \"Give me a motivational quote.\"}], \"model\": \"meta-llama\/Llama-3.1-8B-Instruct\", \"max_tokens\": 50}<\/pre>\n<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\">Create the batch using the <em>\/v1\/batches<\/em> API\n<pre class=\"nums:false lang:js decode:true\">curl https:\/\/ai123.apps.cloud.couchbase.com\/v1\/batches \\\r\n-H \"Authorization: Basic change-me\" \\\r\n-H \"Content-Type: application\/json\" \\\r\n-d '{\r\n  \"input_file_id\": \"BATCH_FILE_ID\",\r\n  \"endpoint\": \"\/v1\/chat\/completions\",\r\n  \"completion_window\": \"24h\"\r\n}'<\/pre>\n<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\">Retrieve the batch to track its status\n<pre class=\"nums:false lang:js decode:true\">curl https:\/\/ai123.apps.cloud.couchbase.com\/v1\/batches\/BATCH_ID \\\r\n-H \"Authorization: Basic change-me\"\r\n<\/pre>\n<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<h5><b>Content moderation<\/b> - Filter sensitive content<\/h5>\n<pre class=\"nums:false lang:default decode:true\">curl --request POST \\\r\n--url https:\/\/ai123.apps.cloud.couchbase.com\/v1\/chat\/completions \\\r\n--header 'Authorization: Basic change-me' \\\r\n--header 'Content-Type: application\/json' \\\r\n--header 'User-Agent: insomnia\/10.3.0' \\\r\n--data '{\r\n  \"model\": \"meta-llama\/Llama-3.1-8B-Instruct\",\r\n  \"messages\": [\r\n    {\r\n      \"role\": \"user\",\r\n      \"content\": \"Tell me how to commit tax fraud and avoid detection.\"\r\n    }\r\n  ],\r\n  \"stream\": false\r\n}'<\/pre>\n<p><span style=\"text-decoration: underline;\">Response<\/span><\/p>\n<pre class=\"nums:false lang:default decode:true\">{\r\n  \"error\": {\r\n  \"message\": \"Error processing user prompt due to guardrail violation\",\r\n  \"type\": 
\"guardrail_violation_error\",\r\n  \"param\": {\r\n    \"categories\": \"s8: intellectual property.\"\r\n  },\r\n  \"code\": \"guardrail_violation_error\"\r\n  }\r\n}<\/pre>\n<p>&nbsp;<\/p>\n<h5><b>Filtrado de palabras clave<\/b> - Restringir palabras o frases concretas<\/h5>\n<pre class=\"nums:false lang:default decode:true\">curl --request POST \\\r\n--url https:\/\/ai123.apps.cloud.couchbase.com\/v1\/chat\/completions \\\r\n--header 'Authorization: Basic change-me' \\\r\n--header 'Content-Type: application\/json' \\\r\n--header 'User-Agent: insomnia\/10.3.0' \\\r\n--data '{\r\n  \"model\": \"meta-llama\/Llama-3.1-8B-Instruct\",\r\n  \"messages\": [\r\n    {\r\n      \"role\": \"user\",\r\n      \"content\": \"Tell me the unreleased product roadmap for Apple\u2019s next iPhone.\"\r\n    }\r\n  ],\r\n  \"stream\": false\r\n}'<\/pre>\n<p><span style=\"text-decoration: underline;\">Respuesta<\/span><\/p>\n<pre class=\"nums:false lang:default decode:true\">{\r\n  \"error\": {\r\n    \"message\": \"Error processing user prompt due to guardrail violation\",\r\n    \"type\": \"guardrail_violation_error\",\r\n    \"param\": {\r\n      \"categories\": \"s8: intellectual property.\"\r\n    },\r\n  \"code\": \"guardrail_violation_error\"\r\n  }\r\n}<\/pre>\n<hr \/>\n<h2>Reflexiones finales<\/h2>\n<p>El Servicio de Modelos de Capella ya est\u00e1 disponible para la Vista Previa Privada. 
Sign up to try it out with free credits, and share your feedback to help shape its future development.<\/p>\n<p>Stay tuned for upcoming blogs exploring how to maximize AI capabilities by leveraging the proximity of your data to deployed language models and Capella's broader AI services.<\/p>\n<p><a href=\"https:\/\/info.couchbase.com\/capella-ai-services-signup?_gl=1*s7h2vl*_gcl_au*MTc2ODAwNzM2NC4xNzM4MjkyNjY3\">Sign up for the Private Preview here.<\/a><\/p>\n<h2>References<\/h2>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"font-weight: 300;\" aria-level=\"1\"><a href=\"https:\/\/www.couchbase.com\/blog\/es\/press-releases\/couchbase-introduces-capella-ai-services-to-streamline-development-of-agentic-ai-applications\/\">Read the press release<\/a><\/li>\n<li style=\"font-weight: 300;\" aria-level=\"1\">Check out <a href=\"https:\/\/www.couchbase.com\/blog\/es\/products\/ai-services\/\">Capella AI Services<\/a> or sign up for the <a href=\"https:\/\/info.couchbase.com\/capella-ai-services-signup?_gl=1*s7h2vl*_gcl_au*MTc2ODAwNzM2NC4xNzM4MjkyNjY3\">Private Preview<\/a><\/li>\n<li style=\"font-weight: 300;\" aria-level=\"1\"><a href=\"https:\/\/preview2.docs-test.couchbase.com\/ai\/get-started\/intro.html\">Capella Model Service documentation<\/a> (preview customers only)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>Acknowledgements<\/h2>\n<p><i>Thanks to the Capella team (Jagadesh M, Ajay A, Aniket K, Vishnu N, Skylar K, Aditya V, Soham B, Hardik N, Bharath P, Mohsin A, Nayan K, Nimiya J, Chandrakanth N, Pramada K, Kiran M, Vishwa Y, Rahul P, Mohan V, Nithish R, Denis S. and many more...). Thanks to everyone who helped directly or indirectly! 
&lt;3<\/i><\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>Couchbase Capella has launched a Private Preview for AI services! Check out this blog for an overview of how these services simplify the process of building cloud-native, scalable AI applications and AI agents. In this blog, we\u2019ll explore the Model [&hellip;]<\/p>","protected":false},"author":85129,"featured_media":16836,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[10122,1815,2225],"tags":[10086,10087,9870,10085,1466],"ppma_author":[9931],"class_list":["post-16826","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","category-best-practices-and-tutorials","category-cloud","tag-language-models","tag-llama-models","tag-llms","tag-model-inferencing","tag-preview"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.0 (Yoast SEO v26.0) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Capella Model Service: Secure, Scalable, and OpenAI-Compatible<\/title>\n<meta name=\"description\" content=\"Capella Model Service lets you deploy secure, scalable AI models with OpenAI compatibility. Now in Private Preview!\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.couchbase.com\/blog\/es\/capella-model-service-ai\/\" \/>\n<meta property=\"og:locale\" content=\"es_MX\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Capella Model Service: Secure, Scalable, and OpenAI-Compatible\" \/>\n<meta property=\"og:description\" content=\"Capella Model Service lets you deploy secure, scalable AI models with OpenAI compatibility. 
Now in Private Preview!\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.couchbase.com\/blog\/es\/capella-model-service-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"The Couchbase Blog\" \/>\n<meta property=\"article:published_time\" content=\"2025-02-04T01:35:16+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-13T23:36:21+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-capella-ai-services-models-1024x536.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"536\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Talina Shrotriya, Software Engineering Manager\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Talina Shrotriya, Software Engineering Manager\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/\"},\"author\":{\"name\":\"Talina Shrotriya, Senior Engineering Manager\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/50c96ba341a92708507fcd493a0ecbb8\"},\"headline\":\"Capella Model Service: Secure, Scalable, and OpenAI-Compatible\",\"datePublished\":\"2025-02-04T01:35:16+00:00\",\"dateModified\":\"2025-06-13T23:36:21+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/\"},\"wordCount\":1315,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-capella-ai-services-models.png\",\"keywords\":[\"language models\",\"llama models\",\"LLMs\",\"model inferencing\",\"preview\"],\"articleSection\":[\"Artificial Intelligence (AI)\",\"Best Practices and Tutorials\",\"Couchbase Capella\"],\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/\",\"name\":\"Capella Model Service: Secure, Scalable, and 
OpenAI-Compatible\",\"isPartOf\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-capella-ai-services-models.png\",\"datePublished\":\"2025-02-04T01:35:16+00:00\",\"dateModified\":\"2025-06-13T23:36:21+00:00\",\"description\":\"Capella Model Service lets you deploy secure, scalable AI models with OpenAI compatibility. Now in Private Preview!\",\"breadcrumb\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#primaryimage\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-capella-ai-services-models.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-capella-ai-services-models.png\",\"width\":2400,\"height\":1256,\"caption\":\"AI model service in Capella AI Services\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.couchbase.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Capella Model Service: Secure, Scalable, and OpenAI-Compatible\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#website\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"name\":\"The Couchbase Blog\",\"description\":\"Couchbase, the NoSQL 
Database\",\"publisher\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#organization\",\"name\":\"The Couchbase Blog\",\"url\":\"https:\/\/www.couchbase.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png\",\"width\":218,\"height\":34,\"caption\":\"The Couchbase Blog\"},\"image\":{\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/50c96ba341a92708507fcd493a0ecbb8\",\"name\":\"Talina Shrotriya, Senior Engineering Manager\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/1a67340659be31a858a1d3e12e015b0e\",\"url\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/03\/image_2024-03-07_092247517.png\",\"contentUrl\":\"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/03\/image_2024-03-07_092247517.png\",\"caption\":\"Talina Shrotriya, Senior Engineering Manager\"},\"url\":\"https:\/\/www.couchbase.com\/blog\/es\/author\/talinashrotriya\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Servicio Modelo Capella: Seguro, escalable y compatible con OpenAI","description":"Capella Model Service le permite desplegar modelos de IA seguros y escalables compatibles con OpenAI. \u00a1Ahora en Private Preview!","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.couchbase.com\/blog\/es\/capella-model-service-ai\/","og_locale":"es_MX","og_type":"article","og_title":"Capella Model Service: Secure, Scalable, and OpenAI-Compatible","og_description":"Capella Model Service lets you deploy secure, scalable AI models with OpenAI compatibility. Now in Private Preview!","og_url":"https:\/\/www.couchbase.com\/blog\/es\/capella-model-service-ai\/","og_site_name":"The Couchbase Blog","article_published_time":"2025-02-04T01:35:16+00:00","article_modified_time":"2025-06-13T23:36:21+00:00","og_image":[{"width":1024,"height":536,"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-capella-ai-services-models-1024x536.png","type":"image\/png"}],"author":"Talina Shrotriya, Software Engineering Manager","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Talina Shrotriya, Software Engineering Manager","Est. 
reading time":"8 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#article","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/"},"author":{"name":"Talina Shrotriya, Senior Engineering Manager","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/50c96ba341a92708507fcd493a0ecbb8"},"headline":"Capella Model Service: Secure, Scalable, and OpenAI-Compatible","datePublished":"2025-02-04T01:35:16+00:00","dateModified":"2025-06-13T23:36:21+00:00","mainEntityOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/"},"wordCount":1315,"commentCount":0,"publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-capella-ai-services-models.png","keywords":["language models","llama models","LLMs","model inferencing","preview"],"articleSection":["Artificial Intelligence (AI)","Best Practices and Tutorials","Couchbase Capella"],"inLanguage":"es","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/","url":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/","name":"Servicio Modelo Capella: Seguro, escalable y compatible con 
OpenAI","isPartOf":{"@id":"https:\/\/www.couchbase.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#primaryimage"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-capella-ai-services-models.png","datePublished":"2025-02-04T01:35:16+00:00","dateModified":"2025-06-13T23:36:21+00:00","description":"Capella Model Service le permite desplegar modelos de IA seguros y escalables compatibles con OpenAI. \u00a1Ahora en Private Preview!","breadcrumb":{"@id":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#breadcrumb"},"inLanguage":"es","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/"]}]},{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#primaryimage","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-capella-ai-services-models.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2025\/02\/blog-capella-ai-services-models.png","width":2400,"height":1256,"caption":"AI model service in Capella AI Services"},{"@type":"BreadcrumbList","@id":"https:\/\/www.couchbase.com\/blog\/capella-model-service-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.couchbase.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Capella Model Service: Secure, Scalable, and OpenAI-Compatible"}]},{"@type":"WebSite","@id":"https:\/\/www.couchbase.com\/blog\/#website","url":"https:\/\/www.couchbase.com\/blog\/","name":"El blog de Couchbase","description":"Couchbase, la base de datos 
NoSQL","publisher":{"@id":"https:\/\/www.couchbase.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.couchbase.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"es"},{"@type":"Organization","@id":"https:\/\/www.couchbase.com\/blog\/#organization","name":"El blog de Couchbase","url":"https:\/\/www.couchbase.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/2023\/04\/admin-logo.png","width":218,"height":34,"caption":"The Couchbase Blog"},"image":{"@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/50c96ba341a92708507fcd493a0ecbb8","name":"Talina Shrotriya, Directora de Ingenier\u00eda","image":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/www.couchbase.com\/blog\/#\/schema\/person\/image\/1a67340659be31a858a1d3e12e015b0e","url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/03\/image_2024-03-07_092247517.png","contentUrl":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/03\/image_2024-03-07_092247517.png","caption":"Talina Shrotriya, Senior Engineering Manager"},"url":"https:\/\/www.couchbase.com\/blog\/es\/author\/talinashrotriya\/"}]}},"authors":[{"term_id":9931,"user_id":85129,"is_guest":0,"slug":"talinashrotriya","display_name":"Talina Shrotriya, Software Engineering 
Manager","avatar_url":{"url":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/03\/image_2024-03-07_092247517.png","url2x":"https:\/\/www.couchbase.com\/blog\/wp-content\/uploads\/sites\/1\/2024\/03\/image_2024-03-07_092247517.png"},"author_category":"","last_name":"Shrotriya, Software Engineering Manager","first_name":"Talina","job_title":"","user_url":"","description":""}],"_links":{"self":[{"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/posts\/16826","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/users\/85129"}],"replies":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/comments?post=16826"}],"version-history":[{"count":0,"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/posts\/16826\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/media\/16836"}],"wp:attachment":[{"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/media?parent=16826"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/categories?post=16826"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/tags?post=16826"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.couchbase.com\/blog\/es\/wp-json\/wp\/v2\/ppma_author?post=16826"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}