About Core-AI Capabilities

The Agent Platform offers a suite of AI capabilities that help enterprises build, optimize, evaluate, and safeguard AI systems. These capabilities form the foundation of a robust, scalable, and responsible AI workflow. The four core components (Models Studio, Prompt Studio, Evaluation Studio, and Guardrails) work together to streamline model customization, improve prompt performance, ensure model quality, and uphold compliance and safety standards.

Models Studio

Models Studio enables enterprises to tailor foundation language models to their domain-specific needs. With built-in tools for fine-tuning, external model integration, and deployment management, users can enhance base models with proprietary datasets or bring in commercial and open-source models to diversify and strengthen their AI capabilities; a minimal sketch of this flow follows the list below. Learn more.

  • Fine-tune models with your enterprise data.
  • Import 30+ open-source models or connect to providers like OpenAI, Anthropic, Cohere, and Google.
  • Quickly deploy models with custom parameters and infrastructure settings.
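For illustration, the fine-tune, wait, deploy sequence described above might look like the following minimal Python sketch. The client object, method names, and parameters here are hypothetical placeholders, not the platform's actual SDK; consult the Models Studio documentation for the real interface.

```python
# Minimal sketch of a fine-tune-and-deploy flow. The client, method names,
# and parameters below are hypothetical, not the platform's actual SDK.
from dataclasses import dataclass

@dataclass
class DeployConfig:
    replicas: int = 1          # number of serving instances
    max_tokens: int = 1024     # generation cap applied at deployment

def fine_tune_and_deploy(client, base_model: str, dataset_path: str) -> str:
    """Fine-tune a base model on enterprise data, then deploy it."""
    job = client.fine_tune(model=base_model, training_data=dataset_path)
    job.wait()                                  # block until training finishes
    endpoint = client.deploy(job.model_id, config=DeployConfig(replicas=2))
    return endpoint.url                        # serving URL for the tuned model
```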

Prompt Studio

Prompt Studio is a workspace for developing, experimenting with, and optimizing prompts. It supports multi-model testing, template-driven design, and version control, helping teams find the best-performing prompt configurations through iteration. Whether testing open-source or fine-tuned models, Prompt Studio accelerates prompt refinement and deployment; the template-with-variables pattern is sketched after the list below. Learn more.

  • Compare prompts across different models in real time.
  • Use 65+ prebuilt templates or build from scratch.
  • Manage versions, incorporate variables, and export results for collaboration.
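The template-with-variables pattern can be pictured roughly as below. The render and compare helpers and the model handles are illustrative assumptions, not Prompt Studio's real interface; they only show the idea of filling variables and collecting outputs from several models side by side.

```python
# Illustrative only: a template-with-variables pattern like the one
# Prompt Studio supports. The helpers and model handles are assumptions.
TEMPLATE = "Summarize the following support ticket in {tone} tone:\n{ticket}"

def render(template: str, **variables) -> str:
    """Fill template variables before sending the prompt to a model."""
    return template.format(**variables)

def compare(models, prompt: str) -> dict:
    """Run one prompt against several models and collect outputs by name."""
    return {m.name: m.generate(prompt) for m in models}

prompt = render(TEMPLATE, tone="formal", ticket="App crashes on login.")
# results = compare([gpt4, claude, tuned_model], prompt)  # hypothetical handles
```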

Evaluation Studio

Evaluation Studio provides a structured environment for analyzing and benchmarking LLM performance with diverse datasets and scoring methods. Users can apply built-in evaluators or define custom evaluators that assess outputs for coherence, factual accuracy, safety, and more; a toy custom evaluator is sketched below the list. Results are visualized and tracked across evaluation sessions for continuous improvement. Learn more.

  • Evaluate models using prebuilt or custom evaluators (e.g., coherence, toxicity).
  • Import datasets from production or offline sources.
  • Collaborate on evaluation projects and track longitudinal performance.
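A custom evaluator is essentially a function that scores a model output. The toy coherence scorer below sketches the idea; its signature and scoring heuristic are assumptions for illustration, not the platform's evaluator registration API.

```python
# Sketch of a custom evaluator in the spirit of Evaluation Studio's
# user-defined scorers. Signature and heuristic are assumptions.
def coherence_evaluator(prompt: str, output: str) -> float:
    """Toy coherence score in [0, 1]: penalize empty or very short outputs
    and reward outputs that reuse key terms from the prompt."""
    if not output.strip():
        return 0.0
    prompt_terms = set(prompt.lower().split())
    output_terms = set(output.lower().split())
    overlap = len(prompt_terms & output_terms) / max(len(prompt_terms), 1)
    length_ok = min(len(output.split()) / 20, 1.0)  # saturate at ~20 words
    return round(0.5 * overlap + 0.5 * length_ok, 3)

print(coherence_evaluator(
    "Explain prompt injection.",
    "Prompt injection tricks a model into ignoring its instructions."))
```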

Guardrails

Guardrails are safeguards that ensure AI-generated responses from large language models (LLMs) are appropriate, safe, and aligned with organizational or regulatory standards. They enforce safety, privacy, and contextual relevance by scanning both inputs and outputs with a configurable set of scanners that detect issues such as prompt injection, offensive language, and bias; the scanner pattern is sketched after the list below. Guardrails play a critical role in maintaining ethical compliance and trust in AI-driven applications. Learn more.

  • Built-in scanners for toxicity, prompt injection, bias detection, and PII anonymization.
  • Define acceptable topic boundaries using ban-topic rules and regex patterns.
  • Monitor and mitigate risk in real time during AI interactions.
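Conceptually, a scanner pipeline checks text against configured rules. The sketch below shows ban-topic matching and regex-based PII detection in that spirit; the scanner names, patterns, and config shape are assumptions, not the product's API.

```python
# Hypothetical guardrail pipeline illustrating the scanner pattern:
# each rule inspects input or output text and flags issues.
import re

BANNED_TOPICS = ["competitor pricing", "medical diagnosis"]
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(text: str) -> list[str]:
    """Return a list of violations found in the text; empty means pass."""
    issues = []
    lowered = text.lower()
    issues += [f"banned topic: {t}" for t in BANNED_TOPICS if t in lowered]
    issues += [f"pii ({name})" for name, pat in PII_PATTERNS.items()
               if pat.search(text)]
    return issues

print(scan("Contact me at jane@example.com about competitor pricing."))
# -> ['banned topic: competitor pricing', 'pii (email)']
```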

Together, these capabilities streamline the development, testing, deployment, and monitoring of intelligent, responsible AI solutions.