# Models & Playground
Seclai exposes AI models in two different ways:
- A public models catalog for browsing supported LLMs, embedding models, and rerankers.
- An account-level models page where authenticated users can launch an LLM playground and test models directly.
This page explains how those two experiences fit together, plus what is currently available through the REST API and MCP server.
## UI Flow
### Public catalog
Visit `/models` to browse the full model catalog without signing in.
- LLMs show provider, context window, output limits, and capability badges.
- Embedding models show dimensions, credits, and language support.
- Rerankers show provider, credits, and default status.
For LLMs, each model detail page provides a Try This action.
- If you are signed in, Try This forwards you to your account-level playground with the selected model prefilled.
- If you are signed out, Seclai redirects you to login first and then returns you to the account resolver flow.
### Account models page
Visit `/app/{account_id}/models` after signing in.
This page is the authenticated version of the model catalog. It keeps the catalog visible but adds LLM playground actions:
- Model Playground is a dedicated launcher page under the Models section.
- Starting from public model details via Try This pre-fills the selected model and prompt.
## Playground Editor Modes
### Simple Editor
Use the Simple editor when you want the familiar prompt-call flow:
- A user prompt template
- An optional system template
- Manual substitution inputs
- A quick response preview from the selected model
This is the fastest way to try a model for normal conversational or instruction-following prompts.
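The Simple editor flow above can be sketched as a template plus manual substitutions. This is a minimal illustration of the concept, not the actual Seclai schema; the placeholder names and field values are invented for the example.

```python
# Minimal sketch of the Simple editor's prompt-call flow: a user prompt
# template plus manual substitution inputs. The {tone} and {ticket_text}
# placeholders are illustrative, not part of any documented Seclai schema.

def render_template(template: str, inputs: dict[str, str]) -> str:
    """Substitute {placeholder} tokens with the supplied input values."""
    return template.format(**inputs)

user_template = "Summarize the following ticket in {tone} tone:\n{ticket_text}"
inputs = {"tone": "neutral", "ticket_text": "Login fails after password reset."}

# The playground sends the rendered prompt to the selected model and
# shows a quick response preview.
prompt = render_template(user_template, inputs)
print(prompt)
```

An optional system template works the same way: render it from its own substitutions and send it alongside the user prompt.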
### Advanced JSON Editor
Use the Advanced JSON editor when you need more control over the payload.
This mode is useful when:
- You want to send structured message arrays
- You need model-specific JSON fields
- You want to validate a more advanced prompt-call configuration before using it in an agent
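As a rough sketch, an advanced payload of the kind described above might combine a structured message array with model-specific tuning fields. The key names and model id below are hypothetical placeholders, not the editor's documented schema.

```python
import json

# Hedged example of an Advanced JSON editor payload: a structured message
# array plus model-specific fields. All key names and the model id are
# illustrative; validate against the editor before reusing in an agent.
payload = {
    "model": "example-llm-model-id",  # placeholder model identifier
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "List three uses of embeddings."},
    ],
    "temperature": 0.2,   # example of a model-specific tuning field
    "max_tokens": 256,
}

print(json.dumps(payload, indent=2))
```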
## Evaluation Depth
When you use the AI Evaluator in the playground, Evaluation Depth controls how much analysis the evaluator performs.
| Depth | What it does | Best for |
|---|---|---|
| Simple | Fast pass/fail style scoring and quick ranking | Rapid comparisons and low-cost checks |
| Standard | Multi-criteria scoring with clearer tradeoffs | Everyday side-by-side model testing |
| Complex | Most rigorous analysis with deeper reasoning | High-stakes prompt quality reviews |
In general, start with Standard, move to Simple for quick iterations, and use Complex when precision matters more than speed.
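The guidance above can be expressed as a simple decision rule. The depth identifiers below are illustrative labels for the three table rows, not a documented API field.

```python
# Illustrative decision rule for the three Evaluation Depth levels.
# The string values are hypothetical identifiers, not a Seclai API field.

def pick_depth(need_speed: bool, high_stakes: bool) -> str:
    """Start with Standard; drop to Simple for quick iterations;
    use Complex when precision matters more than speed."""
    if high_stakes:
        return "complex"
    if need_speed:
        return "simple"
    return "standard"
```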
## API Access
There are two separate API concepts related to models:
### Model metadata APIs
Seclai already exposes REST endpoints for model metadata and lifecycle information, such as model recommendations and model lifecycle alerts. See Model Lifecycle for those endpoints.
### Prompt-call testing
The UI playground uses an authenticated prompt-call test flow under the hood. This is useful for testing a model with prompt templates and substitutions before saving the configuration into an agent.
For API-first workflows, the practical guidance is:
- Use the normal REST API or SDKs to create/update agent prompt-call steps for durable configurations.
- Use the authenticated prompt-call testing flow when you specifically want ad hoc evaluation behavior like the UI playground.
If you are integrating programmatically and need stable automation, the agent-definition APIs are a better long-term entry point than the UI playground.
## MCP Access
The MCP server currently exposes model lifecycle tooling, not a standalone model playground tool.
Today, MCP supports model-related workflows such as:
- Listing model lifecycle alerts
- Fetching replacement recommendations for deprecated or sunset models
See MCP Server and Model Lifecycle for the current tool set.
If you are using MCP and want to test prompt behavior, the recommended approach today is still to work through agent configuration or the UI playground rather than a dedicated MCP playground command.
## Recommended Workflows
### I just want to explore what models exist
Use the public catalog at `/models`.
### I want to try prompts against a model before editing an agent
Use the account models page at `/app/{account_id}/models` and launch the playground.
### I want to make a durable prompt-call configuration
Use the agent editor or the agent-definition API/SDKs.
### I want automated model lifecycle management
Use the REST API or MCP tools documented in Model Lifecycle.
## Next Steps
- Embedding Models — Learn about embedding and reranker model choices
- Model Lifecycle — Recommendations, deprecations, and lifecycle alerts
- API Introduction — Authenticate and integrate through REST
- MCP Server — Use Seclai from MCP-compatible tools