Frequently Asked Questions
Find answers to common questions about Seclai. Use the search below or browse by category.
Getting Started
What is Seclai?
Seclai is a production LLM platform that handles everything around the model call — model portability, retries, evaluation, observability, RAG, memory, governance, prompt safety, and cost management. Build multi-step AI workflows, connect content sources, give agents persistent memory, and deploy with built-in safety and full observability.
Read: Introduction
How do I create an account?
Click the 'Sign Up' button on the homepage. We support email registration and social login. Once you verify your email, you can start building agents, adding sources, and creating knowledge bases.
Read: Getting Started
What can I build with Seclai?
Knowledge base assistants with RAG and chat, content monitoring pipelines that trigger on new items, multi-step LLM workflows with conditional branching and retries, personalised chatbots with persistent memory, scheduled automation (daily digests, weekly reports), and data extraction pipelines that fetch, process, and deliver results via webhooks, email, or S3.
Read: Introduction
What types of data sources can I connect?
Seclai supports websites, RSS feeds, file uploads (documents, audio, video), and custom API endpoints. Sources are polled on a configurable schedule and their content is automatically chunked, embedded, and indexed for retrieval.
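The "chunked, embedded, and indexed" step can be pictured with a minimal sketch. Seclai's actual chunking strategy and sizes are internal; this illustrative fixed-size chunker with overlap (all parameter values are assumptions) just shows the idea:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks.

    Illustrative only: Seclai's real chunker, sizes, and overlap are not
    documented here.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap  # advance by chunk_size minus the overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last chunk reached the end of the text
    return chunks
```

Each chunk would then be embedded and written to the vector index; the overlap keeps sentences that straddle a boundary retrievable from either side.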
Learn: Content Sources
Agents & Workflows
What are agents?
Agents are multi-step LLM workflows. Each agent consists of a sequence of steps — prompt calls, gates, transforms, retrieval, memory operations, webhooks, sub-agent calls, and more — that execute in order. Agents can be triggered by user input, content changes, or cron schedules.
Learn: Agents
What step types are available?
There are 24 step types across five categories: Core Actions (prompt call, evaluate, gate, transform, combinator), Content Actions (knowledge base search, web fetch, web search), Memory (add, search, load memory), Integration (webhook, sub-agent, S3 write), and Output (display result, send email). Each step supports string substitutions, metadata, caching, and conditional execution.
Learn: Agent Steps
Can agents stream responses in real time?
Yes. Any prompt call step can stream its output token-by-token via Server-Sent Events (SSE). This is used automatically by the knowledge base chat interface and is available via the API for custom integrations.
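On the client side, consuming an SSE stream means reading `data:` lines from a `text/event-stream` response. The exact event names and payload shape Seclai emits are not documented here, so this sketch simply collects `data:` payloads from a raw stream (the `[DONE]` sentinel is an assumption borrowed from common SSE conventions):

```python
def parse_sse(raw: str) -> list[str]:
    """Collect the data payloads from a raw Server-Sent Events stream.

    Illustrative only: Seclai's actual event format may differ.
    """
    tokens = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":  # skip the end-of-stream sentinel
                tokens.append(payload)
    return tokens

# A stream of token events, as it might arrive over the wire:
stream = "data: Hel\n\ndata: lo\n\ndata: [DONE]\n\n"
```

In a real integration you would read the response incrementally and append each payload to the UI as it arrives, rather than buffering the whole stream.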
Learn: Agent Streaming
How do I trigger an agent?
Three ways: on-demand (user input via the UI, API, or MCP), on a schedule (hourly, daily, weekly, monthly, or custom cron), or on content change (automatically when a knowledge base receives new content). Template-based triggers can pre-fill inputs for recurring tasks.
Learn: Agent Triggers
How do retries and fallback models work?
Prompt call steps can auto-retry on bad or malformed output. You can configure a fallback model that's used when the primary model fails. For more advanced patterns, use an evaluate step to score the output, a gate step to check the score, and loop back to re-run the prompt — creating a self-improving retry loop.
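The retry-then-fallback behaviour can be sketched as a small control loop. Everything here is hypothetical (the function names are not Seclai's API); it only illustrates the order of operations: exhaust retries on the primary model, then move to the fallback:

```python
def call_with_retry(call, validate, models, max_retries=2):
    """Try each model in order, retrying on invalid output.

    `call(model)` runs the prompt against a model and `validate(output)`
    checks the result -- both stand in for real Seclai steps and are
    illustrative names, not platform API.
    """
    for model in models:
        for _attempt in range(max_retries + 1):
            output = call(model)
            if validate(output):
                return model, output  # first valid output wins
    raise RuntimeError("all models and retries exhausted")
```

The evaluate-plus-gate pattern mentioned above is the same loop with `validate` replaced by a scored evaluation compared against a threshold.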
Learn: Core Actions
How do I debug agent runs?
Every agent run produces a detailed trace showing step-level input, output, latency, cost, and quality scores. The trace view also includes pseudo-steps for prompt scans, output scans, governance evaluations, and agent evaluations so you can see the full pipeline in one place.
Learn: Agent Traces
Knowledge Bases
What are knowledge bases?
Knowledge bases are collections of content sources that power AI agents with contextual information. They index content, generate vector embeddings, and provide intelligent retrieval. You can search them programmatically, use them as context in agent pipelines, or chat with them directly.
Learn: Knowledge Bases
What is Knowledge Base Chat?
Each knowledge base has a built-in chat interface where you can have interactive conversations with your indexed content. Messages trigger a RAG retrieval, and the response streams back in real time. You can choose between Fast, Balanced, and Thorough model tiers, manage multiple conversations, and regenerate responses.
Learn: Knowledge Bases
How do I tune retrieval quality?
Each knowledge base has configurable retrieval settings: Top N (initial vector search results), Top K (results kept after reranking), a reranker model for relevance scoring, and a score threshold to filter low-quality matches. Adjust these to balance recall and precision for your content.
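How the four knobs interact can be sketched in a few lines. This is not Seclai's implementation, just the described pipeline: take the Top N vector hits, rerank them, keep the Top K, and drop anything below the score threshold:

```python
def retrieve(vector_hits, rerank, top_n=10, top_k=3, threshold=0.5):
    """Top N vector hits -> rerank -> Top K -> score threshold.

    `rerank(doc)` stands in for the reranker model's relevance score;
    the function name and defaults are illustrative, not Seclai's API.
    """
    candidates = vector_hits[:top_n]  # initial vector search results
    scored = sorted(((rerank(doc), doc) for doc in candidates), reverse=True)
    # Keep the top_k best-ranked results that clear the threshold.
    return [doc for score, doc in scored[:top_k] if score >= threshold]
```

Raising Top N improves recall at the cost of more reranking work; raising the threshold trades recall for precision.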
Learn: Knowledge Bases
Memory Banks
What are memory banks?
Memory banks give agents persistent memory that spans conversations and sessions. There are two types: conversation banks (chat-style history partitioned by key and speaker) and general banks (flat factual entries for structured knowledge). Agents can add, search, and load memory through dedicated step types.
Learn: Memory Banks
How does memory compaction work?
As memory banks grow, automatic compaction summarises older entries to keep memory manageable without losing important context. You can also configure retention policies to control how long entries are kept.
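Conceptually, compaction folds older entries into a summary while leaving recent ones untouched. A minimal sketch, assuming a simple "keep the N most recent" policy (the real policy and the LLM summarisation step are internal to Seclai):

```python
def compact(entries, keep_recent=3, summarise=" / ".join):
    """Collapse everything older than the last `keep_recent` entries into
    one summary entry.

    `summarise` stands in for the LLM summarisation step; the names and
    the join-based placeholder are illustrative, not Seclai's API.
    """
    if len(entries) <= keep_recent:
        return list(entries)  # nothing old enough to compact
    old, recent = entries[:-keep_recent], entries[-keep_recent:]
    return [summarise(old)] + recent
```

A retention policy would be a further step that drops the summary entry itself once it ages past the configured window.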
Learn: Memory Banks
Models
Which AI models are supported?
Seclai supports 90+ models from OpenAI, Anthropic, Google, Amazon, DeepSeek, xAI, Meta, Mistral, Moonshot AI, Qwen, NVIDIA, Cohere, and more. You can swap models at any time without changing your workflow.
Browse: Models & Playground
What is the Model Playground?
The Model Playground lets you test any supported model side-by-side before using it in an agent. Send the same prompt to multiple models simultaneously to compare output quality, latency, and cost.
Browse: Models & Playground
What happens when a model is deprecated?
Seclai tracks model lifecycle events — deprecation announcements, sunset dates, and new version releases. You can configure automatic upgrade strategies with rollout policies and auto-rollback on failure signals. Alert notifications keep you informed of upcoming changes.
Browse: Models & Playground
Safety & Governance
How does Seclai keep AI outputs safe?
Three layers work together: the Prompt Scanner (an ML classifier that detects injection and jailbreaking attacks at every ingress point), Governance (LLM-based policy screening for safety, PII, bias, legal, and brand compliance), and Agent Evaluations (quality scoring to catch regressions). Each layer is independent and can be configured separately.
Read: Safety & Quality Overview
What is the Prompt Scanner?
The Prompt Scanner is an always-on ML classifier that detects prompt injection and jailbreaking attacks. It runs automatically on every user input and on outputs from external sources. It requires zero configuration, has zero LLM cost, and runs in sub-second latency.
Learn: Prompt Scanner
What is governance and how do policies work?
Governance automatically screens agent outputs and source content against your safety, privacy, and compliance policies. Each policy defines what the evaluator checks for (e.g. 'block PII', 'flag biased language') with configurable flag and block thresholds. Policies can be scoped to the account, a specific agent, a step, or a source connection.
Learn: Governance
What happens when governance flags content?
The evaluator produces a confidence score (0.0–1.0) which is compared against the policy's thresholds to produce a Pass, Flag, or Block verdict. Flagged content proceeds but is queued for human review. Blocked content is withheld until a reviewer resolves the evaluation. You can configure policies as blocking (synchronous gate) or non-blocking (async audit).
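The score-to-verdict mapping reduces to two threshold comparisons. A sketch of that logic (whether Seclai's comparisons are inclusive or exclusive at the boundary is an assumption here):

```python
def verdict(confidence: float, flag_at: float, block_at: float) -> str:
    """Map an evaluator confidence score (0.0-1.0) to Pass / Flag / Block
    using a policy's flag and block thresholds.

    Illustrative: boundary semantics (>= vs >) are an assumption.
    """
    if confidence >= block_at:
        return "Block"   # withheld until a reviewer resolves it
    if confidence >= flag_at:
        return "Flag"    # proceeds, but queued for human review
    return "Pass"
```

Lowering `flag_at` catches more borderline content at the cost of a busier review queue; `block_at` should sit well above it.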
Learn: Governance
How do agent evaluations work?
Agent evaluations score step outputs against quality criteria you define. Three modes: manual output expectations (one-off checks), eval-and-retry (every run, auto-retry if below threshold), and sampled monitoring (periodically sample runs and flag quality drift). Evaluations appear inline in agent traces.
Learn: Agent Evaluations
Solutions & AI Assistants
What are solutions?
Solutions group related agents, knowledge bases, content sources, and memory banks into a single project. This makes it easy to manage complex setups that involve multiple resources working together.
Learn: Solutions
What are AI Assistants?
AI Assistants are natural-language interfaces that help you configure Seclai resources. They can generate agent workflows from a description, configure content sources and knowledge bases, manage governance policies, and scaffold entire solutions — all from a single prompt.
Learn: AI Assistants
API & Integrations
How do I access the API?
The REST API supports full CRUD for all resources — agents, knowledge bases, sources, memory banks, governance, and more. Authenticate with an API key (X-API-Key header) or an OAuth Bearer token. Interactive API docs are available at your instance's /docs endpoint.
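Building an authenticated request means attaching the X-API-Key header (or a Bearer token). A minimal stdlib sketch; the host and the `/api/agents` path are hypothetical placeholders, since the exact endpoints live in your instance's /docs:

```python
import urllib.request

# Hypothetical base URL and path -- substitute your instance and the real
# endpoint from your /docs page.
req = urllib.request.Request(
    "https://your-instance.example.com/api/agents",
    headers={"X-API-Key": "YOUR_API_KEY"},
    # OAuth alternative: {"Authorization": "Bearer YOUR_TOKEN"}
)
# urllib.request.urlopen(req) would then send the GET request.
```

The same header works from any HTTP client; the official SDKs wrap this for you.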
Read: API Introduction
What is the MCP Server?
The MCP (Model Context Protocol) Server lets you manage your entire Seclai account from AI coding tools like Claude Desktop, Cursor, and VS Code Copilot. It exposes all major operations as MCP tools — create agents, chat with knowledge bases, manage governance, trigger runs, and more.
Learn: MCP Server
Are there SDKs or a CLI?
Yes. Official SDKs are available for Python, JavaScript, Go, and C#. There's also a CLI for scripting and CI/CD integration that includes built-in skills — pre-packaged workflows you can run in a single command as an efficient alternative to MCP. All SDKs are open source on GitHub.
Browse: SDKs
Plans & Billing
What plans are available?
We offer several plans: Personal and Starter for individuals, Team for collaboration with more credits and advanced features, and Pro for power users and larger organisations. Governance is available on Starter, Team, and Pro plans (the Personal plan does not include it). You can also be invited as a Viewer to access shared knowledge bases.
Understand: Credits & Usage
What are credits and how do they work?
Credits power all AI operations — prompt calls, content processing, governance evaluations, AI assistants, and knowledge base chat. Each plan includes a monthly allocation. Different operations consume different amounts based on the model used and the complexity of the task. You can monitor usage per agent, per model, and per use case from the dashboard.
Learn: Credits & Usage
Can I upgrade or downgrade my plan?
Yes, you can change your plan at any time from your account settings. Upgrades take effect immediately, while downgrades take effect at your next billing cycle. Monthly credits reset each billing cycle and do not roll over.
Learn: Credits & Usage
What happens when I run out of credits?
When credits are exhausted, AI operations (agent runs, chat, governance evaluations) will be paused until your next billing cycle or until you upgrade. You can configure credit threshold alerts to get notified before running out.
Learn: Alerts
Account & Settings
How do organisations work?
Organisations let you share resources with team members. Owners can invite members, assign roles, and manage access. Each organisation has its own knowledge bases, agents, and sources. You can belong to multiple organisations and switch between them.
Learn: Organizations
What alerts are available?
There are 15+ alert types covering agent failures, content polling issues, credit thresholds, model lifecycle events (deprecation, sunset), and governance verdicts. Alerts are delivered via email and in-app notifications. You can configure which alerts you receive from the Alerts settings page.
Learn: Alerts
Can I transfer resources between accounts?
Yes. The resource transfer system moves agents, knowledge bases, sources, and memory banks between accounts with full dependency resolution. It computes the transitive closure (an agent's knowledge bases, sources, memory banks, and sub-agents), detects blockers like name conflicts, and executes the transfer atomically.
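"Transitive closure" here just means following every dependency edge until nothing new appears. A sketch with a plain depth-first walk (the graph shape and resource names are illustrative, not Seclai's data model):

```python
def closure(deps: dict[str, list[str]], root: str) -> set[str]:
    """Return the root resource plus everything it transitively depends on.

    `deps` maps a resource to its direct dependencies; illustrative only.
    """
    seen: set[str] = set()
    stack = [root]
    while stack:
        node = stack.pop()
        if node in seen:
            continue  # already visited; avoids cycles between resources
        seen.add(node)
        stack.extend(deps.get(node, []))
    return seen
```

The transfer system would then check every resource in this set for blockers (such as name conflicts) before moving the whole set atomically.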
Learn: Transferring Resources
Can I export my data?
Yes. You can export knowledge bases, memory banks, solutions, agent traces, agent evaluations, governance policies, and governance evaluations. Exports are available in JSON, JSONL, or CSV depending on the resource type. Use the Export button on any resource page, the REST API, or MCP tools. Agent definitions can also be exported as portable JSON snapshots.
Read: Export Formats
How is my data protected?
All data is encrypted in transit and at rest. We follow industry best practices for security and compliance. You maintain full ownership of your content, and we never use your data to train models or share it with third parties.
Browse: Documentation
How do I delete my account?
You can delete your account from the account settings page. This permanently removes all your data, sources, agents, and associated content. This action cannot be undone — make sure to export any important data before deletion.
Browse: Documentation
Troubleshooting
I'm having trouble logging in
First try resetting your password. Make sure you're using the correct email address and check your spam folder for verification emails. If you signed up with a social login (Google, GitHub), use the same provider to log in. If the issue persists, contact support.
Read: Troubleshooting
My source isn't processing or updating
Check that the URL is accessible and hasn't changed. Some sites have restrictions that prevent automated access (robots.txt, Cloudflare protection). Check the source's error log for specific failure messages. You can also manually trigger a re-poll from the source detail page.
Read: Troubleshooting
My agent is failing or producing unexpected output
Open the agent's trace view to see exactly what happened at each step — input, output, errors, and latency. Common issues: the prompt is too vague, a knowledge base search returned no results, or the model hit a token limit. Check the step configuration and try running with a different model tier.
Learn: Agent Traces
Governance is blocking content I think is fine
Check the evaluation details in the Governance review queue — you'll see the confidence score and reasoning. If the threshold is too aggressive, raise the flag or block threshold on that policy. You can also test policies against sample content before applying them to production.
Learn: Governance
Responses are slow
Response time depends on the model tier (Fast < Balanced < Thorough), the number of steps in the pipeline, and whether governance screening is enabled. Try using a faster model tier or reducing the number of retrieval results (Top N / Top K). For agent pipelines, check the trace to identify which step is the bottleneck.
Read: Troubleshooting