AI Assistants
Seclai includes built-in AI assistants throughout the platform that help you configure resources, generate agent workflows, manage governance policies, and orchestrate entire solutions — all from plain English descriptions. Every assistant is available in the UI, via the API, and through MCP tools.
How They Work
All AI assistants follow the same propose-then-accept pattern:
- You describe what you want in natural language
- The assistant generates a plan — a set of specific, reviewable actions
- You review and accept or decline — accepted plans are executed automatically; declined plans are discarded
This pattern keeps you in control. The assistant never makes changes without your explicit approval. You can also refine your request and try again if the initial plan doesn't match your intent.
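The plan lifecycle above can be sketched as a tiny state machine. This is an illustration only — the class and field names here are assumptions for the sketch, not Seclai's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    """Illustrative model of an assistant-generated plan (hypothetical names)."""
    actions: list
    status: str = "proposed"

    def accept(self):
        if self.status != "proposed":
            raise ValueError("only a proposed plan can be accepted")
        self.status = "executed"          # accepted plans run automatically
        return [f"applied: {a}" for a in self.actions]

    def decline(self):
        if self.status != "proposed":
            raise ValueError("only a proposed plan can be declined")
        self.status = "discarded"         # declined plans are thrown away

plan = Plan(actions=["create agent 'News Digest'", "add step 'summarise'"])
results = plan.accept()
print(plan.status)
```

The key property is that nothing executes from the `proposed` state without an explicit `accept()` — which mirrors the approval gate the assistants enforce.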
Assistants stream their responses in real time via Server-Sent Events, so you see progress as the plan is being generated.
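If you consume the stream programmatically, a minimal SSE parser looks like the sketch below. The event names (`plan_progress`, `plan_complete`) are made up for illustration; see the API documentation for the actual event types:

```python
def parse_sse(stream_text):
    """Parse a Server-Sent Events payload into (event, data) pairs.
    Minimal sketch: handles only the 'event:' and 'data:' fields."""
    events, event, data = [], "message", []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":                  # a blank line terminates one event
            if data:
                events.append((event, "\n".join(data)))
            event, data = "message", []
    return events

raw = "event: plan_progress\ndata: step 1 generated\n\nevent: plan_complete\ndata: 3 steps\n\n"
events = parse_sse(raw)
print(events)
```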
Available Assistants
Each assistant is documented on its resource page. See the linked documentation for full details, examples, and API access.
| Assistant | Where to find it | What it does | Documentation |
|---|---|---|---|
| Agent Workflow Generator | Agent editor → Generate with AI | Creates a complete multi-step agent workflow from a description | Agents → AI Assistants |
| Agent Step Configurator | Individual step editor → AI Assistant button | Refines a single step's configuration (regex, gate conditions, templates) | Agents → Step Configurator |
| Solution Assistant | Solution detail page → AI Assistant | Proposes cross-resource changes to a solution (agents, KBs, sources) | Solutions → AI Assistants |
| Source Assistant | Solution detail page → Sources tab → AI Assistant | Recommends and creates content sources | Solutions → Source AI Assistant |
| Knowledge Base Assistant | Solution detail page → Knowledge Bases tab → AI Assistant | Suggests how to group sources into knowledge bases | Solutions → KB AI Assistant |
| Memory Bank Assistant | Memory bank create/edit page → Use AI assistant | Suggests compaction prompts, thresholds, and retention settings | Memory Banks → AI Assistant |
| Governance Assistant | Governance page → AI Assistant | Creates, updates, and manages governance policies | Governance → AI Assistant |
Best Practices
The following guidelines help you get the best results from all AI assistants.
Writing Effective Prompts
Be specific about what you want, not how to build it.
The assistants understand Seclai's capabilities — you don't need to specify step types, model names, or configuration fields. Describe the outcome you want and let the assistant choose the implementation.
| Less effective | More effective |
|---|---|
| "Create a prompt_call step with retrieval from KB 29246ccc" | "Build a chatbot that answers product questions using my Product Information knowledge base" |
| "Add an insight step with output_format json_object" | "Extract the topic, sentiment, and key quotes from each news article" |
| "Create a gate step with condition score > 0.8" | "Only forward messages that are clearly relevant to our product" |
Name your resources. When referring to knowledge bases, memory banks, or agents, use their names instead of IDs. The assistant resolves names to IDs automatically.
- Good: "Use the Product Information knowledge base"
- Less good: "Use KB 29246ccc-7c97-46c6-ad05-34ce83821c3f"
State your constraints up front. If you have specific requirements, include them in your initial prompt rather than correcting after the fact.
- "Build a news digest agent that emails results — use Claude Haiku for classification and Sonnet for the final summary to keep costs low"
- "Create a chatbot that remembers conversations and responds in JSON format only"
Describe the end-to-end workflow. For complex agents, describe the full pipeline in one prompt. The assistant handles branching, parallel execution, and step ordering.
- "Build an agent that: 1) takes a customer question, 2) searches the product KB, 3) checks if the answer is confident enough, 4) if yes, formats and responds, 5) if no, escalates to a support email"
Include example inputs and outputs. When you have a specific format in mind, show an example:
- "Extract article metadata and return JSON like this: {"title": "...", "author": "...", "date": "...", "tags": ["..."]}"
Mention who will consume the output. This helps the assistant choose the right output format, model, and delivery method:
- "This agent will be called by our mobile app via API" (optimises for JSON, fast models)
- "End users will chat with this agent in the web UI" (optimises for conversational flow, streaming)
- "This runs on a schedule with no human in the loop" (optimises for reliability, structured output, email/webhook delivery)
Working with Plans
Review before accepting. Always read through the proposed plan. Check that:
- The right resources are being created, updated, or deleted
- Names and descriptions make sense for your use case
- For agent workflows: the step order and branching logic match your intent
- For governance: thresholds match your risk tolerance
Decline and refine. If a plan is close but not right, decline it and provide more specific guidance about what to change:
- First attempt: "Build a news monitoring agent"
- Refined: "Build a news monitoring agent — use my World News KB, summarise into bullet points, and only include articles about AI. Email results daily to team@example.com"
Start simple, then add complexity. For complex workflows, consider building in stages:
- Generate the core workflow (e.g., "chatbot that answers questions from my KB")
- Use the step configurator to fine-tune individual steps
- Add additional steps manually or with another generation pass
Use the Solution Assistant for cross-resource work. When you need to create sources, knowledge bases, and agents together, the Solution Assistant handles the dependencies and wiring automatically. The individual resource assistants are better for focused single-resource tasks.
Iterating on Results
Test with real inputs. After accepting a generated agent workflow, test it with actual inputs from the Agent Triggers test panel or the API. Check the Agent Traces to see how each step processed the data.
Fine-tune individual steps. The Agent Step Configurator lets you adjust any step after generation:
- Rewrite prompt templates for better output quality
- Adjust temperature (lower for consistent/factual, higher for creative)
- Change the model (faster/cheaper for simple tasks, more capable for complex ones)
- Tune retrieval top_n (more results for broad questions, fewer for focused ones)
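These knobs combine into a per-step configuration. The sketch below is illustrative only — the field names are assumptions, not Seclai's actual step schema:

```python
# Illustrative step configuration (field names are assumptions, not Seclai's schema)
step_config = {
    "model": "claude-haiku",      # faster/cheaper model for a simple classification step
    "temperature": 0.2,           # low temperature for consistent, factual output
    "retrieval": {"top_n": 3},    # fewer results for a narrowly focused question
}
print(step_config)
```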
Use evaluations for ongoing quality. Set up Agent Evaluations to automatically assess output quality on every run or on a sample of runs. This catches regressions when you change prompts or models.

Check governance first. If your agent handles user input, set up Governance policies before deploying. The Governance AI Assistant can configure appropriate policies in seconds.
MCP and API Access
All AI assistants are available through the MCP Server and the authenticated REST API, enabling programmatic and AI-powered workflows.
MCP tools:
| Tool | Assistant |
|---|---|
| `generate_agent_steps` | Agent Workflow Generator |
| `generate_step_config` | Agent Step Configurator |
| `generate_solution_plan` / `accept_solution_plan` / `decline_solution_plan` | Solution Assistant |
| `generate_source_plan` | Source Assistant |
| `generate_kb_plan` | Knowledge Base Assistant |
| `generate_memory_bank_config` | Memory Bank Assistant |
| `generate_governance_plan` | Governance Assistant (via governance tools) |
All plan-based tools follow the same propose → accept/decline flow. The MCP tools accept the same natural language inputs as the UI.
See MCP Server for the complete tool reference and authentication setup.
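The propose → accept/decline sequence over MCP can be sketched as below. The tool names come from the table above, but the argument fields and responses here are stand-ins, not the real schema — the `call_tool` function simply echoes canned replies in place of a real MCP client:

```python
# Hypothetical MCP tool calls for the Solution Assistant's plan flow.
# Tool names are from the table above; argument fields are assumptions.
def call_tool(name, arguments):
    """Stand-in for a real MCP client call; returns canned replies."""
    if name == "generate_solution_plan":
        return {"plan_id": "plan-123", "actions": ["create KB", "create agent"]}
    if name == "accept_solution_plan":
        return {"plan_id": arguments["plan_id"], "status": "executed"}
    if name == "decline_solution_plan":
        return {"plan_id": arguments["plan_id"], "status": "discarded"}
    raise ValueError(f"unknown tool: {name}")

plan = call_tool("generate_solution_plan",
                 {"prompt": "News digest solution with a World News KB"})
# ...review plan["actions"] before deciding...
result = call_tool("accept_solution_plan", {"plan_id": plan["plan_id"]})
print(result["status"])
```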
Credits
AI assistant interactions consume credits. The cost varies by conversation length and complexity:
- Typical interaction: 2–10 credits per turn
- Complex workflow generation: 5–15 credits
- Governance plans: ~5 credits per request
Credits are categorised as AI Assistant usage in your Credits & Usage dashboard and are separate from agent execution credits.
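A back-of-envelope estimate for budgeting a planning session, using the per-turn range quoted above:

```python
# Estimate credits for a multi-turn assistant session
# using the "typical interaction: 2-10 credits per turn" range above.
turns = 4
low, high = turns * 2, turns * 10
print(f"Estimated assistant credits for {turns} turns: {low}-{high}")
```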