AI Assistants

Seclai includes built-in AI assistants throughout the platform that help you configure resources, generate agent workflows, manage governance policies, and orchestrate entire solutions — all from plain English descriptions. Every assistant is available in the UI, via the API, and through MCP tools.

How They Work

All AI assistants follow the same propose-then-accept pattern:

  1. You describe what you want in natural language
  2. The assistant generates a plan — a set of specific, reviewable actions
  3. You review and accept or decline — accepted plans are executed automatically; declined plans are discarded

This pattern keeps you in control. The assistant never makes changes without your explicit approval. You can also refine your request and try again if the initial plan doesn't match your intent.
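The lifecycle above can be sketched as a small state machine. This is an illustration of the pattern only; Seclai's actual plan objects, statuses, and field names may differ:

```python
from dataclasses import dataclass
from enum import Enum


class PlanStatus(Enum):
    PROPOSED = "proposed"   # generated, awaiting your review
    ACCEPTED = "accepted"   # approved; actions are executed
    DECLINED = "declined"   # rejected; actions are discarded


@dataclass
class Plan:
    """A reviewable set of actions proposed by an assistant (illustrative)."""
    actions: list
    status: PlanStatus = PlanStatus.PROPOSED

    def accept(self) -> list:
        if self.status is not PlanStatus.PROPOSED:
            raise ValueError("only a proposed plan can be accepted")
        self.status = PlanStatus.ACCEPTED
        return self.actions          # executed automatically on acceptance

    def decline(self) -> None:
        if self.status is not PlanStatus.PROPOSED:
            raise ValueError("only a proposed plan can be declined")
        self.status = PlanStatus.DECLINED
        self.actions = []            # discarded; nothing is executed


plan = Plan(actions=["create knowledge base 'Product Information'",
                     "add prompt step to agent 'Support Bot'"])
executed = plan.accept()
```

Note that both transitions are only legal from the proposed state: a declined plan cannot later be accepted, which is why refining your request produces a fresh plan rather than editing the old one.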

Assistants stream their responses in real time via Server-Sent Events, so you see progress as the plan is being generated.
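A minimal SSE parser shows the shape of such a stream. The `progress` and `plan` event names in the sample below are hypothetical placeholders, not Seclai's documented event types:

```python
def parse_sse(stream: str) -> list:
    """Parse a Server-Sent Events payload into event/data records.

    SSE frames are separated by blank lines; each line is 'field: value'.
    """
    events = []
    for frame in stream.strip().split("\n\n"):
        event = {"event": "message", "data": ""}
        for line in frame.splitlines():
            name, _, value = line.partition(": ")
            if name in ("event", "data"):
                event[name] = value
        events.append(event)
    return events


# A hypothetical assistant stream: progress chunks, then the finished plan.
sample = (
    "event: progress\ndata: Drafting step 1 of 3\n\n"
    "event: progress\ndata: Drafting step 2 of 3\n\n"
    "event: plan\ndata: {\"actions\": 3, \"status\": \"proposed\"}\n\n"
)
events = parse_sse(sample)
```

In practice an HTTP client library's streaming support would feed frames to a parser like this incrementally rather than from a complete string.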

Available Assistants

Each assistant is documented on its resource page. See the linked documentation for full details, examples, and API access.

| Assistant | Where to find it | What it does | Documentation |
| --- | --- | --- | --- |
| Agent Workflow Generator | Agent editor → Generate with AI | Creates a complete multi-step agent workflow from a description | Agents → AI Assistants |
| Agent Step Configurator | Individual step editor → AI Assistant button | Refines a single step's configuration (regex, gate conditions, templates) | Agents → Step Configurator |
| Solution Assistant | Solution detail page → AI Assistant | Proposes cross-resource changes to a solution (agents, KBs, sources) | Solutions → AI Assistants |
| Source Assistant | Solution detail page → Sources tab → AI Assistant | Recommends and creates content sources | Solutions → Source AI Assistant |
| Knowledge Base Assistant | Solution detail page → Knowledge Bases tab → AI Assistant | Suggests how to group sources into knowledge bases | Solutions → KB AI Assistant |
| Memory Bank Assistant | Memory bank create/edit page → Use AI assistant | Suggests compaction prompts, thresholds, and retention settings | Memory Banks → AI Assistant |
| Governance Assistant | Governance page → AI Assistant | Creates, updates, and manages governance policies | Governance → AI Assistant |

Best Practices

The following guidelines help you get the best results from all AI assistants.

Writing Effective Prompts

Be specific about what you want, not how to build it.

The assistants understand Seclai's capabilities — you don't need to specify step types, model names, or configuration fields. Describe the outcome you want and let the assistant choose the implementation.

| Less effective | More effective |
| --- | --- |
| "Create a prompt_call step with retrieval from KB 29246ccc" | "Build a chatbot that answers product questions using my Product Information knowledge base" |
| "Add an insight step with output_format json_object" | "Extract the topic, sentiment, and key quotes from each news article" |
| "Create a gate step with condition score > 0.8" | "Only forward messages that are clearly relevant to our product" |

Name your resources. When referring to knowledge bases, memory banks, or agents, use their names instead of IDs. The assistant resolves names to IDs automatically.

  • Good: "Use the Product Information knowledge base"
  • Less good: "Use KB 29246ccc-7c97-46c6-ad05-34ce83821c3f"

State your constraints up front. If you have specific requirements, include them in your initial prompt rather than correcting after the fact.

  • "Build a news digest agent that emails results — use Claude Haiku for classification and Sonnet for the final summary to keep costs low"
  • "Create a chatbot that remembers conversations and responds in JSON format only"

Describe the end-to-end workflow. For complex agents, describe the full pipeline in one prompt. The assistant handles branching, parallel execution, and step ordering.

  • "Build an agent that: 1) takes a customer question, 2) searches the product KB, 3) checks if the answer is confident enough, 4) if yes, formats and responds, 5) if no, escalates to a support email"
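Once generated, that prompt's pipeline would behave roughly like the sketch below. This is a conceptual illustration with stubbed functions and an assumed 0.8 confidence threshold, not Seclai's actual generated configuration:

```python
def run_pipeline(question, search_kb, answer_with_confidence,
                 respond, escalate, threshold=0.8):
    """Conceptual sketch of the gated workflow described in the prompt."""
    context = search_kb(question)                       # 2) search the product KB
    answer, score = answer_with_confidence(question, context)
    if score >= threshold:                              # 3) confidence gate
        return respond(answer)                          # 4) format and respond
    return escalate(question, answer)                   # 5) escalate to support


# Stubbed components stand in for generated steps.
result = run_pipeline(
    "Does the X200 support Bluetooth?",
    search_kb=lambda q: ["product doc snippet"],
    answer_with_confidence=lambda q, c: ("Yes, it does.", 0.92),
    respond=lambda a: f"ANSWER: {a}",
    escalate=lambda q, a: f"ESCALATED: {q}",
)
```

The point of describing the whole pipeline in one prompt is that the assistant wires up this branching for you; you never write code like this yourself.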

Include example inputs and outputs. When you have a specific format in mind, show an example:

  • "Extract article metadata and return JSON like this: `{"title": "...", "author": "...", "date": "...", "tags": ["..."]}`"

Mention who will consume the output. This helps the assistant choose the right output format, model, and delivery method:

  • "This agent will be called by our mobile app via API" (optimises for JSON, fast models)
  • "End users will chat with this agent in the web UI" (optimises for conversational flow, streaming)
  • "This runs on a schedule with no human in the loop" (optimises for reliability, structured output, email/webhook delivery)

Working with Plans

Review before accepting. Always read through the proposed plan. Check that:

  • The right resources are being created, updated, or deleted
  • Names and descriptions make sense for your use case
  • For agent workflows: the step order and branching logic match your intent
  • For governance: thresholds match your risk tolerance

Decline and refine. If a plan is close but not right, decline it and provide more specific guidance about what to change:

  • First attempt: "Build a news monitoring agent"
  • Refined: "Build a news monitoring agent — use my World News KB, summarise into bullet points, and only include articles about AI. Email results daily to team@example.com"

Start simple, then add complexity. For complex workflows, consider building in stages:

  1. Generate the core workflow (e.g., "chatbot that answers questions from my KB")
  2. Use the step configurator to fine-tune individual steps
  3. Add additional steps manually or with another generation pass

Use the Solution Assistant for cross-resource work. When you need to create sources, knowledge bases, and agents together, the Solution Assistant handles the dependencies and wiring automatically. The individual resource assistants are better for focused single-resource tasks.

Iterating on Results

Test with real inputs. After accepting a generated agent workflow, test it with actual inputs from the Agent Triggers test panel or the API. Check the Agent Traces to see how each step processed the data.

Fine-tune individual steps. The Agent Step Configurator lets you adjust any step after generation:

  • Rewrite prompt templates for better output quality
  • Adjust temperature (lower for consistent/factual, higher for creative)
  • Change the model (faster/cheaper for simple tasks, more capable for complex ones)
  • Tune retrieval top_n (more results for broad questions, fewer for focused ones)

Use evaluations for ongoing quality. Set up Agent Evaluations to automatically assess output quality on every run or a sample. This catches regressions when you change prompts or models.

Check governance first. If your agent handles user input, set up Governance policies before deploying. The Governance AI Assistant can configure appropriate policies in seconds.

MCP and API Access

All AI assistants are available through the MCP Server and the authenticated REST API, enabling programmatic and AI-powered workflows.

MCP tools:

| Tool | Assistant |
| --- | --- |
| `generate_agent_steps` | Agent Workflow Generator |
| `generate_step_config` | Agent Step Configurator |
| `generate_solution_plan` / `accept_solution_plan` / `decline_solution_plan` | Solution Assistant |
| `generate_source_plan` | Source Assistant |
| `generate_kb_plan` | Knowledge Base Assistant |
| `generate_memory_bank_config` | Memory Bank Assistant |
| `generate_governance_plan` | Governance Assistant (via governance tools) |

All plan-based tools follow the same propose → accept/decline flow. The MCP tools accept the same natural language inputs as the UI.

See MCP Server for the complete tool reference and authentication setup.
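As a sketch of programmatic use, the snippet below assembles a call to `generate_solution_plan` and the follow-up accept. The argument names (`prompt`, `plan_id`) and payload shape are illustrative assumptions, not Seclai's documented schema; consult the MCP Server reference for the real one:

```python
def build_tool_call(tool: str, **arguments) -> dict:
    """Assemble an MCP-style tool call. Field names here are assumed
    for illustration, not taken from Seclai's documented schema."""
    return {"tool": tool, "arguments": arguments}


# Propose: describe the outcome in natural language, by name rather than ID.
proposal = build_tool_call(
    "generate_solution_plan",
    prompt=("Build a news digest agent using my World News knowledge base; "
            "email a daily summary to team@example.com"),
)

# After reviewing the returned plan, accept (or decline) it by its ID.
decision = build_tool_call("accept_solution_plan", plan_id="plan-123")
```

The same two-phase shape applies to every plan-based tool in the table above: one call to generate, one call to accept or decline.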

Credits

AI assistant interactions consume credits. The cost varies by conversation length and complexity:

  • Typical interaction: 2–10 credits per turn
  • Complex workflow generation: 5–15 credits
  • Governance plans: ~5 credits per request
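Using the ranges above, you can bound the cost of a session before you start. The per-interaction figures below are the documented ranges; the example session itself is hypothetical:

```python
# Documented per-interaction credit ranges (low, high).
TYPICAL_TURN = (2, 10)
COMPLEX_GENERATION = (5, 15)
GOVERNANCE_PLAN = (5, 5)   # ~5 credits per request


def session_estimate(*interactions):
    """Sum (low, high) credit ranges for a sequence of interactions."""
    low = sum(lo for lo, _ in interactions)
    high = sum(hi for _, hi in interactions)
    return low, high


# Example: two refinement turns plus one complex workflow generation.
low, high = session_estimate(TYPICAL_TURN, TYPICAL_TURN, COMPLEX_GENERATION)
# → between 9 and 35 credits
```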

Credits are categorised as AI Assistant usage in your Credits & Usage dashboard and are separate from agent execution credits.