Agent Steps
Steps are the building blocks of an agent workflow. Each agent contains an ordered list of steps that execute sequentially — the output of one step becomes the input to the next. Steps can also reference outputs from any earlier step using substitution variables.
Common Fields
Every step type shares these base fields:
| Field | Type | Required | Description |
|---|---|---|---|
step_type | enum | Yes | The type of step (see sections below) |
id | string | Auto | A unique identifier for the step. Must match ^[a-zA-Z0-9_-]+$. Auto-generated if not provided. Used to reference this step's output from other steps via {{step.id.output}}. |
name | string | No | A human-readable label for the step, shown in the UI |
purpose | string | No | A description of what this step does. Used by the AI assistant to understand context. |
Child Steps (Composite Steps)
Most step types are composite — they can contain nested child steps. Child steps execute after their parent and receive the parent's output as input.
| Field | Type | Default | Description |
|---|---|---|---|
steps | array | [] | Ordered list of child steps to execute after this step |
The following step types support child steps: Prompt Call, Retrieval, Transform, Extract JSON, Extract HTML, Extract XML, Filter, Gate, Combinator, Text, Write Metadata, Write Content Attachment, Load Content Attachment.
The following step types do not support child steps: Display Result, Join, Retry.
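To make the base fields and nesting concrete, here is a sketch of a composite step with one child. This assumes steps are serialized as JSON objects with the field names above; the step_type identifiers and placeholder values are illustrative, not confirmed:

```json
{
  "step_type": "retrieval",
  "id": "search_docs",
  "name": "Search documentation",
  "purpose": "Find context for the user's question",
  "knowledge_base_id": "<knowledge-base-uuid>",
  "steps": [
    {
      "step_type": "prompt_call",
      "id": "summarize",
      "model": "<model-id>",
      "prompt_template": "Summarize these search results:\n{{input}}"
    }
  ]
}
```

Here the child prompt call runs after the retrieval completes and receives the retrieval results as its {{input}}.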
String Substitutions
Most step fields support dynamic variable substitution using {{placeholder}} syntax. This lets steps reference agent input, other steps' outputs, metadata, and runtime values.
Available Variables
| Variable | Description |
|---|---|
{{input}} | The current step's input (output of the previous step, or agent input for the first step) |
{{agent.input}} | The original agent input (always the initial input, regardless of step position) |
{{agent.name}} | The agent's name |
{{agent.id}} | The agent's unique ID |
{{agent.run_id}} | The current run's unique ID |
{{agent_step.id}} | The current step's ID |
{{agent_step.name}} | The current step's name |
{{step.<step_id>.output}} | Output from a previous step with the given ID |
{{step.<step_id>.input}} | Input to a previous step with the given ID |
{{metadata.<field>}} | A metadata field value from the trigger |
{{knowledge_base.name}} | Knowledge base name (in retrieval context) |
{{knowledge_base.description}} | Knowledge base description (in retrieval context) |
{{organization.name}} | Organization name (falls back to user name) |
{{organization.description}} | Organization description |
{{user.name}} | Name of the user who initiated the run |
Date & Time Variables
Date and time variables accept an IANA timezone and an optional locale code. Currently, the locale code selects only the format pattern (for example, ordering and separators); month and day names are always rendered in English.

| Variable | Format / Behavior | Example Output |
|---|---|---|
{{date UTC}} | YYYY-MM-DD | 2026-02-17 |
{{date America/New_York}} | YYYY-MM-DD | 2026-02-17 |
{{date Europe/London en}} | Locale-specific date pattern (English month/day names) | February 17, 2026 |
{{date Asia/Tokyo ja}} | Locale-specific date pattern (English month/day names) | February 17, 2026 |
{{time UTC}} | HH:MM:SS | 14:30:00 |
{{time America/New_York}} | HH:MM:SS | 09:30:00 |
{{datetime UTC}} | YYYY-MM-DD HH:MM:SS | 2026-02-17 14:30:00 |
{{datetime Europe/Paris fr}} | Locale-specific datetime pattern (English month/day names) | February 17, 2026 03:30:00 PM |
Function-style syntax is also supported: {{datetime(UTC)}} or {{datetime(UTC, es)}}. As with the inline syntax, the locale argument currently affects only the pattern, not the language of month or day names.
Unrecognized placeholders are left unchanged in the output, making it safe to include template syntax that should not be resolved.
Prompt Call Step
Calls an AI model to generate a response. This is the most versatile step type, supporting 50+ models across multiple providers with features like structured JSON output, tool calling, and response formatting.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
model | enum | Yes | — | The AI model to use (see supported models below) |
model_variant | string | No | null | Model variant identifier, if applicable |
prompt_template | string or object | No | null | The prompt sent to the model. Supports substitutions. If not set, the step's input is used as the prompt. |
system_template | string | No | null | System instructions for the model. Sets the model's behavior and context. Supports substitutions. |
temperature | float | No | null (model default) | Controls randomness of the response. Range: 0.0 (deterministic) to 1.0 (creative). |
max_tokens | integer | No | null (model default) | Maximum number of tokens in the response |
simple_format | boolean | No | true | Controls the prompt input format. When true, uses a plain text prompt and system prompt (like ChatGPT/Claude). When false, uses an advanced JSON payload that leverages the model's native API features. |
json_template | object | No | null | The JSON payload sent to the model when simple_format is false. Uses the model's native prompt format, allowing access to provider-specific features. Supports substitutions. |
tools | array | No | [] | List of tool IDs the model can call during generation (see tools) |
formatting_cleanup | enum | No | null | Post-processing to apply to the model's output |
extended_caching_days | integer | No | null | Number of days to cache results (must be > 0) |
cache_all_minutes | integer | No | null | Cache all identical requests for N minutes (1–60) |
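As a sketch, a minimal prompt call step using the simple format might look like this (a hypothetical JSON serialization; the field names come from the table above, but the step_type value and model identifier are placeholders):

```json
{
  "step_type": "prompt_call",
  "id": "summarize",
  "model": "<model-id>",
  "simple_format": true,
  "system_template": "You are a concise summarizer for {{organization.name}}.",
  "prompt_template": "Summarize this article:\n{{input}}",
  "temperature": 0.2,
  "max_tokens": 500,
  "formatting_cleanup": "convert_markdown_to_html"
}
```

Because simple_format is true, json_template is omitted; the prompt and system templates are sent as plain text.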
Supported Models
Seclai supports models from these providers:
| Provider | Models |
|---|---|
| OpenAI | GPT-5.2, GPT-5.1, GPT-5, GPT-5 Pro, GPT-5 Mini, GPT-5 Nano, GPT-4.1, GPT-4.1 Mini, GPT-4.1 Nano, GPT OSS 120B, GPT OSS 20B |
| Anthropic | Claude Opus 4.6, Claude Opus 4.5, Claude Opus 4.1, Claude Sonnet 4.5, Claude Haiku 4.5, Claude 3.5 Haiku |
| Google | Gemini 3 Pro, Gemini 3 Flash, Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.5 Flash Lite |
| Amazon | Nova Premier, Nova Pro, Nova 2 Lite, Nova Lite, Nova Micro |
| DeepSeek | DeepSeek 3.1, DeepSeek R1 |
| xAI (Grok) | Grok 4.1 Fast (Reasoning/Non-Reasoning), Grok 4 Fast (Reasoning/Non-Reasoning), Grok Code Fast 1 |
| Meta | Llama 4 Maverick, Llama 4 Scout |
| Mistral | Mistral Large 3, Magistral Small, Ministral 14B/8B/3B |
| Moonshot AI | Kimi K2.5, Kimi K2, Kimi K2 Thinking |
| Qwen | Qwen3 235B, Qwen3 32B, Qwen3 Coder 480B, Qwen3 Next 80B, Qwen3 Next 80B Thinking |
| NVIDIA | Nemotron Nano 12B VL, Nemotron Nano 9B |
| Cohere | Command R+, Command R |
| Pixtral | Pixtral Large |
Tools (Function Calling)
Prompt call steps can use tools — functions that the AI model can invoke during generation to retrieve additional information. This enables the model to dynamically search knowledge bases, load documents, and more.
Available tools:
| Tool | Description |
|---|---|
search_knowledge_base | Semantic similarity search on a knowledge base. Returns relevant content chunks. |
list_knowledge_bases | Lists available knowledge bases (discovery mode) |
load_content | Load the full text of a source document |
peek_content | Read a character range from a document |
grep_content | Search within a document for text occurrences |
get_content_stats | Get document statistics (length, line count, word count, content type) |
list_content_sources | List available content sources and their recent content items |
When tools are configured, the model can autonomously decide when to call them based on the conversation context. For example, when asked a question, the model might search a knowledge base, load a specific document, and then synthesize an answer.
Formatting Cleanup
Apply automatic post-processing to the model's response:
| Value | Description |
|---|---|
convert_markdown_to_html | Converts Markdown output to HTML |
convert_html_to_markdown | Converts HTML output to Markdown |
plain_text_only | Strips all formatting, returning plain text |
Simple vs Advanced Format
The simple_format field controls how the prompt is sent to the model:
Simple format (simple_format: true, default):
- Uses prompt_template (string) and system_template (string)
- Works like interactive chat interfaces (ChatGPT, Claude)
- Portable across model providers with little or no modification
- json_template must not be provided
Advanced format (simple_format: false):
- Uses json_template (JSON object) — the model's native prompt payload
- Gives access to provider-specific features and fine-grained control
- May require modifications when switching between model providers
- prompt_template must not be provided
Example advanced format JSON template:
{
"messages": [
{
"role": "system",
"content": "You are an expert analyst for {{organization.name}}."
},
{
"role": "user",
"content": "Analyze this: {{agent.input}}"
}
]
}
String substitutions (e.g., {{agent.input}}, {{metadata.*}}) are supported inside both prompt_template and json_template.
Use Case Examples
Content summarization:
System: You are a concise content summarizer. Output a 2-3 sentence summary.
Prompt: Summarize this article:
{{agent.input}}
Question answering with context (RAG):
System: You are a helpful assistant for {{organization.name}}.
Answer questions using ONLY the provided context.
If the answer is not in the context, say "I don't have that information."
Prompt: Context:
{{step.retrieval.output}}
Question: {{agent.input}}
Structured data extraction:
System: Extract the following information from the text and return as JSON.
Prompt: {{agent.input}}
With json_template:
{
"type": "object",
"properties": {
"company_name": { "type": "string" },
"revenue": { "type": "string" },
"employees": { "type": "integer" },
"headquarters": { "type": "string" }
}
}
Multi-language response:
System: You are a translator. Translate the input to {{metadata.target_language}}.
Maintain the original formatting and tone.
Prompt: {{agent.input}}
AI Assistant
The prompt call step includes an AI assistant that can generate a complete configuration based on a natural-language description of what you want the step to do. Click the AI Assistant button (requires a model to be selected first) to open the assistant modal.
The assistant can set all prompt call fields:
- Model and model variant — selects a cost-effective model that fits the task (e.g. tool use, structured output, long context)
- Mode — chooses between simple mode (prompt + system prompt) and JSON mode (vendor-native payload)
- Prompt template and system template — writes templates using substitution variables like {{input}} and {{step.<id>.output}}
- Temperature and max tokens — tunes generation parameters for the task
- Tools — enables tool use (e.g. web search) when the task requires it
- JSON template — builds vendor-specific payloads when JSON mode is appropriate
- Formatting cleanup — selects post-processing (Markdown → HTML, HTML → Markdown, plain text only)
The assistant is aware of the full agent workflow context — all steps, the trigger configuration, and the knowledge bases available — so it can reference outputs from earlier steps and tailor the prompt to the agent's purpose.
Each response also includes a purpose field (a brief description of the step's intent) and a note explaining what was changed or asking a follow-up question.
Retrieval Step
Searches a knowledge base using semantic similarity (vector search) to find the most relevant content for a given query. This is the foundation of RAG (Retrieval-Augmented Generation) workflows.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
knowledge_base_id | UUID | Yes | — | The knowledge base to search |
query | string | No | null | The search query. Supports substitutions. If not set, the step's input is used as the query. |
top_n | integer | No | 20 | Number of results to return (1–100) |
reranker_model | enum | No | null | Reranker model to apply after initial retrieval (see below) |
top_k | integer | No | null | Number of results to keep after reranking (1–100). Only used with a reranker model. |
content_type | enum | No | application/json | Output format: application/json or text/plain |
filter | object | No | null | Metadata filter to narrow results (MongoDB-style query, see Metadata Filters) |
added_after | string | No | null | Only return content added after this date/time. Supports substitutions. |
added_before | string | No | null | Only return content added before this date/time. Supports substitutions. |
extended_caching_days | integer | No | null | Number of days to cache results (must be > 0) |
cache_all_minutes | integer | No | null | Cache all identical requests for N minutes (1–60) |
include_attachments | boolean | No | true | When enabled, indexed content attachments are included in retrieval results alongside the main content body. Disable to limit results to original content only. |
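Putting the fields together, a retrieval step with filtering and reranking might be configured like this (a hypothetical JSON serialization; field names are from the table above, and the placeholder values are illustrative):

```json
{
  "step_type": "retrieval",
  "id": "find_context",
  "knowledge_base_id": "<knowledge-base-uuid>",
  "query": "{{agent.input}}",
  "top_n": 50,
  "reranker_model": "<reranker-id>",
  "top_k": 10,
  "content_type": "text/plain",
  "filter": { "category": { "$eq": "technology" } },
  "added_after": "2026-02-10"
}
```

This fetches 50 candidates matching the filter, reranks them, and keeps the 10 best as plain text.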
Reranker Models
Rerankers improve result quality by re-scoring the initial retrieval results using a cross-encoder model. The retrieval first fetches top_n results, then the reranker re-scores them and returns the top top_k.
| Model | Description |
|---|---|
| Qwen3 Reranker (0.6B) | Small but capable reranker model |
| Amazon Rerank v1 | AWS Bedrock-hosted reranker |
| Cohere Reranker v3.5 | High-quality reranker from Cohere |
Output Format
When content_type is application/json (default), the output is a JSON array of matching documents with their metadata and relevance scores.
When content_type is text/plain, the output is the concatenated text of matching documents, suitable for direct use in prompts.
Use Case Examples
Basic semantic search:
- Knowledge base: Product documentation
- Query: {{agent.input}}
- Top N: 10
- Content type: text/plain
The step searches for content semantically similar to the user's input and returns the top 10 matches as plain text.
Filtered retrieval with reranking:
- Knowledge base: News articles
- Query: {{agent.input}}
- Filter: {"category": {"$eq": "{{metadata.category}}"}}
- Top N: 50
- Reranker: Cohere v3.5
- Top K: 10
- Added after: {{metadata.start_date}}
First retrieves 50 candidates matching the category filter added after the start date, then reranks them to select the 10 most relevant.
Time-bounded retrieval:
- Knowledge base: RSS feed content
- Query: Latest news about AI
- Added after: 2026-02-10
- Added before: 2026-02-17
- Top N: 20
Retrieves only content added within the specified date range.
AI Assistant
The retrieval step includes an AI assistant that can generate a complete configuration from a natural-language description. Click the AI Assistant button to open the assistant modal.
The assistant can set all retrieval fields:
- Knowledge base — selects the most appropriate knowledge base when none is chosen, or suggests switching when another KB better matches the request
- Query — writes query templates using substitution variables like {{input}} and {{step.<id>.output}}
- Top N / Top K — sets retrieval and reranking counts based on use case
- Reranker model — recommends a reranker when high relevance or multilingual content is needed
- Time range — sets added_after/added_before filters using date templates
- Metadata filter — builds MongoDB-style filter objects using known metadata fields from the knowledge base
- Content type — selects the output format (JSON or plain text)
The assistant is aware of the available knowledge bases (their names, descriptions, source counts, and detected metadata fields), the available reranker models, and the full agent workflow context. When no knowledge base is selected, the assistant will recommend one based on your request.
Text Step
Produces output from a template string with variable substitutions. This is the simplest way to construct formatted text, combine outputs from multiple steps, or create static content.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
template | string | Yes | — | The template string to render. Supports all substitution variables. |
content_type | enum | No | null | Output content type: text/plain, text/html, application/json, application/xml. When null, inherits from input. |
Use Case Examples
Composing a prompt from multiple sources:
Here is the user's question:
{{agent.input}}
Here is the relevant context from our knowledge base:
{{step.retrieval.output}}
Here is the user's profile:
Name: {{metadata.user_name}}
Plan: {{metadata.plan}}
Generating a JSON payload:
Template:
{
"agent_id": "{{agent.id}}",
"run_id": "{{agent.run_id}}",
"result": "{{step.analysis.output}}",
"timestamp": "{{datetime UTC}}"
}
Content type: application/json
Creating an HTML report:
Template:
<h1>Daily Report — {{date America/New_York}}</h1>
<h2>Summary</h2>
<div>{{step.summary.output}}</div>
<h2>Key Findings</h2>
<div>{{step.findings.output}}</div>
<footer>Generated by {{agent.name}} at {{time America/New_York}}</footer>
Content type: text/html
AI Assistant
The AI assistant can help generate text step templates. Describe what output you want, and it will create a template using the appropriate substitution variables based on your agent's step structure.
Transform Step
Transforms text using an ordered list of regex-based substitution rules. Each rule applies a pattern match and optional replacement to the step's input, and rules execute sequentially.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
rules | array | Yes | — | An ordered list of transformation rules to apply |
Each rule has:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
pattern | string | Yes | — | A regex pattern to match in the input text |
substitution | string | No | null | The replacement string. If null, matched text is removed. Supports regex capture groups (\1, \2, etc.) and substitution variables. |
comment | string | No | null | A human-readable description of what this rule does |
Rules are applied sequentially — each rule operates on the result of the previous rule.
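For example, a two-rule cleanup expressed as a rules array might look like this (a hypothetical JSON serialization; the rule fields match the tables above). Note that regex backslashes must be escaped inside JSON strings:

```json
{
  "step_type": "transform",
  "id": "normalize_whitespace",
  "rules": [
    {
      "pattern": "\\r\\n",
      "substitution": "\n",
      "comment": "Normalize line endings"
    },
    {
      "pattern": "\\n{3,}",
      "substitution": "\n\n",
      "comment": "Collapse excessive blank lines"
    }
  ]
}
```

The second rule runs on the output of the first, so line endings are already normalized before blank lines are collapsed.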
Use Case Examples
Clean HTML tags from text:
| Pattern | Substitution | Comment |
|---|---|---|
<[^>]+> | (empty) | Remove all HTML tags |
&amp; | & | Decode HTML entity
&lt; | < | Decode HTML entity
&gt; | > | Decode HTML entity
Extract email addresses:
Transform rules apply substitutions in place rather than collecting matches, so extracting scattered values such as email addresses is usually better handled by a Prompt Call step. Reserve transform rules for cleanup, normalization, and redaction tasks like the examples here.
Normalize whitespace:
| Pattern | Substitution | Comment |
|---|---|---|
\r\n | \n | Normalize line endings |
[ \t]+ | (single space) | Collapse multiple spaces/tabs
\n{3,} | \n\n | Collapse excessive blank lines |
Redact sensitive data:
| Pattern | Substitution | Comment |
|---|---|---|
\b\d{3}-\d{2}-\d{4}\b | [SSN REDACTED] | Redact Social Security numbers |
\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z]{2,}\b | [EMAIL REDACTED] | Redact email addresses |
\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b | [CARD REDACTED] | Redact credit card numbers |
AI Assistant
The AI assistant can generate transform rules for you. Describe the transformation you want (e.g., "remove all URLs from the text" or "extract only the first paragraph"), and it will produce the appropriate regex patterns and substitutions.
The assistant generates rules with:
- pattern — The regex to match
- substitution — The replacement text (or null to remove)
- comment — An explanation of what the rule does
Extract JSON Step
Extracts and validates JSON data from text input. Useful for parsing AI model responses that contain JSON, or extracting structured data from mixed text-and-JSON content.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
expected_type | enum | Yes | — | The expected JSON structure: array or object |
json_path | string | No | null | A JSONPath expression to extract a specific value from the parsed JSON |
How It Works
- The step scans the input text for JSON content
- It validates that the found JSON matches the expected_type
- If json_path is provided, it extracts the value at that path
- The extracted JSON is output as a string
This step is particularly useful after a Prompt Call step that uses json_template — it ensures the output is valid JSON and can extract specific fields.
Use Case Examples
Parse a model's JSON response:
If a prompt call returns:
Here is the analysis:
{"sentiment": "positive", "score": 0.92, "topics": ["AI", "technology"]}
Extract JSON step with expected_type: object will output:
{ "sentiment": "positive", "score": 0.92, "topics": ["AI", "technology"] }
Extract a specific field with JSONPath:
With json_path: $.topics, the output would be:
["AI", "technology"]
Validate array output:
If you expect a list of items from a model, set expected_type: array to ensure the output is a valid JSON array. The step will fail if the model returns an object instead.
Extract HTML Step
Extracts content from HTML input using CSS-like tag matching. Useful for web scraping results, parsing HTML content from sources, or extracting specific elements from HTML documents.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
expected_tag | string | No | null | The HTML tag to extract (e.g., article, div, p). If null, extracts the entire HTML body content. |
Use Case Examples
Extract article content:
With expected_tag: article, given input:
<html>
<head>
<title>News</title>
</head>
<body>
<nav>...</nav>
<article>
<h1>Breaking News</h1>
<p>Important story details...</p>
</article>
<footer>...</footer>
</body>
</html>
Output:
<article>
<h1>Breaking News</h1>
<p>Important story details...</p>
</article>
Extract all paragraphs:
With expected_tag: p, extracts all <p> elements from the HTML.
Extract XML Step
Extracts data from XML input using XPath expressions. Ideal for processing RSS feeds, API responses in XML format, or any structured XML data.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
xml_path | string | No | null | An XPath expression to select specific nodes from the XML |
expected_tag | string | No | null | The expected root element tag name |
Use Case Examples
Extract RSS feed items:
With xml_path: //item, given an RSS feed:
<rss>
<channel>
<title>My Feed</title>
<item>
<title>First Article</title>
<description>...</description>
</item>
<item>
<title>Second Article</title>
<description>...</description>
</item>
</channel>
</rss>
Output: The matched <item> elements.
Extract specific elements:
With xml_path: //channel/title, extracts the feed title.
Gate Step
Evaluates conditions against the step's input to decide whether subsequent steps should execute. A gate acts as a conditional branch point — when conditions are met (or not met, depending on configuration), the gate either passes the input through or blocks execution.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
conditions | array | Yes | — | The list of conditions to evaluate |
match | enum | No | all | How to combine condition results: all (AND — every condition must match) or any (OR — at least one must match) |
on_match | enum | No | continue | What to do when conditions are met: continue (pass input through) or stop (block execution, output is empty) |
Each condition has:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
target | string | Yes | — | What to evaluate. Built-in targets: input (step input text), input_length (character count), input_content_type (MIME type). Also supports metadata.<field> for metadata values. |
operator | enum | Yes | — | The comparison operator (see below) |
value | any | No | null | The value to compare against. Supports substitution variables. |
value_type | enum | No | null | How to interpret the value: number, date, datetime, or relative_time |
comment | string | No | null | A description of what this condition checks |
Condition Operators
| Operator | Description | Example |
|---|---|---|
$eq | Equals | target == value |
$ne | Not equals | target != value |
$lt | Less than | target < value |
$lte | Less than or equal | target <= value |
$gt | Greater than | target > value |
$gte | Greater than or equal | target >= value |
$in | In list | target in [value1, value2, ...] |
$nin | Not in list | target not in [value1, value2, ...] |
$regex | Matches regex pattern | re.search(value, target) |
$not_regex | Does not match regex | not re.search(value, target) |
$empty | Is empty or null | target is None or target.strip() == "" |
$not_empty | Is not empty | target is not None and target.strip() != "" |
For $lt, $lte, $gt, $gte: numeric comparison is attempted first; if both values are not numeric, string comparison is used.
Value Types
The value_type field controls how the comparison value is interpreted:
| Value Type | Description | Example Values |
|---|---|---|
number | Treat as a numeric value | 100, 3.14 |
date | Treat as a date | 2026-02-17 |
datetime | Treat as a datetime | 2026-02-17T14:30:00 |
relative_time | Parse as a relative time expression | now, today, yesterday, 3 days ago, 1 week ago, 2 hours ago, 5 days from now |
Relative time expressions are resolved at execution time. Supported expressions include:
- now — Current datetime
- today — Start of today
- yesterday — Start of yesterday
- this week / last week — Start of current/previous week
- N days ago / N hours ago / N minutes ago
- N days from now / N hours from now
Use Case Examples
Only process non-empty input:
| Condition | Target | Operator | Value |
|---|---|---|---|
| 1 | input | $not_empty | — |
Match: all, On match: continue
Filter by content length (skip short content):
| Condition | Target | Operator | Value | Value Type |
|---|---|---|---|---|
| 1 | input_length | $gt | 100 | number |
Match: all, On match: continue — Only continue if input is longer than 100 characters.
Route by category (only process technology articles):
| Condition | Target | Operator | Value |
|---|---|---|---|
| 1 | metadata.category | $eq | technology |
Block specific content types:
| Condition | Target | Operator | Value |
|---|---|---|---|
| 1 | input_content_type | $in | ["text/html", "application/xml"] |
Match: all, On match: stop — Block HTML and XML content from proceeding.
Time-based gating (only process recent content):
| Condition | Target | Operator | Value | Value Type |
|---|---|---|---|---|
| 1 | metadata.published_date | $gt | 7 days ago | relative_time |
Match: all, On match: continue — Only process content published in the last 7 days.
Complex multi-condition gate:
| Condition | Target | Operator | Value |
|---|---|---|---|
| 1 | input | $not_empty | — |
| 2 | input_length | $gt | 50 |
| 3 | metadata.status | $ne | draft |
| 4 | metadata.language | $in | ["en", "es", "fr"] |
Match: all, On match: continue — All four conditions must be true to proceed.
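The multi-condition gate above might be written as the following configuration (a hypothetical JSON serialization; the condition fields match the tables in this section):

```json
{
  "step_type": "gate",
  "id": "quality_gate",
  "match": "all",
  "on_match": "continue",
  "conditions": [
    { "target": "input", "operator": "$not_empty" },
    { "target": "input_length", "operator": "$gt", "value": 50, "value_type": "number" },
    { "target": "metadata.status", "operator": "$ne", "value": "draft" },
    { "target": "metadata.language", "operator": "$in", "value": ["en", "es", "fr"] }
  ]
}
```

If any condition fails, the gate blocks execution and its child steps never run.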
AI Assistant
The AI assistant can generate gate conditions for you. Describe the filtering logic you need (e.g., "only process articles longer than 200 characters about technology"), and it will create the appropriate conditions with the correct operators and match mode.
Combinator Step
Merges outputs from multiple parallel branches into a single output using a template. Combinators work with Join steps to collect results from different processing paths and combine them.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
combinator_mode | string | No | custom | How to generate the output template (see modes below) |
combinator_xml_tag | string | No | output | Custom XML tag name when using xml_custom_tag mode |
output_template | string | No | "" | The template string for custom mode. Supports all substitution variables. |
content_type | enum | No | null | Output content type: text/plain, text/html, application/json, application/xml. When null, inherits from input. |
Combinator Modes
| Mode | Description |
|---|---|
custom | You write the output template directly using substitution variables |
exclusive | Passes through the first non-empty parent output |
xml_custom_tag | Wraps each parent output in XML tags using combinator_xml_tag as the tag name |
xml_step_ids | Wraps each parent output in XML tags using the step IDs as tag names |
json_array | Combines parent outputs into a JSON array |
json_object | Combines parent outputs into a JSON object keyed by step IDs |
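For instance, a combinator that wraps each branch's output in a custom XML tag might be configured like this (a hypothetical JSON serialization; fields from the tables above):

```json
{
  "step_type": "combinator",
  "id": "combine",
  "combinator_mode": "xml_custom_tag",
  "combinator_xml_tag": "branch",
  "content_type": "application/xml"
}
```

With two parent branches, the output would contain two <branch>…</branch> blocks, one per parent output.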
Using Join Steps with Combinators
A Join step connects a parallel branch to a combinator. The join step's target field specifies which combinator it feeds into. The combinator waits for all connected join steps to complete before executing.
Example: Parallel processing with combinator
Agent Input
├── Step: retrieval (search KB)
│ └── Step: summarize (prompt call)
│ └── Step: join_summary (join → combine)
├── Step: extract_entities (prompt call)
│ └── Step: join_entities (join → combine)
└── Step: combine (combinator)
└── Step: final_output (display result)
In this workflow:
- Two branches run in parallel: one retrieves and summarizes, the other extracts entities
- Both branches have join steps targeting combine
- The combinator waits for both joins to complete
- The combinator merges the results using its output template
Custom output template for the combinator:
Summary: {{step.summarize.output}}
Entities: {{step.extract_entities.output}}
Use Case Examples
Merge analysis results (custom mode):
Output template:
{
"summary": "{{step.summarize.output}}",
"sentiment": "{{step.sentiment.output}}",
"entities": {{step.entities.output}},
"generated_at": "{{datetime UTC}}"
}
Content type: application/json
Wrap outputs in XML (xml_step_ids mode):
Auto-generated output:
<summarize>The article discusses...</summarize>
<entities>["AI", "machine learning"]</entities>
AI Assistant
The AI assistant can generate combinator output templates. Describe the structure you want for your combined output, and it will create a template referencing the appropriate step outputs.
Join Step
Connects a parallel branch to a Combinator step. The join step is a transparent relay — its output is identical to its input. Its purpose is to signal to the combinator that a branch has completed.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
target | string | Yes | — | The ID of the combinator step this join feeds into. Must match ^[a-zA-Z0-9_-]+$. |
Join steps cannot have child steps.
Use Case Example
See the Combinator step section for a complete example of how join steps connect parallel branches to a combinator.
Filter Step
Filters documents from a knowledge base based on metadata criteria using MongoDB-style query filters. Supports complex conditions with logical operators, sorting, and pagination.
Note: The filter step is currently defined in the schema but execution is not yet implemented. Use the Retrieval step with a filter field for metadata-based filtering.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
filter | object | Yes | — | MongoDB-style metadata filter (see Metadata Filters) |
order_by | array | No | null | Sort order for results |
limit | integer | No | null | Maximum number of results to return (must be > 0) |
offset | integer | No | null | Number of results to skip (must be ≥ 0) |
Each order_by entry has:
| Field | Type | Description |
|---|---|---|
field | string | The metadata field to sort by |
direction | enum | asc (ascending) or desc (descending) |
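Once implemented, a filter step might be configured like this (a hypothetical JSON serialization; fields from the tables above, with illustrative values):

```json
{
  "step_type": "filter",
  "id": "recent_tech_articles",
  "filter": { "category": { "$eq": "technology" } },
  "order_by": [ { "field": "published_date", "direction": "desc" } ],
  "limit": 25,
  "offset": 0
}
```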
Display Result Step
Marks a step's output as the final visible result of the agent run. The output of this step is what users see in the UI and what is returned in the API response.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
template | string | No | null | A template for the output. If null, the step's input is used directly. Supports substitutions. |
html_format | boolean | No | true | When true, the output is treated as HTML for rendering in the UI |
Display result steps cannot have child steps.
Use Case Examples
Simple pass-through:
No template, html_format: true — Displays the previous step's output as HTML.
Formatted output with template:
Template:
<div class="result">
<h2>Analysis Results</h2>
{{step.analysis.output}}
<p><em>Generated on {{date UTC}} by {{agent.name}}</em></p>
</div>
Plain text output:
Template: {{step.final.output}}
html_format: false — Returns plain text without HTML rendering.
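The templates above rely on `{{placeholder}}` substitution. A minimal sketch of how such substitution can be resolved is shown below; the resolution rules here are simplified assumptions, since the real engine also resolves step outputs, metadata, and date/time variables.

```python
import re

# Minimal sketch of {{placeholder}} substitution for Display Result
# templates. Unknown placeholders are left intact (an assumption).

def render(template, variables):
    """Replace each {{name}} with its value from `variables`."""
    def sub(match):
        key = match.group(1).strip()
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{(.*?)\}\}", sub, template)

html = render(
    "<h2>Results</h2>{{step.analysis.output}} by {{agent.name}}",
    {"step.analysis.output": "<p>All clear.</p>", "agent.name": "Daily Digest"},
)
# → '<h2>Results</h2><p>All clear.</p> by Daily Digest'
```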
Webhook Call Step
Makes an HTTP request to an external URL. Use this to integrate with external APIs, trigger workflows in other systems, post to messaging platforms, or send data to any HTTP endpoint.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
url | string | Yes | — | The endpoint URL (must start with http:// or https://). Supports substitutions. |
method | enum | No | POST | HTTP method: POST or PUT |
content_type | enum | No | application/json | Request body content type: application/json, text/plain, text/html, application/xml |
headers | object | No | null | Custom HTTP headers as key-value pairs. Header values support substitutions. |
payload | string | No | null | The request body. If null, the step's input is sent as the body. Supports substitutions. |
Use Case Examples
Post to Slack:
URL: https://hooks.slack.com/services/YOUR/WEBHOOK/URL
Method: POST
Content-Type: application/json
Payload:
{
"text": "Agent {{agent.name}} completed: {{step.summary.output}}"
}
Send to a custom API:
URL: https://api.example.com/ingest
Method: POST
Content-Type: application/json
Headers:
Authorization: Bearer {{metadata.api_token}}
X-Source: seclai-agent
Payload:
{
"agent_id": "{{agent.id}}",
"run_id": "{{agent.run_id}}",
"result": {{step.extract_json.output}},
"processed_at": "{{datetime UTC}}"
}
Trigger an external workflow:
URL: https://automation.example.com/webhooks/{{metadata.workflow_id}}
Method: POST
Content-Type: application/json
Payload:
{
"event": "content_processed",
"data": "{{step.final.output}}"
}
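In plain Python, the "trigger an external workflow" call above amounts to roughly the following. The URL and payload are placeholders from the example, and the runner's `{{...}}` substitution step is omitted here.

```python
import json
import urllib.request

# Sketch of what the webhook call sends. The endpoint URL is the
# placeholder from the example above; substitution is assumed done.
payload = json.dumps({
    "event": "content_processed",
    "data": "summary text",
}).encode("utf-8")

req = urllib.request.Request(
    "https://automation.example.com/webhooks/wf-123",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; skipped here.
```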
Write AWS S3 Object Step
Saves content to an AWS S3 bucket. Use this to archive agent results, store generated reports, export data, or create file backups.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
bucket_name | string | Yes | — | The S3 bucket name. Supports substitutions. |
object_key | string | Yes | — | The S3 object key (file path). Supports substitutions. |
content_type | enum | No | text/plain | Content type for the stored object: text/plain, text/html, application/json, application/xml |
content | string | No | null | The content to write. If null, the step's input is used. Supports substitutions. |
Use Case Examples
Store a daily report:
Bucket: my-reports-bucket
Key: reports/{{date UTC}}/daily-summary.html
Content Type: text/html
Content: (uses step input — the HTML report from previous step)
Archive agent results as JSON:
Bucket: agent-results
Key: agents/{{agent.id}}/runs/{{agent.run_id}}/output.json
Content Type: application/json
Content: {{step.extract_json.output}}
Organize by metadata:
Bucket: content-exports
Key: {{metadata.category}}/{{metadata.article_id}}/analysis.json
Content Type: application/json
Send Email Step
Sends an email notification. Use this to deliver reports, send alerts, or notify team members about agent results.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
recipient_user_id | string | Yes | — | The ID of the Seclai user to send the email to. Supports substitutions. |
subject | string | Yes | — | The email subject line. Supports substitutions. |
html_body | string | No | null | The HTML email body. Supports substitutions. |
text_body | string | No | null | The plain text email body (fallback for email clients that don't support HTML). Supports substitutions. |
Use Case Examples
Daily report delivery:
Subject: Daily Summary — {{date America/New_York}}
HTML Body:
<h1>Daily Summary</h1>
{{step.report.output}}
<p>Generated by {{agent.name}} at {{time America/New_York}}</p>
Alert notification:
Subject: Alert: {{metadata.alert_type}} detected
HTML Body:
<h2>Alert Details</h2>
<p><strong>Type:</strong> {{metadata.alert_type}}</p>
<p><strong>Source:</strong> {{metadata.source_url}}</p>
<p><strong>Details:</strong></p>
{{step.analysis.output}}
Text Body:
Alert: {{metadata.alert_type}} detected
Source: {{metadata.source_url}}
Details: {{step.analysis.output}}
Call Agent Step
Calls another agent as a sub-agent, running it synchronously and using its output as this step's output. Use this to compose complex workflows from smaller, reusable agents.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
agent_id | string | Yes | — | The ID of the agent to call. |
pass_input | boolean | No | true | When enabled, forwards this step's input as the called agent's input. |
pass_metadata | boolean | No | true | When enabled, forwards the parent agent run's metadata to the called agent. |
content_version_id | string | No | null | An optional content version ID to pass to the called agent for content-based triggers. |
Use Case Examples
Chain agents for multi-stage processing:
A summarization agent calls an analysis agent first, then processes its output:
Step 1: Call Agent (agent_id: "analysis-agent", pass_input: true, pass_metadata: true)
Step 2: Prompt Call — Summarize the analysis output
Fan-out to specialized agents:
Use parallel branches with call_agent steps to run multiple agents on the same input simultaneously:
Branch 1: Call Agent → "sentiment-agent"
Branch 2: Call Agent → "keyword-agent"
Combinator: Merge results from both agents
Important Considerations
- Recursion protection: A maximum call depth is enforced to prevent infinite loops. If an agent calls itself (directly or indirectly), the run will fail once the depth limit is reached.
- Synchronous execution: The called agent runs to completion before the parent continues. Long-running sub-agents will increase the overall run time.
- Credit usage: Each sub-agent run consumes credits independently based on its own steps.
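The recursion-protection behavior can be illustrated with a toy runner. The depth limit value and error type below are assumptions for illustration; the platform's actual limit is not documented here.

```python
# Toy sketch of recursion protection for call_agent: each sub-agent
# call carries a depth counter, and the run fails once a limit is hit.
# MAX_CALL_DEPTH and CallDepthExceeded are illustrative names.

MAX_CALL_DEPTH = 5

class CallDepthExceeded(RuntimeError):
    pass

def run_agent(agent, input_text, depth=0):
    if depth > MAX_CALL_DEPTH:
        raise CallDepthExceeded(f"call depth {depth} exceeds {MAX_CALL_DEPTH}")
    output = input_text
    for step in agent["steps"]:
        if step["step_type"] == "call_agent":
            # Sub-agent runs synchronously; its output becomes ours.
            output = run_agent(step["agent"], output, depth + 1)
        else:
            output = step["fn"](output)
    return output

# An agent that calls itself fails once the depth limit is reached:
looping = {"steps": []}
looping["steps"].append({"step_type": "call_agent", "agent": looping})
try:
    run_agent(looping, "hi")
    blocked = False
except CallDepthExceeded:
    blocked = True
```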
Insight Step
Uses an AI model with progressive-disclosure tools to derive insights from potentially large input content. Unlike a Prompt Call (which passes the full input in the prompt), the Insight step gives the model tools to incrementally inspect the input — making it ideal for large documents, feeds, or data dumps that might exceed context-window limits.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
prompt_template | string | Yes | — | Describes the desired analysis — what kind of insight to derive from the input (e.g. summary, topics, keywords, sentiment). Supports substitutions. |
output_format | enum | No | text | Controls the format of the model's response: text, json_object, or json_array. json_object and json_array instruct the model to produce strict JSON output. |
output_schema | object | No | null | An optional JSON Schema describing the desired output structure. When provided, overrides output_format and instructs the model to conform to the schema. |
model | enum | No | Claude Haiku 4.5 | The AI model to use. Must support tool calling. When omitted, the system selects a capable default. |
model_variant | string | No | null | Model variant identifier, if applicable. |
temperature | float | No | 0.3 | Sampling temperature (0.0–1.0). Lower values produce more focused, deterministic output. |
max_tokens | integer | No | null (model default) | Maximum number of tokens in the model's response. |
When to Use Insight vs Prompt Call
| Consideration | Prompt Call | Insight |
|---|---|---|
| Input fits in context window | Pass it directly in the prompt | Unnecessary overhead |
| Input may be very large | Risk of truncation or token-limit errors | Model scans progressively via tools |
| Need structured JSON output | Use json_template / JSON mode | Use output_format or output_schema |
| Need a system prompt | system_template field | System prompt is auto-generated |
| Need custom LLM tools | Configure via tools field | Not supported — uses built-in insight tools only |
Rule of thumb: If the input is small and you want full control over the prompt, use Prompt Call. If the input could be large and you want the model to intelligently scan it, use Insight.
How It Works
- The step receives the previous step's output (or agent input) as a content handle — a lazy abstraction that avoids loading everything into memory.
- Three built-in tools are automatically provided to the model:

| Tool | Description |
|---|---|
get_input_size | Returns byte size and approximate character count. Call first to plan scanning strategy. |
read_input_range | Reads a byte-range slice of the input (max 50,000 bytes per call). |
search_input | Searches for lines matching a regex or plain-text pattern (max 50 matches). |

- An auto-generated system prompt tells the model to:
  - Call `get_input_size` first to understand the content volume.
  - For small content (< 50,000 bytes), read it all at once.
  - For larger content, scan progressively: read the beginning, search for key sections, and sample from different parts.
  - Produce the requested output directly — no meta-commentary.
- The model iterates through tool calls until it has enough context, then produces the final output.
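The progressive-scan pattern these tools enable can be sketched as follows. The tool names and the 50,000-byte read limit come from the table above; the exhaustive chunking strategy below is just an illustration, since the model chooses its own reads and searches.

```python
# Sketch of progressive disclosure using the Insight step's built-in
# tool semantics. The scanning strategy is illustrative, not the
# model's actual behavior.

CHUNK = 50_000  # documented max bytes per read_input_range call

def get_input_size(content: bytes) -> int:
    return len(content)

def read_input_range(content: bytes, start: int, length: int) -> bytes:
    return content[start:start + min(length, CHUNK)]

def scan(content: bytes):
    """Read the input in <= 50,000-byte slices, as a model might."""
    size = get_input_size(content)
    slices = []
    for start in range(0, size, CHUNK):
        slices.append(read_input_range(content, start, CHUNK))
    return slices

parts = scan(b"x" * 120_000)
# 120,000 bytes → three slices: 50,000 + 50,000 + 20,000
```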
Output Content Type
The output content type is determined automatically:
- Text format (`output_format: "text"` with no `output_schema`) → `text/plain`
- Structured format (`output_format: "json_object"` or `"json_array"`, or any `output_schema`) → `application/json`
Use Case Examples
Summarize an RSS feed:
prompt_template: Summarize the key topics and themes from this RSS feed content.
Group related items together.
model: anthropic_claude_haiku_4_5
temperature: 0.3
Extract structured topics (JSON array):
prompt_template: Extract all distinct topics mentioned in the input content.
output_format: json_array
model: anthropic_claude_haiku_4_5
temperature: 0.2
Full content analysis with JSON Schema:
prompt_template: Analyze the input content and produce a structured report.
output_schema:
type: object
properties:
summary:
type: string
topics:
type: array
items:
type: string
sentiment:
type: string
enum: [positive, negative, neutral, mixed]
key_entities:
type: array
items:
type: object
properties:
name:
type: string
type:
type: string
required: [summary, topics, sentiment]
model: anthropic_claude_haiku_4_5
temperature: 0.2
AI Assistant
The Insight step includes full AI assistant support. Click the AI Assistant button to describe what you want in natural language, and the assistant will generate the complete configuration — prompt_template, output_format, output_schema, model, temperature, and max_tokens.
The agent-level AI assistant (for generating full workflows) also understands the Insight step and will include it when the task involves analyzing large content.
Write Metadata Step
Writes a value to the content's metadata by key. This enables agents to persist structured information — such as AI-generated classifications, tags, or scores — directly on the content record so that it can be used for retrieval filtering and gate conditions in future agent runs. For larger information like summaries, consider using Write Content Attachment and Read Content Attachment steps instead.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
metadata_key | string | Yes | — | The key under which the value will be stored in the content metadata. Supports substitutions. |
content | string | No | null | The value to write. If omitted, the step's input is used (equivalent to {{input}}). Supports substitutions. |
Behavior
- If the value is valid JSON, it is stored as a parsed JSON value (object, array, number, boolean, or null). Otherwise it is stored as a plain string.
- The total serialized metadata payload must not exceed 10 KB. Performance may degrade with payloads above 2 KB.
- Metadata written by this step is merged with content version metadata when content is retrieved. The content metadata takes precedence over content version metadata for duplicate keys.
- The step requires a `source_connection_content_version_id` in the run metadata — this is automatically provided when the agent is triggered by a content event (`content_added`, `content_updated`, or `content_added_or_updated`).
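The JSON-or-string rule and the 10 KB limit can be sketched as follows. The function names are illustrative; only the parsing rule and size limit come from the behavior notes above.

```python
import json

# Sketch of Write Metadata value handling: valid JSON is stored as a
# parsed value, anything else as a plain string, and the serialized
# payload must stay under 10 KB.

MAX_METADATA_BYTES = 10 * 1024

def parse_metadata_value(raw: str):
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return raw  # not valid JSON: store as a plain string

def write_metadata(metadata: dict, key: str, raw: str) -> dict:
    updated = {**metadata, key: parse_metadata_value(raw)}
    if len(json.dumps(updated).encode("utf-8")) > MAX_METADATA_BYTES:
        raise ValueError("metadata payload exceeds 10 KB")
    return updated

md = write_metadata({}, "score", "0.87")            # stored as a number
md = write_metadata(md, "category", "technology")   # stored as a string
# md == {"score": 0.87, "category": "technology"}
```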
Use Case Examples
Persist an AI-generated category for retrieval filtering:
Step 1: Insight — Classify the content into a category
Step 2: Write Metadata (metadata_key: "category")
A subsequent retrieval step can filter by {"category": {"$eq": "technology"}} to narrow results.
Store a sentiment score for gate conditions:
Step 1: Prompt Call — "Rate the sentiment of this content as positive, negative, or neutral"
Step 2: Write Metadata (metadata_key: "sentiment")
A gate step in a later agent can use metadata.sentiment with $eq to route content conditionally.
Write a JSON summary object:
Step 1: Insight — Extract key topics and summary as JSON
Step 2: Write Metadata (metadata_key: "analysis", content: "{{step.extract-insight.output}}")
The JSON object is stored as structured metadata, and individual fields can be accessed in filters.
Write Content Attachment Step
Writes an attachment to content. The step input (or explicit content) is stored as a file-backed attachment under a specified key. This is ideal for persisting large outputs — such as full translations, analysis reports, or transformed content — that are too large for metadata but should be associated with the content record.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
attachment_key | string | Yes | — | A short identifier for the attachment (e.g. summary, translation). Must be unique per content version. Supports substitutions. |
content_type | enum | No | text/plain | The MIME type of the attachment: text/plain, text/html, application/json, or application/xml. |
content | string | No | null | The content to write. If omitted, the step's input is used (equivalent to {{input}}). Supports substitutions. |
indexed | boolean | No | false | When enabled, the attachment text is indexed so that it appears in retrieval search results alongside the main content body. |
Behavior
- Attachments are stored in file storage and linked to the content version.
- Unlike metadata (which has a 10 KB limit), attachments can store large content.
- When `indexed` is enabled, the attachment's text is chunked and embedded into the knowledge base's vector store, making it searchable via retrieval steps. The retrieval step's `include_attachments` field controls whether indexed attachments appear in results.
- The step requires a `source_connection_content_version_id` in the run metadata.
Use Case Examples
Store a full translation alongside original content:
Step 1: Prompt Call — Translate the content to Spanish
Step 2: Write Content Attachment (attachment_key: "translation_es", content_type: text/plain)
Persist an indexed analysis that's searchable:
Step 1: Insight — Generate a detailed analysis report
Step 2: Write Content Attachment (attachment_key: "analysis", indexed: true)
The analysis is now searchable via retrieval, appearing alongside the original content in results.
Store structured data as a JSON attachment:
Step 1: Insight — Extract entities and relationships as JSON
Step 2: Write Content Attachment (attachment_key: "entities", content_type: application/json)
Load Content Attachment Step
Loads a previously written attachment from content by key and returns its text content as the step output. Use this to recall persisted content — such as a previous analysis, translation, or extracted data — for use in subsequent processing steps.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
attachment_key | string | Yes | — | The key of the attachment to load. Supports substitutions. |
Behavior
- Returns the text content of the attachment as the step output.
- If the attachment does not exist for the given key, the step output is empty.
- The step requires a `source_connection_content_version_id` in the run metadata.
Use Case Examples
Load a previous translation for comparison:
Step 1: Load Content Attachment (attachment_key: "translation_es")
Step 2: Prompt Call — Compare this translation with the original content
Build on a previous analysis:
Step 1: Load Content Attachment (attachment_key: "analysis")
Step 2: Insight — Using the previous analysis, identify any changes
Combining Metadata and Attachments
Metadata and attachments serve complementary purposes:
| Feature | Metadata (Write Metadata) | Attachments (Write/Load) |
|---|---|---|
| Size limit | 10 KB total | No practical limit |
| Filterable | Yes (retrieval filters, gates) | No (but can be indexed for search) |
| Searchable | No | Yes (when indexed: true) |
| Best for | Tags, scores, categories | Full text, reports, translations |
| Access method | {{metadata.<key>}} | Load Content Attachment step |
Example: Insight-driven content enrichment pipeline:
Step 1: Insight — Classify content and extract summary
Step 2: Extract JSON — Parse the JSON output
Step 3: Write Metadata (metadata_key: "category", content: "{{step.extract-json.output.category}}")
Step 4: Write Metadata (metadata_key: "sentiment", content: "{{step.extract-json.output.sentiment}}")
Step 5: Write Content Attachment (attachment_key: "detailed_summary", indexed: true)
This pipeline: classifies content (filterable via metadata), records sentiment (usable in gate conditions), and persists a searchable detailed summary (retrievable via attachment indexing).
Retry Step
Re-executes the workflow from a specified ancestor step up to a configurable number of times. When the retry step completes, the runner resets the target step and all its descendants back to pending and re-schedules the target. This continues until the maximum retry count is reached, at which point execution proceeds normally past the retry step.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
target_step_id | string | Yes | — | The id of an ancestor step in the parent chain to re-execute from. Must match ^[a-zA-Z0-9_-]+$. |
max_retries | integer | Yes | — | Maximum number of times the target will be re-executed (1–10). |
How It Works
- The retry step is a non-composite leaf step — it cannot contain child steps.
- Each time the retry step completes, the runner checks how many times it has already completed.
- If the count is within `max_retries`, the target step and all its descendants are reset to pending and the target is re-scheduled.
- If the count exceeds `max_retries`, execution continues normally — the retry has been exhausted.
Best Practice: Pair with a Gate
Without a gate, a retry step will unconditionally re-run its target up to max_retries times. To make retries conditional, place a gate step between the output-producing step and the retry:
retrieval (target for retry)
└─ prompt_call
└─ gate (evaluates output quality)
└─ retry (target_step_id: retrieval, max_retries: 3)
- If the gate's conditions pass (`on_match: "continue"`), child execution proceeds — including the retry step, which will unconditionally re-run the target (up to `max_retries` times). Use `on_match: "stop"` with success conditions instead (see below).
- If the gate blocks (`on_match: "stop"` and conditions match, or `on_match: "continue"` and conditions fail), child steps including the retry are not reached, and execution stops.
To implement conditional retry effectively:
- Use `on_match: "stop"` on the gate, with conditions that detect a good result (e.g., output is not empty, contains expected keywords).
- When the output is satisfactory, the gate stops — blocking the retry step.
- When the output fails the conditions, the gate passes through, and the retry step triggers re-execution.
Use Case Examples
Retry a failed retrieval up to 3 times:
target_step_id: search-knowledge-base
max_retries: 3
Quality-gated retry for LLM output:
prompt_call (id: generate-report)
└─ gate (conditions: [{target: "input", operator: "$not_empty"}], on_match: "stop")
└─ retry (target_step_id: generate-report, max_retries: 2)
If the prompt call produces non-empty output, the gate stops and the retry is never reached. If the output is empty, the gate passes through and retry re-executes the prompt call.
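The quality-gated retry above can be simulated with a small loop. This is a simplification of the real runner's reset-and-reschedule mechanics, with illustrative function names.

```python
# Toy simulation of quality-gated retry: re-run the target until its
# output passes a $not_empty-style gate check or max_retries is spent.

def run_with_retry(target, passes, max_retries):
    """Run `target`; on gate failure, retry up to max_retries times."""
    output = target()
    retries = 0
    while not passes(output) and retries < max_retries:
        retries += 1
        output = target()  # target and descendants reset and re-run
    return output, retries

attempts = iter(["", "", "final report"])  # first two runs come back empty
output, retries = run_with_retry(
    target=lambda: next(attempts),
    passes=lambda out: out != "",  # the gate's $not_empty condition
    max_retries=2,
)
# output == "final report" after exhausting both retries
```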
Metadata Filters
The Retrieval and Filter steps support MongoDB-style metadata filters to narrow results based on document metadata fields.
Filter Operators
| Operator | Description | Example |
|---|---|---|
$eq | Equals | {"category": {"$eq": "news"}} |
$ne | Not equals | {"status": {"$ne": "draft"}} |
$lt | Less than | {"score": {"$lt": 0.5}} |
$lte | Less than or equal | {"priority": {"$lte": 3}} |
$gt | Greater than | {"word_count": {"$gt": 100}} |
$gte | Greater than or equal | {"published_date": {"$gte": "2026-01-01"}} |
$in | Value in list | {"category": {"$in": ["news", "blog"]}} |
$nin | Value not in list | {"status": {"$nin": ["draft", "archived"]}} |
$exists | Field exists | {"author": {"$exists": true}} |
$regex | Matches regex | {"title": {"$regex": "^Breaking"}} |
$not | Negation | {"status": {"$not": {"$eq": "draft"}}} |
Logical Operators
| Operator | Description |
|---|---|
$and | All conditions must match |
$or | At least one condition must match |
Filter Examples
Simple field match (implicit AND):
{
"category": { "$eq": "technology" },
"status": { "$ne": "draft" }
}
Using $or:
{
"$or": [
{ "category": { "$eq": "technology" } },
{ "category": { "$eq": "science" } }
]
}
Complex nested filter:
{
"$and": [
{ "published_date": { "$gte": "2026-01-01" } },
{
"$or": [
{ "category": { "$in": ["news", "analysis"] } },
{ "priority": { "$gt": 5 } }
]
},
{ "author": { "$exists": true } }
]
}
With substitution variables:
{
"category": { "$eq": "{{metadata.category}}" },
"published_date": { "$gte": "{{metadata.start_date}}" }
}
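To make the operator semantics concrete, here is a minimal evaluator for a subset of the operators above (`$eq`, `$ne`, `$gt`, `$gte`, `$in`, `$and`, `$or`). It is a sketch for intuition, not the engine's implementation, and it omits `$exists`, `$regex`, and `$not`.

```python
# Minimal evaluator for a subset of the MongoDB-style filter operators.

OPS = {
    "$eq": lambda v, arg: v == arg,
    "$ne": lambda v, arg: v != arg,
    "$gt": lambda v, arg: v is not None and v > arg,
    "$gte": lambda v, arg: v is not None and v >= arg,
    "$in": lambda v, arg: v in arg,
}

def matches(metadata: dict, flt: dict) -> bool:
    for key, cond in flt.items():
        if key == "$and":
            if not all(matches(metadata, c) for c in cond):
                return False
        elif key == "$or":
            if not any(matches(metadata, c) for c in cond):
                return False
        else:
            # Field condition; top-level keys combine with implicit AND.
            value = metadata.get(key)
            if not all(OPS[op](value, arg) for op, arg in cond.items()):
                return False
    return True

doc = {"category": "news", "priority": 7}
assert matches(doc, {"category": {"$eq": "news"}, "priority": {"$gt": 5}})
assert not matches(doc, {"category": {"$in": ["blog", "docs"]}})
```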
Step Execution Order
Steps execute as a directed acyclic graph (DAG). In the simplest case, steps run sequentially top-to-bottom. With child steps and parallel branches (via join/combinator), the execution order follows these rules:
- Root steps execute first, in order
- Child steps execute after their parent completes, receiving the parent's output as input
- Join steps signal completion to their target combinator
- Combinator steps wait for all connected join steps before executing
- A step only executes after all its dependencies have completed
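The combinator-wait rule can be sketched as a readiness check: a combinator becomes runnable only once every join targeting it has completed. The data shapes below are illustrative assumptions.

```python
# Sketch of the combinator-wait rule from the execution-order list.

def runnable(step_id, joins, completed):
    """A combinator runs once all joins pointing at it are done."""
    feeding = [j for j, target in joins.items() if target == step_id]
    return all(j in completed for j in feeding)

# Two parallel branches each end in a join targeting the "merge"
# combinator; "merge" waits until both have completed.
joins = {"join-a": "merge", "join-b": "merge"}
ready_early = runnable("merge", joins, completed={"join-a"})
ready_late = runnable("merge", joins, completed={"join-a", "join-b"})
# ready_early is False; ready_late is True
```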
Step Caching
The Prompt Call and Retrieval steps support result caching to reduce costs and improve performance:
| Field | Description |
|---|---|
extended_caching_days | Cache results for N days. Identical inputs return cached results. |
cache_all_minutes | Cache ALL requests (regardless of input) for N minutes (1–60). Useful for time-insensitive batch operations. |
Caching is particularly useful for:
- Retrieval steps that run on the same knowledge base frequently
- Prompt calls with deterministic outputs (e.g., classification tasks with `temperature: 0`)
- Reducing credit usage for repeated operations
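The two caching modes can be sketched as follows. The field names come from the table above; the key derivation and in-memory storage are assumptions, since the real cache is server-side.

```python
import hashlib
import time

# Sketch of the two caching modes: extended_caching_days keys the cache
# by input, cache_all_minutes caches one result regardless of input.

cache = {}  # key -> (expiry_timestamp, result)

def cached_call(step_input, compute,
                extended_caching_days=0, cache_all_minutes=0):
    if cache_all_minutes:
        key, ttl = "ALL", cache_all_minutes * 60       # ignore the input
    else:
        key = hashlib.sha256(step_input.encode()).hexdigest()
        ttl = extended_caching_days * 86_400           # keyed by input
    now = time.time()
    if key in cache and cache[key][0] > now:
        return cache[key][1]                           # cache hit
    result = compute(step_input)
    cache[key] = (now + ttl, result)
    return result

calls = []
def compute(s):
    calls.append(s)
    return s.upper()

cached_call("hello", compute, extended_caching_days=1)
cached_call("hello", compute, extended_caching_days=1)  # served from cache
# compute ran only once
```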
AI Assistant
The AI assistant helps you configure steps by generating configurations from natural language descriptions. It understands your full agent workflow — including all steps and their relationships — and can suggest appropriate values.
Supported step types:
| Step Type | What the AI Assistant Generates |
|---|---|
| Prompt Call | Model, prompts, temperature, tools, JSON template, formatting — full prompt call configuration |
| Retrieval | Knowledge base, query, filters, reranker, time range, content type — full retrieval configuration |
| Transform | Regex patterns, substitutions, and comments for each rule |
| Gate | Conditions with targets, operators, values, match mode, and on_match |
| Combinator | Output template with step references, and content type |
| Text | Template with substitution variables, and content type |
| Call Agent | Target agent, pass-through settings for input/metadata, and content version |
To use the AI assistant, open a supported step and click the AI Assistant button. Describe what you want in natural language, and the assistant will generate or refine the configuration.
Next Steps
- Agent Triggers — Configure when and how agents run
- Agents Overview — Back to the agents overview
- Knowledge Bases — Connect data sources for retrieval steps
- Content Sources — Understand how content flows into knowledge bases
- API Examples — Code samples for working with agents