Output Steps
For common fields, string substitutions, metadata filters, caching, and execution order, see the Agent Steps Overview.
Display Result Step
Marks a step's output as the final visible result of the agent run. The output of this step is what users see in the UI and what is returned in the API response.
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `template` | string | No | null | A template for the output. If null, the step's input is used directly. Supports substitutions. |
| `html_format` | boolean | No | true | When true, the output is treated as HTML for rendering in the UI. |
Display result steps cannot have child steps.
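To make the fields concrete, here is a minimal sketch of a display result step as a Python dict. The `type` value and the overall object shape are assumptions for illustration; only `template` and `html_format` come from the table above.

```python
# Hypothetical step definition. The "type" name and the dict shape are
# assumptions; "template" and "html_format" are the documented fields.
display_step = {
    "type": "display_result",
    "template": "<h2>Summary</h2>\n{{step.summary.output}}",
    "html_format": True,  # the default: render the output as HTML in the UI
}
```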
Use Case Examples
Simple pass-through:
No template, `html_format: true`. Displays the previous step's output as HTML.
Formatted output with template:
Template:
```html
<div class="result">
  <h2>Analysis Results</h2>
  {{step.analysis.output}}
  <p><em>Generated on {{date UTC}} by {{agent.name}}</em></p>
</div>
```
Plain text output:
Template: `{{step.final.output}}`
`html_format: false`. Returns plain text without HTML rendering.
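In the same hypothetical dict form used above, the plain text variant would be:

```python
# Plain text variant: html_format is set to False instead of the
# default True, so the template output is returned without HTML rendering.
plain_step = {
    "type": "display_result",  # hypothetical type name, as above
    "template": "{{step.final.output}}",
    "html_format": False,
}
```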
Streaming Result Step
Streams LLM output tokens in real time from a preceding `prompt_call` step to the caller via Server-Sent Events (SSE). This enables chat-like interfaces where users see the response being generated token by token.
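On the consuming side, the stream can be read with any standard SSE client. Below is a minimal Python sketch; the endpoint URL, auth header, and `data:` payload shape are all assumptions, not the documented API.

```python
import requests

# Hypothetical endpoint and credentials: substitute the real ones.
url = "https://api.example.com/v1/runs/run_123/stream"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Accept": "text/event-stream",
}

with requests.get(url, headers=headers, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # SSE frames arrive as "data: <payload>" lines separated by blanks;
        # here each payload is assumed to be a raw token of text.
        if line and line.startswith("data: "):
            print(line[len("data: "):], end="", flush=True)
```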
Must be a direct child of a `prompt_call` step. The `prompt_call`'s model must support streaming (`supports_streaming` capability). Both simple format (plain text) and advanced/JSON template prompt call configurations support streaming.
Additional requirements:
- The agent must use a `dynamic_input` trigger
- Runs must use `priority: true` (priority mode)
- Only one `streaming_result` step is allowed per agent definition
Fields
This step has no configurable fields; it inherits all behavior from its parent `prompt_call` step.
Streaming result steps cannot have child steps.
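Structurally, the step simply nests under its parent prompt call. A sketch in the same hypothetical dict form (the `children` key and any type names other than `prompt_call` and `streaming_result` are assumptions):

```python
# A prompt_call step with a streaming_result child. The streaming step
# carries no fields of its own; everything comes from the parent.
prompt_step = {
    "type": "prompt_call",
    "prompt": "Answer the user's question: {{input}}",
    "children": [
        {"type": "streaming_result"},
    ],
}
```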
Guidance
- Place as a direct child of a `prompt_call` step
- The agent trigger must be set to `dynamic_input`
- Runs must use priority mode (`priority: true`)
- The model selected in the parent `prompt_call` must support streaming
- For side effects (save to memory, send email, etc.), add those as sibling branches under the same `prompt_call`; they receive the full buffered output after streaming completes
- If governance policies with output blocking are active, streaming falls back to non-streaming delivery
- For the full streaming setup guide, see Agent Streaming
Use Case Example
Chat-style Q&A agent:
- Trigger: `dynamic_input` (`priority: true`)
- Step 1: Retrieval - Search the knowledge base for relevant context
- Step 2: Prompt Call - Answer the user's question using the retrieved context
- Step 3: Streaming Result - Stream the response token by token to the user
- Step 4: Add Memory - Save the assistant's reply to conversation history
The streaming result delivers tokens in real time, while the memory step saves the full buffered output after streaming completes.
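Put together, the agent might look like the following sketch, again in the hypothetical dict form. Note that the streaming result and memory steps sit as sibling children of the prompt call, per the guidance above; apart from `dynamic_input`, `priority`, `prompt_call`, and `streaming_result`, all names are illustrative.

```python
# Hypothetical end-to-end definition of the chat-style Q&A agent.
qa_agent = {
    "trigger": "dynamic_input",
    "priority": True,  # runs in priority mode, required for streaming
    "steps": [
        {"type": "retrieval", "query": "{{input}}"},  # step 1: knowledge base search
        {
            "type": "prompt_call",  # step 2: answer using retrieved context
            "prompt": "Context:\n{{step.retrieval.output}}\n\nQuestion: {{input}}",
            "children": [
                # step 3: stream tokens to the caller as they are generated
                {"type": "streaming_result"},
                # step 4: sibling branch; receives the full buffered output
                # after streaming completes (substitution name is assumed)
                {"type": "add_memory", "content": "{{step.prompt_call.output}}"},
            ],
        },
    ],
}
```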