
Output Steps

For common fields, string substitutions, metadata filters, caching, and execution order, see the Agent Steps Overview.


Display Result Step

Marks a step's output as the final visible result of the agent run. The output of this step is what users see in the UI and what is returned in the API response.

Fields

Field       | Type    | Required | Default | Description
template    | string  | No       | null    | A template for the output. If null, the step's input is used directly. Supports substitutions.
html_format | boolean | No       | true    | When true, the output is treated as HTML for rendering in the UI.

Display result steps cannot have child steps.
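The table above can be sketched as a small normalization pass. This is a minimal illustration only: the field names and defaults come from the table, but the surrounding config shape is an assumption, not the product's confirmed API.

```python
# Hypothetical sketch: apply the documented defaults for a display_result
# step. Field names follow the table above; the dict-based config shape
# is assumed for illustration.
DEFAULTS = {"template": None, "html_format": True}

def normalize_display_result(config: dict) -> dict:
    """Fill in documented defaults and enforce the no-children rule."""
    step = {**DEFAULTS, **config}
    # Display result steps cannot have child steps.
    if step.get("steps"):
        raise ValueError("display_result steps cannot have child steps")
    return step

print(normalize_display_result({}))
# → {'template': None, 'html_format': True}
```

With no overrides, the step falls back to the documented defaults: no template (the step's input is used directly) and HTML rendering enabled.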

Use Case Examples

Simple pass-through:

No template, html_format: true — Displays the previous step's output as HTML.

Figure 1. Simple pass-through — the prompt call's output flows directly into a display result step with no template transformation.

Formatted output with template:

Template:

<div class="result">
  <h2>Analysis Results</h2>
  {{step.analysis.output}}
  <p><em>Generated on {{date UTC}} by {{agent.name}}</em></p>
</div>

Plain text output:

Template: {{step.final.output}}

html_format: false — Returns plain text without HTML rendering.
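A minimal sketch of how such a template might be filled in. The `{{...}}` placeholder syntax follows the examples above, but the real engine's resolution rules (step outputs, `date`, `agent.name`) are assumptions; this single-pass substitution is illustrative only.

```python
import re

# Illustrative only: resolve {{...}} placeholders from a flat context
# dict. The real substitution engine's lookup rules are not specified
# here, so this is a stand-in for the syntax shown above.
def render(template: str, context: dict) -> str:
    return re.sub(
        r"\{\{(.*?)\}\}",
        lambda m: str(context.get(m.group(1).strip(), "")),
        template,
    )

print(render("{{step.final.output}}", {"step.final.output": "All checks passed."}))
# → All checks passed.
```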

Figure 2. Formatted display — an insight step analyzes content, then the display result step wraps it in an HTML template.
Figure 3. Display Result with governance — the display step shows the final output while a non-blocking governance evaluation audits the content in the background.

Streaming Result Step

Streams LLM output tokens in real time from a preceding prompt_call to the caller via Server-Sent Events (SSE). This enables chat-like interfaces where users see the response being generated token by token.

Must be a direct child of a prompt_call step. The prompt_call's model must support streaming (supports_streaming capability). Both simple format (plain text) and advanced/JSON template prompt call configurations support streaming.

Additional requirements:

  • The agent must use a dynamic_input trigger
  • Runs must use priority: true (priority mode)
  • Only one streaming_result step is allowed per agent definition
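The requirements above can be expressed as a validation pass. The dict shape below is a hypothetical agent-definition format, assumed purely to make the rules concrete; it is not the product's actual schema.

```python
# Illustrative sketch (not the product's actual API): check an agent
# definition against the streaming requirements listed above.
def validate_streaming(agent: dict) -> list:
    errors = []
    if agent.get("trigger") != "dynamic_input":
        errors.append("streaming requires a dynamic_input trigger")
    if not agent.get("priority"):
        errors.append("runs must use priority mode (priority: true)")

    streaming_steps = []

    def walk(step, parent_type=None):
        if step["type"] == "streaming_result":
            streaming_steps.append(step)
            if parent_type != "prompt_call":
                errors.append("streaming_result must be a direct child of a prompt_call")
        for child in step.get("steps", []):
            walk(child, step["type"])

    for step in agent.get("steps", []):
        walk(step)
    if len(streaming_steps) > 1:
        errors.append("only one streaming_result step is allowed per agent")
    return errors

valid = {
    "trigger": "dynamic_input",
    "priority": True,
    "steps": [{"type": "prompt_call", "steps": [{"type": "streaming_result"}]}],
}
print(validate_streaming(valid))  # → []
```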

Fields

This step has no configurable fields — it inherits all behavior from its parent prompt_call step.

Streaming result steps cannot have child steps.

Guidance

  • Place as a direct child of a prompt_call step
  • The agent trigger must be set to dynamic_input
  • Runs must use priority mode (priority: true)
  • The model selected in the parent prompt_call must support streaming
  • For side-effects (save to memory, send email, etc.), add those as sibling branches under the same prompt_call — they receive the full buffered output after streaming completes
  • If governance policies with output blocking are active, streaming falls back to non-streaming delivery
  • For the full streaming setup guide, see Agent Streaming
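On the client side, an SSE stream arrives as `data:` lines separated by blank lines. The parser below follows the generic SSE wire format; the actual endpoint URL and event payload schema are assumptions outside this page's scope.

```python
# Sketch of consuming the token stream client-side. This parses the
# standard SSE framing: "data:" lines accumulate, a blank line ends
# an event. Endpoint and payload details are assumed, not specified.
def iter_sse_data(lines):
    """Yield the data payload of each Server-Sent Event."""
    buffer = []
    for line in lines:
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:
            # A blank line terminates an event.
            yield "\n".join(buffer)
            buffer = []

# Two events, each carrying a chunk of the streamed response:
raw = ["data: Hel", "", "data: lo!", ""]
print("".join(iter_sse_data(raw)))  # → Hello!
```

In practice the `lines` iterable would come from a streaming HTTP response (e.g. `requests.get(..., stream=True).iter_lines(decode_unicode=True)`), with each yielded payload appended to the visible chat message.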

Use Case Example

Chat-style Q&A agent:

Trigger: dynamic_input (priority: true)
Step 1: Retrieval — Search knowledge base for relevant context
  Step 2: Prompt Call — Answer the user's question using the retrieved context
    Step 3: Streaming Result — Stream the response token-by-token to the user
    Step 4: Add Memory — Save the assistant's reply to conversation history

The streaming result delivers tokens in real-time while the memory step saves the full buffered output after streaming completes.
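The step tree above can be written out as a nested definition. As before, the dict format is a hypothetical sketch; only the step type names and nesting mirror this page.

```python
# Sketch of the chat Q&A agent's step tree. The definition format is
# assumed for illustration; step types and nesting follow the outline.
agent = {
    "trigger": "dynamic_input",
    "priority": True,
    "steps": [{
        "type": "retrieval",            # Step 1: search knowledge base
        "steps": [{
            "type": "prompt_call",      # Step 2: answer with context
            "steps": [
                {"type": "streaming_result"},  # Step 3: stream tokens live
                {"type": "add_memory"},        # Step 4: save buffered reply
            ],
        }],
    }],
}

# Both children sit under the same prompt_call: the streaming step gets
# tokens as they arrive, the memory step gets the full buffered output.
children = agent["steps"][0]["steps"][0]["steps"]
print([c["type"] for c in children])  # → ['streaming_result', 'add_memory']
```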

Figure 4. Streaming RAG chat — retrieval feeds a prompt call, the streaming step delivers tokens live, and memory saves the full reply.
Figure 5. Streaming Result — a retrieval-augmented chat agent streams the LLM response in real-time while saving to memory as a side-effect.

← Back to Agent Steps Overview