Dashboard
The Dashboard is your account-level observability hub. It provides a unified view of agent runs, content source pulls, and credit usage over configurable time windows — giving you the insights you need to keep your AI workflows healthy and efficient.
Overview
The Dashboard is organized into three tabs:
| Tab | Icon | What It Shows |
|---|---|---|
| Agents | 🤖 | Agent run metrics, failure analysis, and performance trends |
| Content Sources | 🔌 | Source pull metrics, success rates, and pull history |
| Credit Usage | 🪙 | Credit consumption charts and usage summaries |
When you open the Dashboard, it defaults to the Agents tab. Each tab displays data scoped to the time frame you select.
Time Frame Selector
The Time Frame Selector at the top of the Dashboard controls the date range for all data across every tab. It supports both relative and absolute ranges.
Relative ranges (preset):
| Option | Range |
|---|---|
| Last 24 hours | Past 24 hours |
| Last 7 days | Past 7 days |
| Last 30 days | Past 30 days |
| Last 90 days | Past 90 days |
| Last 6 months | Past 6 months |
| Last year | Past 12 months |
Absolute range: Pick a custom start and end date using the date picker.
The dashboard automatically determines the data granularity based on the selected range:
| Range Duration | Granularity |
|---|---|
| ≤ 31 days | Day — data grouped by day |
| 32–182 days (≈ 6 months) | Week — data grouped by week |
| > 182 days | Month — data grouped by month |
Example: Selecting "Last 7 days" shows daily data points in the charts, while "Last year" shows monthly aggregated data.
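The granularity rules above can be expressed as a small helper. This is a sketch of the documented thresholds, not an actual API; the function name and signature are illustrative:

```python
from datetime import date

# Thresholds from the table: <= 31 days -> day, <= 182 days -> week, else month.
def granularity_for_range(start: date, end: date) -> str:
    """Return the chart granularity for an absolute date range (hypothetical helper)."""
    duration_days = (end - start).days + 1  # inclusive of both endpoints
    if duration_days <= 31:
        return "day"
    if duration_days <= 182:
        return "week"
    return "month"

print(granularity_for_range(date(2024, 1, 1), date(2024, 1, 7)))    # "Last 7 days" -> day
print(granularity_for_range(date(2024, 1, 1), date(2024, 12, 31)))  # "Last year"   -> month
```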
Agents Tab
The Agents tab provides comprehensive monitoring for all agent runs across your account. It combines high-level summary metrics with detailed failure analysis and performance tracking.
Summary Cards
Three summary cards at the top provide at-a-glance metrics for the selected time frame:
| Card | Description | Visual Indicator |
|---|---|---|
| Total Runs | Total number of agent runs started | Neutral |
| Completed | Runs that finished successfully | Green |
| Error Rate | Percentage of runs that failed | Red when ≥ 10% |
Example: If 200 runs started, 180 completed, and 20 failed, you would see:
- Total Runs: 200
- Completed: 180
- Error Rate: 10% (displayed in red)
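The arithmetic behind the example above is failed runs divided by total runs. A quick sketch (variable names are illustrative):

```python
total_runs = 200
completed = 180
failed = total_runs - completed  # 20

error_rate = failed / total_runs * 100  # as a percentage
print(f"Error Rate: {error_rate:.0f}%")  # prints "Error Rate: 10%"

# The card turns red once the rate reaches the 10% threshold.
is_red = error_rate >= 10
```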
Traces Chart
A bar chart visualizes completed vs. failed runs over time. Each bar is divided into two segments:
- Green — Completed runs
- Red — Failed runs
The x-axis labels adapt to the selected granularity (day, week, or month). Hover over any bar to see exact counts.
Example: A daily chart for "Last 7 days" might show:
Mon: 32 completed, 2 failed
Tue: 28 completed, 0 failed
Wed: 35 completed, 5 failed ← spike visible as taller red segment
Thu: 30 completed, 1 failed
...
Recent Failures
Below the chart, a Recent Failures panel lists the most recent failed agent runs. Each entry shows:
- Agent name — Clickable link that navigates to the specific run detail
- Error message — Truncated preview of what went wrong
- Timestamp — When the failure occurred
Use this panel to quickly identify and investigate problems without navigating through each agent individually.
Example entries:
| Agent | Error | When |
|---|---|---|
| Daily News Summary | Rate limit exceeded for model gpt-4o | 2 hours ago |
| Customer FAQ Bot | Knowledge base "Support Docs" has no indexed content | 5 hours ago |
| Weekly Report Generator | S3 bucket write permission denied | 1 day ago |
Slowest Runs
The Slowest Runs panel highlights agent runs that significantly exceeded their historical p95 (95th percentile) duration. Each entry shows:
- Agent name — Link to the run detail
- Actual duration — How long the run took
- P95 benchmark — The typical p95 duration for that agent
- Multiplier badge — Shows how many times slower (e.g., "2.3× p95")
This helps you catch performance regressions before they become chronic issues.
Example: An agent that normally completes in under 12 seconds (p95) but took 45 seconds would display:
Content Analyzer 45s / p95: 12s [3.8× p95]
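The multiplier badge is simply the actual duration divided by the agent's historical p95, rounded to one decimal place. A sketch using the values from the example above:

```python
actual_seconds = 45
p95_seconds = 12  # the agent's historical p95 duration

multiplier = actual_seconds / p95_seconds  # 3.75
print(f"{multiplier:.1f}× p95")  # prints "3.8× p95"
```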
Agents Slowing Down
The Agents Slowing Down section surfaces agents whose recent average duration has increased significantly compared to their historical baseline. Each card shows:
- Agent name — Clickable link to the agent's runs page
- Recent average vs. baseline average duration
- Slowdown badge — Color-coded multiplier:
- Yellow — 1.5× to 2× slower than baseline
- Red — 2× or more slower than baseline
This proactive metric helps you detect gradual performance degradation across your agents.
Example:
Weekly Digest Generator [2.1× slower]
Recent avg: 42s → Baseline avg: 20s
Product Update Monitor [1.6× slower]
Recent avg: 8s → Baseline avg: 5s
Content Sources Tab
The Content Sources tab mirrors the Agents tab structure but focuses on source pull operations — the periodic fetches that keep your knowledge bases up to date.
Summary Cards
| Card | Description | Visual Indicator |
|---|---|---|
| Total Pulls | Number of source pulls initiated | Neutral |
| Completed | Pulls that finished successfully | Green |
| Error Rate | Percentage of pulls that failed | Red when ≥ 10% |
Pull History Chart
A bar chart identical in design to the agent run chart, showing completed vs. failed pulls over time. Use this to spot patterns — for example, a source that fails every weekend might indicate a maintenance window on the upstream server.
Note: If no content source data is available for the selected time frame, the tab displays: "No content source data available".
Credit Usage Tab
The Credit Usage tab shows how your credits are being consumed over time.
Usage Bar Chart
A bar chart displays credit consumption per time period, with bars sized to the selected granularity (day, week, or month).
Account Credit Summary
An overlay provides a snapshot of your current credit balance and usage:
- Total credits used in the selected period
- Current balance (subscription + purchased credits)
- Subscription credits remaining this billing cycle
For more details on how credits work, see Credits & Usage.
Note: If no usage data exists for the selected period, the tab displays: "No credit usage data available yet".
Permissions
The Dashboard is available to users with editor, admin, or owner roles on the account. Users with the viewer role are redirected to the Agents list page instead.
| Role | Dashboard Access |
|---|---|
| Owner | ✅ Full access |
| Admin | ✅ Full access |
| Editor | ✅ Full access |
| Viewer | ❌ Redirected to Agents |
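The role gate amounts to a simple membership check. This is a sketch mirroring the table above; the role set and routing helper are hypothetical, not the product's actual code:

```python
# Roles permitted to view the Dashboard, per the access table.
DASHBOARD_ROLES = {"owner", "admin", "editor"}

def dashboard_route(role: str) -> str:
    """Return where a user lands based on account role (hypothetical routing helper)."""
    return "/dashboard" if role.lower() in DASHBOARD_ROLES else "/agents"

print(dashboard_route("Editor"))  # -> /dashboard
print(dashboard_route("Viewer"))  # -> /agents (redirected)
```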
Common Use Cases
Daily Health Check
Start your day by opening the Dashboard with "Last 24 hours" selected:
- Check the error rate on the Agents tab — anything above 10% needs attention
- Review Recent Failures to identify recurring issues
- Scan the Content Sources tab for pull failures that could affect data freshness
- Glance at Credit Usage to ensure consumption is within expected range
Weekly Performance Review
Select "Last 7 days" and compare to the previous week:
- Look for upward trends in the error rate
- Check the Agents Slowing Down section for performance regressions
- Review the Slowest Runs to investigate outlier executions
- Compare credit usage to projected budgets
Incident Investigation
When you receive an alert, use the Dashboard to understand the broader context:
- Set the time frame to cover the period around the incident
- Check if the issue is isolated (single agent) or systemic (multiple agents failing)
- Correlate agent failures with content source pull failures
- Use Recent Failures to trace the root cause
Next Steps
- Alerts — Set up automated notifications for failures and performance issues
- Agents — Learn about creating and managing agents
- Content Sources — Configure the sources that feed your knowledge bases
- Credits & Usage — Understand the credit system and pricing