Governance
Governance lets you automatically screen agent outputs and incoming source content against a configurable set of safety, privacy, and compliance policies. When content violates a policy, Seclai flags or blocks it so you can review it before it reaches end users.
Plan Requirement
Governance features are not available on all plans. Your subscription must include the governance access entitlement. When governance access is not included:
- The Governance page shows an upgrade prompt with a blurred overlay instead of the normal interface
- REST API requests to any /governance/ endpoint will be rejected
- MCP governance tools will return an error: "Governance is not available on your current plan"
To check whether your plan includes governance, visit Settings → Account and look for the governance access indicator, or check the plan_governance_access field in the account API response.
If you need governance but your current plan does not include it, upgrade from the Settings → Subscription page.
Overview
The Governance system has five parts:
- Policies — Rules that define what content is acceptable (e.g. "block personally identifiable information", "flag biased language")
- Settings — Controls that determine where and how screening is applied (which agents, sources, and steps)
- Evaluations — Individual screening results generated when content is checked against a policy
- Knowledge Base Associations — Optional links between policies and knowledge bases that provide evidence-based evaluation using similarity search
- AI Assistant — A natural-language interface for managing policies using a propose-then-accept workflow
You manage all five from the Governance section in the left sidebar.
Core Concepts
Policies
A governance policy defines what the evaluator checks for. Each policy consists of:
| Component | Description |
|---|---|
| Policy text | The natural-language instruction that tells the AI evaluator what to look for. Can come from the built-in sample policy library or be written as custom text. |
| Thresholds | A flag threshold and block threshold that determine the severity response based on the evaluator's confidence score. |
| Scope | Where the policy applies: account-wide, a specific agent, a specific step, or a specific source connection. |
| Enabled | Whether the policy is active. Disabled policies are skipped during screening. |
| Enforcement level | Controls how child scopes can modify this policy (flexible, required, or locked). See Enforcement Levels. |
| Inheritance mode | (Scoped policies only.) How a scoped policy relates to its parent's policy for the same document (inherit, merge, or disable). Account-level policies have no parent so this field is not shown. See Inheritance Modes. |
| Knowledge base associations | Optional links to knowledge bases that provide similarity-based evidence during evaluation. See Knowledge Base Associations. |
Policy Categories
Sample policies in the built-in library are organized into five categories:
| Category | Slug | What it detects |
|---|---|---|
| Content Safety | content_safety | Harmful, violent, hateful, or sexually explicit content |
| PII | pii | Personally identifiable information such as names, emails, phone numbers, and government IDs |
| Bias | bias | Biased, discriminatory, or stereotyping language |
| Legal | legal | Legal risks including copyright violations, defamation, and regulatory non-compliance |
| Brand | brand | Off-brand messaging, competitor mentions, or tone violations |
Verdicts
When content is evaluated against a policy, the evaluator produces a confidence score from 0.0 (no match) to 1.0 (certain match). That score is compared against the policy's thresholds to produce one of three verdicts:
| Verdict | Condition | Default behavior |
|---|---|---|
| Pass | Score < flag threshold | Content proceeds normally |
| Flag | Score ≥ flag threshold and < block threshold | Content proceeds but is queued for human review |
| Block | Score ≥ block threshold | Content is withheld until a reviewer resolves the evaluation |
Example: With the default thresholds (flag = 0.5, block = 0.8), a score of 0.65 produces a flag verdict. The content is delivered but appears in the Review queue for a human to check.
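The threshold logic above can be sketched as a small function. This is illustrative only (the function name is not part of any Seclai SDK); the boundary conditions follow the table directly:

```python
# Illustrative sketch of the verdict rules from the table above:
# score < flag threshold -> pass; flag <= score < block -> flag; score >= block -> block.
def verdict(score: float, flag_threshold: float = 0.5, block_threshold: float = 0.8) -> str:
    """Map an evaluator confidence score to a pass/flag/block verdict."""
    if score >= block_threshold:
        return "block"
    if score >= flag_threshold:
        return "flag"
    return "pass"

print(verdict(0.65))  # flag: content is delivered, but queued for human review
print(verdict(0.30))  # pass
print(verdict(0.90))  # block
```

Note that scores exactly at a threshold take the stricter verdict: 0.5 flags and 0.8 blocks under the defaults.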
Screening Points
Governance evaluations happen at specific points in the pipeline:
| Screening point | When it runs | Typical use |
|---|---|---|
| Source content | When a content source imports new items | Screen incoming data before it enters your knowledge base |
| Agent input | Before an agent run begins processing | Screen user-provided input to agents |
| Step output | After an agent step completes | Screen AI-generated outputs before they're returned |
| Policy test | When you manually test content in the Test tab | Ad-hoc testing during policy development |
| Policy test (in review) | After opting a test evaluation into review | Promote test results to the Review queue for tracking |
Tip: Policy test evaluations are not visible in the Review queue by default. If you want a test result to appear alongside production evaluations for review, use the opt-into-review action.
Evaluation Tiers
The evaluation tier controls which AI model the evaluator uses, trading off between speed, cost, and thoroughness:
| Tier | Speed | Cost | Accuracy | Best for |
|---|---|---|---|---|
| Fast | ⚡ Fastest | Lowest | Good for clear-cut violations | High-volume screening, simple policies |
| Balanced | Moderate | Mid-range | Strong general-purpose accuracy | Most production use cases |
| Thorough | Slowest | Highest | Best nuanced judgment | Complex policies, legal/compliance review |
You can set the evaluation tier at any scope level (account-wide, per-agent, per-source). Lower scopes inherit from their parent unless overridden.
Scoping Model
Both policies and settings support hierarchical scoping:
Account (broadest)
├─ Agent
│  └─ Agent Step
└─ Source Connection
How policy scoping works:
- Account-wide policies apply to all agents and sources unless a narrower scope exists.
- Agent-scoped policies apply only to that agent's runs. They are additive — the agent gets both account-wide and agent-scoped policies.
- Step-scoped policies apply only to a specific step within an agent.
- Source-scoped policies apply only to content from that source.
How settings scoping works:
Settings use a most-specific-wins override model. If an agent has its own governance settings, those override the account-wide settings for that agent. If a specific step has settings, those override the agent-level settings for that step.
Example: You enable governance account-wide with the "balanced" tier but set one high-volume agent to use the "fast" tier. The agent-scoped setting overrides only the tier; all other settings (e.g., review_output) still inherit from the account level.
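The most-specific-wins behavior in the example above can be sketched as a simple merge from broadest to narrowest scope. The resolver function below is hypothetical (not a Seclai API); the field names come from the Configuring Settings table, and `None` stands in for "inherit from parent":

```python
# Hypothetical sketch of most-specific-wins settings resolution.
# Pass scopes broadest-first (account, then agent, then step); None means "inherit".
def resolve_settings(*scopes: dict) -> dict:
    """Merge settings so that an explicit value at a narrower scope wins."""
    resolved: dict = {}
    for scope in scopes:
        for key, value in scope.items():
            if value is not None:  # explicit value overrides the parent scope
                resolved[key] = value
    return resolved

account = {"governance_enabled": True, "review_output": True, "evaluation_tier": "balanced"}
agent = {"evaluation_tier": "fast"}  # overrides only the tier

effective = resolve_settings(account, agent)
print(effective)
# {'governance_enabled': True, 'review_output': True, 'evaluation_tier': 'fast'}
```

As in the example, the agent-scoped setting overrides only the tier; everything else still comes from the account level.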
Enforcement Levels
Every policy has an enforcement level that controls how child scopes can interact with it. This is set on the parent-scope policy.
| Level | Description |
|---|---|
| Flexible | Child scopes may merge alongside this policy or disable it entirely. This is the default. |
| Required | Child scopes may merge (evaluate alongside) but cannot disable this policy. |
| Locked | Child scopes cannot merge or disable. The policy applies exactly as defined at this scope. |
Example: An account-wide PII policy with enforcement level Required ensures that every agent and source is always screened for PII. An agent-scoped policy can merge additional rules alongside it, but cannot suppress the PII check.
Inheritance Modes
Note: Inheritance mode only applies to scoped policies (agent, step, or source). Account-level policies have no parent scope, so inheritance mode is not shown when creating or editing account-wide policies.
When you create a scoped policy (agent-level, step-level, or source-level), the inheritance mode declares how the child relates to the parent:
| Mode | Description |
|---|---|
| Inherit | Follow the parent policy as-is at this scope. Always allowed regardless of the parent's enforcement level. |
| Merge | Both the parent policy and this child's evaluation run side by side. Useful for adding stricter thresholds at a narrower scope. |
| Disable | Suppress the parent policy at this scope. Only allowed when the parent's enforcement level is Flexible. |
Validation rules:
- Setting inheritance mode to Disable when the parent's enforcement level is Required or Locked will be rejected.
- Setting inheritance mode to Merge when the parent's enforcement level is Locked will be rejected.
- Inherit is always allowed regardless of the parent's enforcement level.
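The three validation rules above reduce to two rejection cases. A sketch of that check (the function and error messages are illustrative, not the actual API error strings):

```python
# Illustrative sketch of the inheritance-mode validation rules:
# disable is rejected for required/locked parents; merge is rejected for locked parents;
# inherit is always allowed.
def validate_inheritance(parent_enforcement: str, child_mode: str) -> None:
    """Raise ValueError if the child's inheritance mode conflicts with the parent's enforcement level."""
    if child_mode == "disable" and parent_enforcement in ("required", "locked"):
        raise ValueError("cannot disable a required or locked parent policy")
    if child_mode == "merge" and parent_enforcement == "locked":
        raise ValueError("cannot merge alongside a locked parent policy")
    # "inherit" is always allowed

validate_inheritance("flexible", "disable")  # ok: flexible parents may be disabled
validate_inheritance("required", "merge")    # ok: required parents may be merged alongside
validate_inheritance("locked", "inherit")    # ok: inherit is always allowed
```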
Knowledge Base Associations
Policies can be linked to one or more knowledge bases to enable evidence-based evaluation. When a policy has knowledge base associations, the evaluator performs a similarity search against each linked knowledge base before scoring and uses the retrieved content as evidence in the evaluation.
This is useful for:
- Known-bad content detection — link a knowledge base containing examples of prohibited content; high-similarity matches provide strong evidence for blocking.
- Reference-based compliance — link a knowledge base with regulatory text or brand guidelines so the evaluator can check content against authoritative references.
- Contextual screening — provide domain-specific context that helps the evaluator make more informed judgments.
Match Actions
Each knowledge base association has a match action that tells the evaluator how to interpret similarity matches:
| Match action | Description |
|---|---|
| Block | High-similarity matches are strong evidence the content should be blocked (e.g. known-bad examples). |
| Flag | High-similarity matches suggest the content should be flagged for human review. |
| Inform | Matches are provided as context and reference to the evaluator LLM with no enforcement bias. This is the default. |
Each association also has a position (starting from 0) that controls the order in which knowledge bases are queried. Lower positions are queried first.
Circularity Detection
When a policy is scoped to an agent or source, linking a knowledge base that is sourced from the same agent or source can create a circular reference — the governance system would be using the content it's supposed to screen as evidence for screening decisions.
Seclai automatically detects circular knowledge bases and prevents you from creating these associations:
- In the UI: Knowledge bases that would create a circular reference are marked and cannot be selected when editing a policy's associations.
- In the API and MCP: The set_policy_knowledge_bases endpoint returns an error if any of the specified knowledge bases would create a circular reference.
- Detection endpoint: Use the GET /policies/circular-knowledge-bases endpoint or the get_circular_knowledge_bases MCP tool to retrieve the list of knowledge base IDs that would be circular for a given scope. Pass agent_id or source_connection_id to check for a specific scope.
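A client can use the ID list from the detection endpoint to decide which knowledge bases are safe to offer for selection, mirroring what the UI does. This helper is hypothetical; it only assumes the endpoint returns a list of knowledge base IDs, as described above:

```python
# Hypothetical client-side helper: given the circular-ID list returned by the
# circular-knowledge-bases detection endpoint, mark which KBs can be linked.
def selectable_knowledge_bases(all_kbs: list[dict], circular_ids: set[str]) -> list[dict]:
    """Annotate each knowledge base with whether linking it is allowed."""
    return [{**kb, "selectable": kb["id"] not in circular_ids} for kb in all_kbs]

kbs = [
    {"id": "kb-1", "name": "Prohibited examples"},
    {"id": "kb-2", "name": "Agent output archive"},  # sourced from the scoped agent
]
result = selectable_knowledge_bases(kbs, circular_ids={"kb-2"})
print([kb["selectable"] for kb in result])  # [True, False]
```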
Getting Started
Enabling Governance
Once your plan supports governance:
- Go to Governance in the left sidebar
- The Overview tab shows your current status
- Use the Settings tab to enable governance and configure screening options
Once enabled at the account level, governance will screen content at the configured screening points.
Creating a Policy
To create an account-wide policy from a sample policy:
- Go to Governance → Policies
- Click Add Policy
- Toggle off "Use AI assistant" to see the manual form
- In the Sample policies section, browse the examples and click one to populate the form
- Optionally adjust the name, flag and block thresholds, and enforcement level
- Click Create Policy
To create an account-wide policy with custom text:
- Click Add Policy
- Toggle off "Use AI assistant"
- Write your own policy text from scratch (see Writing Effective Policies), or start from a sample starter (Content Safety, PII Detection) and edit it
- Set thresholds and enforcement level
- Click Create Policy
Note: The Add Policy page creates account-wide policies. Account-wide policies have no parent scope, so inheritance mode is not shown here. To create scoped policies (agent, step, or source level), use the Governance panel on the resource's detail page — see Scoped Policy Overrides below.
Via API:
# Account-wide policy from a sample policy
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"sample_slug": "content-safety",
"policy_name": "Content Safety",
"policy_text": "Do not allow harmful, violent, or explicit content.",
"category": "content_safety",
"flag_threshold": 0.5,
"block_threshold": 0.8,
"enforcement_level": "flexible"
}'
# Custom policy text (account-wide)
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"policy_name": "Competitor Mentions",
"policy_text": "Flag any content that mentions competitor product names.",
"category": "brand",
"flag_threshold": 0.4,
"block_threshold": 0.9,
"enforcement_level": "required"
}'
# Agent-scoped policy override (inheritance_mode applies to scoped policies)
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"sample_slug": "content-safety",
"policy_name": "Content Safety",
"policy_text": "Do not allow harmful, violent, or explicit content.",
"category": "content_safety",
"agent_id": "AGENT_UUID",
"enforcement_level": "required",
"inheritance_mode": "merge"
}'
Scoped Policy Overrides
In addition to account-wide policies (created from the Policies page), you can create scoped policy overrides that apply only to a specific agent, agent step, or content source. Scoped policies are created from the Governance panel on each resource's detail page.
For agents:
- Open an agent's detail page
- Switch to the Governance tab
- In the Policy Overrides section, click Add Policy Override
- Choose a sample policy or enter custom policy text
- Set the enforcement level and inheritance mode — since scoped policies have a parent (the account-level policy), inheritance mode controls how this override relates to the parent
- Click Create
For agent steps:
- Open an agent's detail page and click Edit on a step
- In the step edit modal, scroll to the Governance tab
- Follow the same process as above — the policy will be scoped to that specific step
For content sources:
- Open a source connection's detail page
- Switch to the Governance tab
- Follow the same process — the policy will be scoped to that source
Each scoped policy also has a link icon that navigates to the full policy edit page, where you can adjust thresholds, view knowledge base associations, and see the policy's scope badge.
Tip: Use the create_governance_policy MCP tool with agent_id, agent_step_id, or source_connection_id parameters to create scoped policies programmatically. See the MCP Tools section.
Configuring Thresholds
Each policy has two thresholds that control how strictly it's enforced:
| Threshold | Default | Effect |
|---|---|---|
| Flag threshold | 0.5 | Content scoring at or above this value is flagged for review |
| Block threshold | 0.8 | Content scoring at or above this value is blocked entirely |
Tuning guidelines:
- Lower the flag threshold (e.g. 0.3) to catch more borderline cases at the cost of more false positives
- Raise the flag threshold (e.g. 0.7) to reduce review noise, only flagging high-confidence matches
- Lower the block threshold (e.g. 0.6) for zero-tolerance policies where you'd rather over-block than miss something
- Raise the block threshold (e.g. 0.95) when you only want to block near-certain violations
- The block threshold must always be ≥ the flag threshold
Use the Test tab to experiment with different thresholds before applying them to production policies.
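The tuning guidelines above are easiest to see by re-bucketing the same set of scores under different thresholds. This sketch is illustrative and reuses the boundary semantics from the Verdicts section (scores at a threshold take the stricter verdict):

```python
# Illustration of how moving the thresholds re-buckets the same evaluator scores.
def verdicts(scores: list[float], flag_t: float, block_t: float) -> list[str]:
    assert block_t >= flag_t, "block threshold must be >= flag threshold"
    out = []
    for s in scores:
        if s >= block_t:
            out.append("block")
        elif s >= flag_t:
            out.append("flag")
        else:
            out.append("pass")
    return out

scores = [0.3, 0.55, 0.75, 0.95]
print(verdicts(scores, flag_t=0.5, block_t=0.8))    # defaults: ['pass', 'flag', 'flag', 'block']
print(verdicts(scores, flag_t=0.7, block_t=0.95))   # less review noise: ['pass', 'pass', 'flag', 'block']
```

Raising the flag threshold from 0.5 to 0.7 turns the 0.55 flag into a pass, reducing review volume at the risk of missing borderline cases.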
Configuring Settings
Governance settings control where screening happens:
| Setting | Default | Description |
|---|---|---|
| Governance enabled | true | Master toggle for governance at this scope |
| Review output | true | Screen agent step outputs against policies |
| Review input | false | Screen agent/user inputs and imported source content against policies |
| Evaluation tier | null (inherits) | AI model tier: fast, balanced, or thorough |
Settings can be configured at the account level via the API or MCP tools, or per-agent/per-source from the resource's detail page (using the Governance panel).
Via API:
# Enable input screening for a specific source
curl -X PUT \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/settings \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"source_connection_id": "SOURCE_UUID",
"review_input": true,
"governance_enabled": true
}'
Linking Knowledge Bases
To enhance a policy with evidence-based evaluation, link one or more knowledge bases:
- Go to Governance → Policies
- Click on a policy to open its detail view
- In the Knowledge Bases section, click Add Knowledge Base
- Select a knowledge base from the list (circular references are automatically disabled)
- Choose a match action for each association:
- Block — similarity matches are treated as strong evidence for blocking
- Flag — similarity matches suggest the content should be flagged
- Inform — matches are provided as context only (default)
- Drag to reorder associations by priority (position 0 is queried first)
- Click Save
Via API:
# Replace all KB associations for a policy
curl -X PUT \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies/{policy_id}/knowledge-bases \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"knowledge_bases": [
{
"knowledge_base_id": "KB_UUID_1",
"match_action": "block",
"position": 0
},
{
"knowledge_base_id": "KB_UUID_2",
"match_action": "inform",
"position": 1
}
]
}'
# List current associations
curl https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies/{policy_id}/knowledge-bases \
-H "Authorization: Bearer YOUR_TOKEN"
# Check which KBs would be circular for an agent-scoped policy
curl "https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies/circular-knowledge-bases?agent_id=AGENT_UUID" \
-H "Authorization: Bearer YOUR_TOKEN"
Writing Effective Policies
When the built-in policy library doesn't cover your needs, you can write custom policy text. The policy text is the instruction given to the AI evaluator — it determines what the model looks for when screening content.
Policy Structure
An effective policy text should include:
- What to detect — a clear description of the content pattern to flag
- Examples — concrete examples of violating content (helps the model calibrate)
- Non-examples (optional) — examples of acceptable content that might look similar
- Severity guidance (optional) — what constitutes a minor vs. major violation
Detect any content that contains pricing information for competitor products.
Examples of violations:
- "CompetitorX charges $49/month for their basic plan"
- "Switching from RivalCo saves you 30%"
- "Their enterprise tier starts at $500/seat"
Not violations:
- "Our pricing starts at $15/month"
- "Compare plans on our pricing page"
- General industry pricing trends without naming competitors
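The structure above can be kept consistent across custom policies with a small template. This builder is purely illustrative (the section labels follow the recommended structure, not any required format):

```python
# Illustrative template for assembling policy text from the recommended components:
# what to detect, violation examples, and non-violation examples.
def build_policy_text(detect: str, violations: list[str], non_violations: list[str]) -> str:
    lines = [detect, "", "Examples of violations:"]
    lines += [f"- {v}" for v in violations]
    lines += ["", "Not violations:"]
    lines += [f"- {nv}" for nv in non_violations]
    return "\n".join(lines)

text = build_policy_text(
    "Detect any content that contains pricing information for competitor products.",
    ['"CompetitorX charges $49/month for their basic plan"'],
    ['"Our pricing starts at $15/month"'],
)
print(text)
```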
Best Practices
- Be specific. Vague policies like "flag bad content" produce inconsistent results. State exactly what constitutes a violation.
- Include 3–5 examples. Examples dramatically improve evaluator accuracy, especially for domain-specific policies.
- Test before deploying. Use the Test tab to verify your policy catches what you expect and doesn't over-flag acceptable content.
- Start with higher thresholds. Begin with flag = 0.6 and block = 0.9, then lower them after reviewing initial results.
- One concern per policy. A policy that tries to detect PII and brand violations will be less accurate than two separate policies.
- Use the Thorough tier for complex policies. Simple pattern-matching policies work well with the Fast tier, but nuanced judgment calls benefit from the Thorough tier.
- Review and iterate. Check the Review queue regularly and adjust policies based on false positive/negative patterns.
Example Policies
Medical advice detection:
Flag any content that provides specific medical diagnoses, treatment
recommendations, or medication dosage advice. General health information
and wellness tips are acceptable.
Violations:
- "Based on your symptoms, you likely have strep throat"
- "Take 400mg of ibuprofen every 6 hours"
- "You should discontinue your current medication"
Not violations:
- "Consider consulting a healthcare provider"
- "Regular exercise can improve cardiovascular health"
- "The CDC recommends annual flu vaccinations"
Financial advice detection:
Flag content that provides specific investment recommendations, price
predictions, or personalized financial advice.
Violations:
- "You should buy AAPL stock now"
- "Bitcoin will reach $200k by next year"
- "Move your 401k into bonds"
Not violations:
- "Diversification is a common investment strategy"
- "Historical market returns have averaged ~10% annually"
- "Consult a financial advisor for personalized advice"
Internal-only information:
Flag any content that references internal project codenames, unreleased
product features, or internal-only URLs.
Violations:
- References to "Project Phoenix" or "Project Titan"
- URLs containing "internal.company.com" or "staging.company.com"
- "The upcoming v3.0 release will include..."
Not violations:
- Publicly announced product features
- Public documentation URLs
- General product roadmap statements already shared externally
Monitoring & Review
Overview Dashboard
The Overview tab provides a real-time summary of governance activity:
- Total evaluations — how many pieces of content have been screened
- Pass / Flag / Block counts — breakdown by verdict
- Unresolved flags and blocks — items awaiting human review
- By screening point — where evaluations are occurring (source content, agent input, step output)
- By category — which policy categories are triggering most often
Use this dashboard to spot trends: a sudden spike in flags might indicate a policy that's too sensitive, while zero evaluations might mean governance isn't enabled where you expect it.
Reviewing Evaluations
The Review tab shows all evaluations that need attention:
- Filter by verdict (flag/block), screening point, date range, agent, or source
- Review each evaluation: see the content excerpt, policy name, confidence score, and AI explanation
- Resolve evaluations after review (see below)
Resolving Evaluations
When you resolve an evaluation, you're marking it as reviewed. This serves as an audit trail and clears the item from the unresolved queue.
- Click Resolve on an evaluation
- Optionally add a resolution note (e.g., "False positive — content is acceptable", "Confirmed violation — notified content team")
- The evaluation is timestamped with your name and note
Via API:
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/evaluations/{evaluation_id}/resolve \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"resolution_note": "False positive — acceptable context"}'
Testing Policies
How Testing Works
The Test tab lets you evaluate content against your policies without affecting real agent runs or source pulls:
- Go to Governance → Test
- Paste or type the content you want to test
- Optionally select a specific policy (or test against all active policies)
- Click Test
- Review the results: each policy produces an evaluation with a score, verdict, and explanation
Testing uses the same evaluator as production screening, so results accurately reflect what would happen in a live run.
Note: The Test tab is disabled when no active policies exist. Create or enable at least one policy to use it.
Testing Draft Policies
You can test content against a draft policy that hasn't been saved yet. This is useful when writing a new custom policy and you want to iterate on the policy text before committing it.
Draft testing lets you specify:
- Content — the text to evaluate
- Policy text — the draft policy text to evaluate against
- Thresholds — optional flag and block thresholds (defaults: 0.5 and 0.8)
- Knowledge base associations — optional list of knowledge bases to include in the evaluation, each with a match action and position
The draft policy is not persisted. Evaluation results are returned immediately but are not stored as permanent evaluations.
Via API:
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/test-draft \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"content": "Contact John Smith at john@example.com",
"policy_text": "Flag any content containing personally identifiable information.",
"flag_threshold": 0.5,
"block_threshold": 0.8,
"knowledge_base_associations": [
{
"knowledge_base_id": "KB_UUID",
"match_action": "inform",
"position": 0
}
]
}'
Opting Test Results into Review
By default, test evaluations (screening point policy_test) do not appear in the Review queue. If you want a test result to be visible in the Review tab — for example, to share it with a colleague or track it as part of a review workflow — you can opt it into review.
Opting a test evaluation into review changes its screening point from policy_test to policy_test_review. This makes it appear in the Review queue alongside production evaluations. The action is idempotent: if the evaluation is already in review, the operation succeeds without changes.
Note: Only evaluations with screening point policy_test can be opted into review. Attempting to opt a production evaluation (source content, agent input, or step output) will return an error.
Via API:
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/evaluations/{evaluation_id}/opt-into-review \
-H "Authorization: Bearer YOUR_TOKEN"
Interpreting Results
Each test result includes:
| Field | Description |
|---|---|
| Policy name | Which policy was evaluated |
| Score | Confidence from 0.0 (no match) to 1.0 (certain match) |
| Verdict | Pass, flag, or block based on the policy's thresholds |
| Explanation | AI-generated reasoning for the score |
Tips for iterating:
- If a policy flags content that should pass, consider raising the flag threshold or adding "not violation" examples to the policy text
- If a policy misses content that should be caught, consider lowering the flag threshold or adding more violation examples
- Test with both violating and non-violating content to verify the policy doesn't over-flag
Via API:
# Test against all active policies
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/test \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"content": "Contact John Smith at john@example.com or 555-0123"}'
# Test against a specific policy
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/test \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"content": "Contact John Smith at john@example.com",
"policy_id": "POLICY_UUID"
}'
Governance AI Assistant
The governance AI assistant lets you manage policies using natural language. It follows a propose-then-accept workflow:
- Describe what you want — e.g., "Add a PII policy with strict thresholds" or "Disable all bias policies"
- Review the proposed plan — the AI generates a list of specific actions (create, update, delete, enable, disable) with full details
- Accept or decline — if the plan looks right, accept it and the actions are executed automatically; if not, decline and try again
Example prompts:
- "Add PII and content safety policies with low flag thresholds"
- "Create a custom policy that flags medical advice"
- "Disable all policies except PII detection"
- "Lower the block threshold on my content safety policy to 0.7"
- "Set up governance policies and enable input screening"
The AI assistant has full context about your current policies, settings, and the sample policy library, so it can make informed recommendations.
Via API:
# Generate a plan
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/ai-assistant \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"user_input": "Add a PII detection policy with strict thresholds"}'
# Accept the plan
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/ai-assistant/{conversation_id}/accept \
-H "Authorization: Bearer YOUR_TOKEN"
# Decline the plan
curl -X POST \
https://api.seclai.com/authenticated/accounts/{account_id}/governance/ai-assistant/{conversation_id}/decline \
-H "Authorization: Bearer YOUR_TOKEN"
Integration Points
Governance integrates into the existing Seclai pipeline at multiple points:
| Integration | Where | What happens |
|---|---|---|
| Source content screening | During content source pulls | New items are evaluated before being indexed. Blocked items are withheld. |
| Agent input screening | Before an agent run processes input | User-provided input is checked. Blocked input prevents the run. |
| Step output screening | After each agent step completes | AI-generated outputs are checked. Blocked outputs are replaced with a governance notice. |
| Agent detail page | GovernancePanel on the agent detail page | Configure per-agent governance settings and create agent-scoped policy overrides with enforcement and inheritance controls. |
| Source detail page | GovernancePanel on the source detail page | Configure per-source governance settings and create source-scoped policy overrides. |
| Step edit modal | GovernancePanel in the step edit dialog | Create step-scoped policy overrides for individual agent steps. |
| Dashboard All Traces | Dashboard → Agents tab → All Traces | Select past runs and submit them for retroactive governance evaluation against active policies. |
Retroactive Evaluation
In addition to real-time screening, you can evaluate past agent runs against your current governance policies. This is useful when:
- You create a new policy and want to check whether recent runs would have been flagged
- You adjust policy thresholds and want to see how the change would affect historical runs
- You need to audit a batch of runs for compliance
How to use it:
- Go to Dashboard → Agents tab and scroll to the All Traces section
- Use the filters (status, agent, tag, evaluation, governance) to find the runs you want to evaluate
- Select one or more runs using the checkboxes
- Click Run Governance Eval — this button appears only when you have at least one active governance policy and at least one run is selected
- Confirm the evaluation in the modal
- Results are processed asynchronously and appear in the Gov column once complete
Retroactive evaluations use the same policies, thresholds, and evaluation tiers as real-time screening. Results are recorded as standard governance evaluations and appear in the Review queue.
API Reference
All governance endpoints are under:
/authenticated/accounts/{account_id}/governance/
Authentication: All endpoints require a valid access token via the Authorization: Bearer header.
Sample Policy Endpoints
Sample policies available for adoption:
| Method | Endpoint | Description |
|---|---|---|
| GET | /sample-policies | List available sample policies from the library, optionally filtered by category |
| GET | /sample-policies/{sample_slug} | Get a specific sample policy including its full policy text |
Policy Endpoints
CRUD operations for account governance policies:
| Method | Endpoint | Description |
|---|---|---|
| GET | /policies | List account policies (supports pagination and scope filters: agent_id, source_connection_id) |
| GET | /policies/{policy_id} | Get a specific policy by ID |
| POST | /policies | Create a new policy from a sample policy or custom text, with optional scope, thresholds, enforcement level, and inheritance mode |
| PATCH | /policies/{policy_id} | Update a policy's enabled status, thresholds, enforcement level, inheritance mode, or custom text |
| DELETE | /policies/{policy_id} | Soft-delete a governance policy |
| GET | /resource-policy-counts | Get policy counts grouped by resource (agent, source, step). Useful for governance indicators on resource lists. |
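To illustrate the create call, a `POST /policies` body might look like the sketch below. The concepts (custom text, scope, thresholds, enforcement level) come from the table above, but the exact field names are assumptions; consult the interactive API documentation at /docs for the authoritative schema.

```python
import json

# Hypothetical payload for POST /policies -- field names are illustrative only.
payload = {
    "custom_text": "Flag any output that contains personally identifiable information.",
    "agent_id": "agent-123",            # optional scope: limit the policy to one agent
    "enforcement_level": "flag",        # e.g. flag rather than block violating content
    "thresholds": {"confidence": 0.8},  # only act above this evaluator confidence
}
body = json.dumps(payload).encode("utf-8")
print(body.decode())
```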
Knowledge Base Association Endpoints
Link knowledge bases to policies for evidence-based evaluation:
| Method | Endpoint | Description |
|---|---|---|
| GET | /policies/{policy_id}/knowledge-bases | List all knowledge base associations for a policy, ordered by position |
| PUT | /policies/{policy_id}/knowledge-bases | Replace all knowledge base associations for a policy (atomic replacement) |
| GET | /policies/circular-knowledge-bases | Get knowledge base IDs that would create circular references for a given scope. Pass agent_id or source_connection_id as query parameters. |
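Because the `PUT` endpoint replaces all associations atomically, the request body is the complete desired state, in position order. A hypothetical body (field names are assumptions):

```python
import json

# Hypothetical body for PUT /policies/{policy_id}/knowledge-bases.
# The list is the full desired set of associations, ordered by position;
# any association omitted here would be removed.
associations = [
    {"knowledge_base_id": "kb-style-guide", "position": 0},
    {"knowledge_base_id": "kb-compliance-refs", "position": 1},
]
body = json.dumps(associations)
print(body)
```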
Settings Endpoints
Configure where and how governance screening is applied:
| Method | Endpoint | Description |
|---|---|---|
| GET | /settings | List all governance settings, optionally filtered by agent_id or source_connection_id |
| PUT | /settings | Create or update governance settings for a scope (account-wide, agent, source) |
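Since `PUT /settings` targets a scope, the three scope levels can be sketched as three request bodies. The tier names (fast, balanced, thorough) appear elsewhere on this page; the field names themselves are assumptions.

```python
import json

# Hypothetical PUT /settings bodies for the three scope levels.
# Omitting both agent_id and source_connection_id targets the whole account.
account_wide = {"enabled": True, "evaluation_tier": "balanced"}
per_agent = {"agent_id": "agent-123", "enabled": True, "evaluation_tier": "thorough"}
per_source = {"source_connection_id": "src-456", "enabled": False}

for settings in (account_wide, per_agent, per_source):
    print(json.dumps(settings))
```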
Evaluation Endpoints
View and manage screening results:
| Method | Endpoint | Description |
|---|---|---|
| GET | /stats | Aggregate governance statistics (pass/flag/block counts, unresolved totals) |
| GET | /evaluations | List evaluations with rich filtering: verdict, screening point, date range, agent, source, policy |
| POST | /evaluations/{evaluation_id}/resolve | Resolve a flagged or blocked evaluation with an optional resolution note |
| POST | /evaluations/{evaluation_id}/opt-into-review | Promote a policy_test evaluation to the review queue (changes screening point to policy_test_review) |
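For example, resolving a flagged evaluation with a note might look like the following. The path shape is from the table above; the body field name is an assumption.

```python
# Hypothetical sketch of POST /evaluations/{evaluation_id}/resolve.
evaluation_id = "eval-789"  # placeholder ID of a flagged or blocked evaluation
path = f"/evaluations/{evaluation_id}/resolve"
payload = {"resolution_note": "Reviewed; the flagged phrase is a product name, not PII."}
print(path)
```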
Audit Trail Endpoints
View the change history for governance policies:
| Method | Endpoint | Description |
|---|---|---|
| GET | /changes | List all governance policy changes for the account. Supports filters: change_type (created/updated/deleted), action, date range. |
| GET | /policies/{policy_id}/changes | List audit trail entries for a specific policy, showing all creates, updates, and deletes. |
Credit Estimation Endpoints
Get estimated credit costs for governance evaluations:
| Method | Endpoint | Description |
|---|---|---|
| GET | /credit-estimates | Get estimated min/max credit ranges per evaluation tier (fast, balanced, thorough) based on current usage rates. |
Testing Endpoints
Ad-hoc policy testing without affecting production:
| Method | Endpoint | Description |
|---|---|---|
| POST | /test | Test content against active policies. Optionally specify a policy_id to test against a single policy. |
| POST | /test-draft | Test content against an unsaved draft policy with optional thresholds and knowledge base associations. The draft is not persisted. |
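The two testing endpoints can be sketched as request bodies like these. The distinction (saved policies vs. an unsaved draft) is from the table above; the field names are assumptions.

```python
import json

# Hypothetical body for POST /test -- check content against active policies,
# optionally restricted to a single policy via policy_id.
test_body = {
    "content": "Please email jane.doe@example.com for details.",
    "policy_id": "policy-abc",  # optional: test against one policy only
}

# Hypothetical body for POST /test-draft -- check content against an
# unsaved draft policy; nothing is persisted.
draft_body = {
    "content": "Please email jane.doe@example.com for details.",
    "draft_policy_text": "Flag any personal email addresses.",
    "thresholds": {"confidence": 0.7},
}
print(json.dumps(test_body))
```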
AI Assistant Endpoints
Natural-language governance management using the propose-then-accept workflow:
| Method | Endpoint | Description |
|---|---|---|
| POST | /ai-assistant | Generate a governance plan from a natural language description |
| POST | /ai-assistant/{conversation_id}/accept | Accept and execute a previously proposed governance plan |
| POST | /ai-assistant/{conversation_id}/decline | Decline a proposed governance plan |
| GET | /ai-assistant/conversations | List previous AI assistant conversations |
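The propose-then-accept flow chains these endpoints together. A sketch, assuming the propose response returns a conversation ID (the request/response field names here are illustrative):

```python
# Step 1: propose -- POST /ai-assistant with a natural-language description.
propose = {
    "path": "/ai-assistant",
    "body": {"description": "Block PII in all customer-facing agents"},
}

# Step 2: the response is assumed to include a conversation_id for the plan.
conversation_id = "conv-001"  # placeholder, taken from the propose response

# Step 3: accept (execute) or decline the proposed plan.
accept = {"path": f"/ai-assistant/{conversation_id}/accept"}
decline = {"path": f"/ai-assistant/{conversation_id}/decline"}
print(accept["path"])
```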
For full request/response schemas, see the interactive API documentation at /docs while your API server is running.
MCP Tools
If you use an MCP-compatible client (Claude Desktop, Claude Code, Cursor), 28 governance tools are available — covering the same operations as the REST API plus the AI assistant.
Policy management:
| Tool | Description |
|---|---|
| list_governance_policy_documents | List available sample policies from the library, optionally filtered by category |
| get_governance_policy_document | Get a specific sample policy including its full policy text |
| list_governance_policies | List governance policies assigned to the account, with optional scope filters |
| get_governance_policy | Get a specific account governance policy by ID |
| create_governance_policy | Create a new policy from a sample policy or custom text, optionally scoped |
| update_governance_policy | Update a policy's enabled status, thresholds, enforcement level, or inheritance mode |
| delete_governance_policy | Soft-delete a governance policy |
Knowledge base associations:
| Tool | Description |
|---|---|
| list_policy_knowledge_bases | List knowledge base associations for a policy, ordered by position |
| set_policy_knowledge_bases | Replace all knowledge base associations for a policy (atomic replacement). Rejects circular references. |
| get_circular_knowledge_bases | Get knowledge base IDs that would create circular references. Optionally scope to an agent or source. |
Settings and statistics:
| Tool | Description |
|---|---|
| get_governance_settings | Get governance settings for a scope (account, agent, or source) |
| update_governance_settings | Update governance settings (enabled, review flags, evaluation tier) |
| list_governance_settings | List all governance settings, optionally filtered by agent |
| get_governance_stats | Get aggregate governance statistics (pass/flag/block counts, unresolved counts) |
| get_governance_credit_estimates | Get estimated credit costs per evaluation tier (fast, balanced, thorough) |
Evaluations and testing:
| Tool | Description |
|---|---|
| list_governance_evaluations | List governance evaluations with filtering by verdict, screening point, and date range |
| resolve_governance_evaluation | Resolve a flagged or blocked evaluation with an optional note |
| bulk_resolve_governance_evaluations | Resolve multiple evaluations at once with an optional shared resolution note |
| opt_evaluation_into_review | Promote a test evaluation to the review queue (changes screening point to policy_test_review) |
| test_governance_policy | Test content against active governance policies and see evaluation results |
| test_draft_governance_policy | Test content against an unsaved draft policy with optional thresholds and KB associations |
Audit trail and cost estimation:
| Tool | Description |
|---|---|
| list_governance_audit_trail | List all governance policy changes for the account, with optional date and type filters |
| list_governance_policy_changes | List the audit trail for a specific governance policy |
| get_governance_credit_estimates | Get estimated credit costs per evaluation tier (fast, balanced, thorough) |
AI assistant:
| Tool | Description |
|---|---|
| generate_governance_plan | Generate a plan for policy changes from a natural language description |
| accept_governance_plan | Accept and execute a previously proposed governance plan |
| decline_governance_plan | Decline a proposed governance plan |
| list_governance_conversations | List recent governance AI assistant conversations (plans, outcomes) |
See the MCP Server documentation for setup instructions and usage examples.
Exporting
Governance data can be exported for offline analysis, compliance audits, or archival. Two resource types are available:
Governance Policies
Export all governance policies, including policy text, thresholds, scope, enforcement level, and knowledge base associations. Supported formats: JSON and JSONL.
- UI: Go to Governance → Policies and click the Export button.
- API:
POST /authenticated/resource-exportswithresource_type: "governance_policies". - MCP: Use the
create_resource_exporttool withresource_type: "governance_policies".
See Export Formats → Governance Policies for the full file schema.
Governance Evaluations
Export governance evaluation results — screening verdicts, confidence scores, AI explanations, and resolution notes. Supported formats: JSON, JSONL, and CSV.
- UI: Go to Governance → Review and click the Export button. You'll see an estimate of the record count before confirming.
- API:
POST /authenticated/resource-exportswithresource_type: "governance_evals". - MCP: Use the
create_resource_exporttool withresource_type: "governance_evals".
See Export Formats → Governance Evaluations for the full file schema and available filter options.
Audit trail: Governance exports (both policies and evaluations) are automatically recorded in the audit trail for compliance tracking.
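The export API calls above can be sketched as a single request body. The endpoint path and `resource_type` values are from this page; the format field is an assumption based on the formats listed.

```python
import json

# Hypothetical body for POST /authenticated/resource-exports.
export_request = {
    "resource_type": "governance_evals",  # or "governance_policies"
    "format": "csv",  # evaluations support JSON, JSONL, and CSV; policies JSON and JSONL
}
print(json.dumps(export_request))
```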
Permissions
| Role | Access |
|---|---|
| Owner / Admin | Full access: configure policies, settings, resolve evaluations, use AI assistant |
| Member | Full access (same as admin for governance) |
| Viewer | No access to governance features |
Next Steps
- Agents — Learn about the agents that governance screens
- Content Sources — Learn about the sources whose content governance can screen
- MCP Server — Manage governance from AI coding tools
- API Examples — Common API integration patterns
- Alerts — Set up monitoring for agent runs and source pulls