Governance

Governance lets you automatically screen agent outputs and incoming source content against a configurable set of safety, privacy, and compliance policies. When content violates a policy, Seclai flags or blocks it so you can review it before it reaches end users.

Plan Requirement

Governance features are not available on all plans. Your subscription must include the governance access entitlement. When governance access is not included:

  • The Governance page shows an upgrade prompt with a blurred overlay instead of the normal interface
  • REST API requests to any /governance/ endpoint will be rejected
  • MCP governance tools will return an error: "Governance is not available on your current plan"

To check whether your plan includes governance, visit Settings → Account and look for the governance access indicator, or check the plan_governance_access field in the account API response.

If you need governance but your current plan does not include it, upgrade from the Settings → Subscription page.


Overview

The Governance system has five parts:

  1. Policies — Rules that define what content is acceptable (e.g. "block personally identifiable information", "flag biased language")
  2. Settings — Controls that determine where and how screening is applied (which agents, sources, and steps)
  3. Evaluations — Individual screening results generated when content is checked against a policy
  4. Knowledge Base Associations — Optional links between policies and knowledge bases that provide evidence-based evaluation using similarity search
  5. AI Assistant — A natural-language interface for managing policies using a propose-then-accept workflow

You manage all five from the Governance section in the left sidebar.


Core Concepts

Policies

A governance policy defines what the evaluator checks for. Each policy consists of:

  • Policy text — The instruction that tells the AI evaluator what to look for. Can come from the built-in sample policy library or be written as custom text.
  • Thresholds — A flag threshold and a block threshold that determine the severity response based on the evaluator's confidence score.
  • Scope — Where the policy applies: account-wide, a specific agent, a specific step, or a specific source connection.
  • Enabled — Whether the policy is active. Disabled policies are skipped during screening.
  • Enforcement level — Controls how child scopes can modify this policy (flexible, required, or locked). See Enforcement Levels.
  • Inheritance mode — (Scoped policies only.) How a scoped policy relates to its parent-scope policy (inherit, merge, or disable). Account-level policies have no parent, so this field is not shown. See Inheritance Modes.
  • Knowledge base associations — Optional links to knowledge bases that provide similarity-based evidence during evaluation. See Knowledge Base Associations.

Policy Categories

The sample policies are organized into five categories:

  • Content Safety (slug: content_safety) — Harmful, violent, hateful, or sexually explicit content
  • PII (slug: pii) — Personally identifiable information such as names, emails, phone numbers, and government IDs
  • Bias (slug: bias) — Biased, discriminatory, or stereotyping language
  • Legal (slug: legal) — Legal risks including copyright violations, defamation, and regulatory non-compliance
  • Brand (slug: brand) — Off-brand messaging, competitor mentions, or tone violations

Verdicts

When content is evaluated against a policy, the evaluator produces a confidence score from 0.0 (no match) to 1.0 (certain match). That score is compared against the policy's thresholds to produce one of three verdicts:

  • Pass — Score < flag threshold. Content proceeds normally.
  • Flag — Score ≥ flag threshold and < block threshold. Content proceeds but is queued for human review.
  • Block — Score ≥ block threshold. Content is withheld until a reviewer resolves the evaluation.

Example: With the default thresholds (flag = 0.5, block = 0.8), a score of 0.65 produces a flag verdict. The content is delivered but appears in the Review queue for a human to check.
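As a sketch, the threshold comparison reduces to a simple function. This is illustrative Python, not Seclai's actual evaluator code:

```python
def verdict(score: float, flag_threshold: float = 0.5, block_threshold: float = 0.8) -> str:
    # Compare the evaluator's confidence score against the policy thresholds.
    if score >= block_threshold:
        return "block"
    if score >= flag_threshold:
        return "flag"
    return "pass"

print(verdict(0.65))  # flag: 0.65 is at or above 0.5 but below 0.8
```

Note that both comparisons are inclusive at the lower bound, so a score exactly equal to a threshold triggers that verdict.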

Screening Points

Governance evaluations happen at specific points in the pipeline:

  • Source content — Runs when a content source imports new items. Typical use: screen incoming data before it enters your knowledge base.
  • Agent input — Runs before an agent run begins processing. Typical use: screen user-provided input to agents.
  • Step output — Runs after an agent step completes. Typical use: screen AI-generated outputs before they're returned.
  • Policy test — Runs when you manually test content in the Test tab. Typical use: ad-hoc testing during policy development.
  • Policy test (in review) — Runs after opting a test evaluation into review. Typical use: promote test results to the Review queue for tracking.

Tip: Policy test evaluations are not visible in the Review queue by default. If you want a test result to appear alongside production evaluations for review, use the opt-into-review action.

Evaluation Tiers

The evaluation tier controls which AI model the evaluator uses, trading off between speed, cost, and thoroughness:

  • Fast — Fastest and lowest cost; good for clear-cut violations. Best for high-volume screening and simple policies.
  • Balanced — Moderate speed and mid-range cost; strong general-purpose accuracy. Best for most production use cases.
  • Thorough — Slowest and highest cost; best nuanced judgment. Best for complex policies and legal/compliance review.

You can set the evaluation tier at any scope level (account-wide, per-agent, per-source). Lower scopes inherit from their parent unless overridden.


Scoping Model

Both policies and settings support hierarchical scoping:

Account (broadest)
  ├─ Agent
  │    └─ Agent Step
  └─ Source Connection

How policy scoping works:

  • Account-wide policies apply to all agents and sources unless a narrower scope exists.
  • Agent-scoped policies apply only to that agent's runs. They are additive — the agent gets both account-wide and agent-scoped policies.
  • Step-scoped policies apply only to a specific step within an agent.
  • Source-scoped policies apply only to content from that source.
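The additive model above can be sketched as follows. This is illustrative Python; the `(scope_type, scope_id)` tuple is an assumption made for the example, not Seclai's data model:

```python
def applicable_policies(policies, agent_id=None, step_id=None, source_id=None):
    """Collect the policies that apply to a run under the additive model:
    account-wide policies always apply, plus any policies scoped to the
    matching agent, step, or source."""
    selected = []
    for policy in policies:
        scope_type, scope_id = policy["scope"]
        if (scope_type == "account"
                or (scope_type == "agent" and agent_id is not None and scope_id == agent_id)
                or (scope_type == "step" and step_id is not None and scope_id == step_id)
                or (scope_type == "source" and source_id is not None and scope_id == source_id)):
            selected.append(policy["name"])
    return selected

policies = [
    {"name": "PII", "scope": ("account", None)},
    {"name": "Brand", "scope": ("agent", "support-agent")},
    {"name": "Safety", "scope": ("source", "crm-feed")},
]
print(applicable_policies(policies, agent_id="support-agent"))  # ['PII', 'Brand']
```

A run for "support-agent" gets both the account-wide PII policy and the agent-scoped Brand policy; the source-scoped Safety policy applies only to content from "crm-feed".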

How settings scoping works:

Settings use a most-specific-wins override model. If an agent has its own governance settings, those override the account-wide settings for that agent. If a specific step has settings, those override the agent-level settings for that step.

Example: You enable governance account-wide with the "balanced" tier but set one high-volume agent to use the "fast" tier. The agent-scoped setting overrides only the tier; all other settings (e.g., review_output) still inherit from the account level.
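A minimal sketch of this most-specific-wins resolution, assuming each scope stores only its overridden fields and `None` means "inherit from the parent":

```python
def effective_settings(account, agent=None, step=None):
    """Resolve governance settings field by field: start from the account
    defaults, then apply agent and step overrides where a value is set."""
    result = dict(account)
    for override in (agent or {}, step or {}):
        for key, value in override.items():
            if value is not None:   # None means "inherit from parent scope"
                result[key] = value
    return result

account = {"governance_enabled": True, "review_output": True, "evaluation_tier": "balanced"}
agent = {"evaluation_tier": "fast"}   # only the tier is overridden
print(effective_settings(account, agent))
# {'governance_enabled': True, 'review_output': True, 'evaluation_tier': 'fast'}
```

The agent ends up with the fast tier while review_output and governance_enabled still come from the account level, matching the example above.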

Enforcement Levels

Every policy has an enforcement level that controls how child scopes can interact with it. This is set on the parent-scope policy.

  • Flexible — Child scopes may merge alongside this policy or disable it entirely. This is the default.
  • Required — Child scopes may merge (evaluate alongside) but cannot disable this policy.
  • Locked — Child scopes cannot merge or disable. The policy applies exactly as defined at this scope.

Example: An account-wide PII policy with enforcement level Required ensures that every agent and source is always screened for PII. An agent-scoped policy can merge additional rules alongside it, but cannot suppress the PII check.

Inheritance Modes

Note: Inheritance mode only applies to scoped policies (agent, step, or source). Account-level policies have no parent scope, so inheritance mode is not shown when creating or editing account-wide policies.

When you create a scoped policy (agent-level, step-level, or source-level), the inheritance mode declares how the child relates to the parent:

  • Inherit — The parent policy applies unchanged at this scope.
  • Merge — Both the parent policy and this child's evaluation run side by side. Useful for adding stricter thresholds at a narrower scope.
  • Disable — Suppress the parent policy at this scope. Only allowed when the parent's enforcement level is Flexible.

Validation rules:

  • Setting inheritance mode to Disable when the parent's enforcement level is Required or Locked will be rejected.
  • Setting inheritance mode to Merge when the parent's enforcement level is Locked will be rejected.
  • Inherit is always allowed regardless of the parent's enforcement level.
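The validation rules can be summarized as an allow-table. This is illustrative Python mirroring the rules above, not Seclai's implementation:

```python
# Which inheritance modes a child scope may use, keyed by the parent
# policy's enforcement level (per the validation rules above).
ALLOWED_MODES = {
    "flexible": {"inherit", "merge", "disable"},  # child may do anything
    "required": {"inherit", "merge"},             # child cannot disable
    "locked":   {"inherit"},                      # child cannot merge or disable
}

def validate_inheritance(parent_enforcement: str, child_mode: str) -> None:
    # Reject combinations the enforcement level forbids.
    if child_mode not in ALLOWED_MODES[parent_enforcement]:
        raise ValueError(
            f"inheritance_mode {child_mode!r} is not allowed when the parent's "
            f"enforcement level is {parent_enforcement!r}"
        )

validate_inheritance("flexible", "disable")  # allowed
validate_inheritance("required", "merge")    # allowed
```

Note that "inherit" appears in every row, which is why it is always accepted regardless of the parent's enforcement level.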

Knowledge Base Associations

Policies can be linked to one or more knowledge bases to enable evidence-based evaluation. When a policy has knowledge base associations, the evaluator performs a similarity search against each linked knowledge base before scoring and uses the retrieved content as evidence in the evaluation.

This is useful for:

  • Known-bad content detection — link a knowledge base containing examples of prohibited content; high-similarity matches provide strong evidence for blocking.
  • Reference-based compliance — link a knowledge base with regulatory text or brand guidelines so the evaluator can check content against authoritative references.
  • Contextual screening — provide domain-specific context that helps the evaluator make more informed judgments.

Match Actions

Each knowledge base association has a match action that tells the evaluator how to interpret similarity matches:

  • Block — High-similarity matches are strong evidence the content should be blocked (e.g. known-bad examples).
  • Flag — High-similarity matches suggest the content should be flagged for human review.
  • Inform — Matches are provided as context and reference to the evaluator LLM with no enforcement bias. This is the default.

Each association also has a position (starting from 0) that controls the order in which knowledge bases are queried. Lower positions are queried first.
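A sketch of how the query order falls out of the position field, assuming each association is a dict with `knowledge_base_id`, `match_action`, and `position` keys (an assumption for the example, not the API's wire format):

```python
def query_plan(associations):
    """Order a policy's knowledge base associations by position (lowest
    first) and pair each with the match action the evaluator should apply."""
    ordered = sorted(associations, key=lambda a: a["position"])
    return [(a["knowledge_base_id"], a["match_action"]) for a in ordered]

plan = query_plan([
    {"knowledge_base_id": "kb-2", "match_action": "inform", "position": 1},
    {"knowledge_base_id": "kb-1", "match_action": "block", "position": 0},
])
print(plan)  # [('kb-1', 'block'), ('kb-2', 'inform')]
```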

Circularity Detection

When a policy is scoped to an agent or source, linking a knowledge base that is sourced from the same agent or source can create a circular reference — the governance system would be using the content it's supposed to screen as evidence for screening decisions.

Seclai automatically detects circular knowledge bases and prevents you from creating these associations:

  • In the UI: Knowledge bases that would create a circular reference are marked and cannot be selected when editing a policy's associations.
  • In the API and MCP: The set_policy_knowledge_bases endpoint returns an error if any of the specified knowledge bases would create a circular reference.
  • Detection endpoint: Use the GET /policies/circular-knowledge-bases endpoint or the get_circular_knowledge_bases MCP tool to retrieve the list of knowledge base IDs that would be circular for a given scope. Pass agent_id or source_connection_id to check for a specific scope.
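Conceptually, the detection rule reduces to a provenance check. In this sketch the field names `sourced_from_agent_id` and `sourced_from_connection_id` are hypothetical stand-ins for however Seclai records where a knowledge base's content comes from:

```python
def circular_knowledge_bases(knowledge_bases, agent_id=None, source_connection_id=None):
    """Flag knowledge bases whose content is fed by the same agent or source
    that a scoped policy would screen (a circular reference)."""
    circular = []
    for kb in knowledge_bases:
        if agent_id is not None and kb.get("sourced_from_agent_id") == agent_id:
            circular.append(kb["id"])
        elif (source_connection_id is not None
              and kb.get("sourced_from_connection_id") == source_connection_id):
            circular.append(kb["id"])
    return circular

kbs = [
    {"id": "kb-1", "sourced_from_agent_id": "agent-a"},
    {"id": "kb-2", "sourced_from_connection_id": "source-x"},
]
print(circular_knowledge_bases(kbs, agent_id="agent-a"))  # ['kb-1']
```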

Getting Started

Enabling Governance

Once your plan supports governance:

  1. Go to Governance in the left sidebar
  2. The Overview tab shows your current status
  3. Use the Settings to enable governance and configure screening options

Once enabled at the account level, governance will screen content at the configured screening points.

Creating a Policy

To create an account-wide policy from a sample policy:

  1. Go to Governance → Policies
  2. Click Add Policy
  3. Toggle off "Use AI assistant" to see the manual form
  4. In the Sample policies section, browse the examples and click one to populate the form
  5. Optionally adjust the name, flag and block thresholds, and enforcement level
  6. Click Create Policy

To create an account-wide policy with custom text:

  1. Click Add Policy
  2. Toggle off "Use AI assistant"
  3. Write your own policy text from scratch (see Writing Effective Policies)
  4. Set thresholds and enforcement level
  5. Click Create Policy

Note: The Add Policy page creates account-wide policies. Account-wide policies have no parent scope, so inheritance mode is not shown here. To create scoped policies (agent, step, or source level), use the Governance panel on the resource's detail page — see Scoped Policy Overrides below.

Via API:

# Account-wide policy from a sample policy
curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "sample_slug": "content-safety",
    "policy_name": "Content Safety",
    "policy_text": "Do not allow harmful, violent, or explicit content.",
    "category": "content_safety",
    "flag_threshold": 0.5,
    "block_threshold": 0.8,
    "enforcement_level": "flexible"
  }'

# Custom policy text (account-wide)
curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "policy_name": "Competitor Mentions",
    "policy_text": "Flag any content that mentions competitor product names.",
    "category": "brand",
    "flag_threshold": 0.4,
    "block_threshold": 0.9,
    "enforcement_level": "required"
  }'

# Agent-scoped policy override (inheritance_mode applies to scoped policies)
curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "sample_slug": "content-safety",
    "policy_name": "Content Safety",
    "policy_text": "Do not allow harmful, violent, or explicit content.",
    "category": "content_safety",
    "agent_id": "AGENT_UUID",
    "enforcement_level": "required",
    "inheritance_mode": "merge"
  }'

Scoped Policy Overrides

In addition to account-wide policies (created from the Policies page), you can create scoped policy overrides that apply only to a specific agent, agent step, or content source. Scoped policies are created from the Governance panel on each resource's detail page.

For agents:

  1. Open an agent's detail page
  2. Switch to the Governance tab
  3. In the Policy Overrides section, click Add Policy Override
  4. Choose a sample policy or enter custom policy text
  5. Set the enforcement level and inheritance mode — since scoped policies have a parent (the account-level policy), inheritance mode controls how this override relates to the parent
  6. Click Create

For agent steps:

  1. Open an agent's detail page and click Edit on a step
  2. In the step edit modal, scroll to the Governance tab
  3. Follow the same process as above — the policy will be scoped to that specific step

For content sources:

  1. Open a source connection's detail page
  2. Switch to the Governance tab
  3. Follow the same process — the policy will be scoped to that source

Each scoped policy also has a link icon that navigates to the full policy edit page, where you can adjust thresholds, view knowledge base associations, and see the policy's scope badge.

Tip: Use the create_governance_policy MCP tool with agent_id, agent_step_id, or source_connection_id parameters to create scoped policies programmatically. See the MCP Tools section.

Configuring Thresholds

Each policy has two thresholds that control how strictly it's enforced:

  • Flag threshold (default 0.5) — Content scoring at or above this value is flagged for review.
  • Block threshold (default 0.8) — Content scoring at or above this value is blocked entirely.

Tuning guidelines:

  • Lower the flag threshold (e.g. 0.3) to catch more borderline cases at the cost of more false positives
  • Raise the flag threshold (e.g. 0.7) to reduce review noise, only flagging high-confidence matches
  • Lower the block threshold (e.g. 0.6) for zero-tolerance policies where you'd rather over-block than miss something
  • Raise the block threshold (e.g. 0.95) when you only want to block near-certain violations
  • The block threshold must always be ≥ the flag threshold
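To see how tuning moves the verdict for a fixed score, here is an illustrative check; the assert mirrors the block ≥ flag rule above:

```python
def verdict(score, flag_threshold, block_threshold):
    # A policy's block threshold must be at or above its flag threshold.
    assert block_threshold >= flag_threshold, "block threshold must be >= flag threshold"
    if score >= block_threshold:
        return "block"
    if score >= flag_threshold:
        return "flag"
    return "pass"

score = 0.65
print(verdict(score, 0.5, 0.8))  # flag  (defaults)
print(verdict(score, 0.7, 0.9))  # pass  (raised flag threshold cuts review noise)
print(verdict(score, 0.4, 0.6))  # block (lowered block threshold, zero-tolerance)
```

The same score of 0.65 passes, flags, or blocks depending entirely on where the thresholds sit.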

Use the Test tab to experiment with different thresholds before applying them to production policies.

Configuring Settings

Governance settings control where screening happens:

  • Governance enabled (default: true) — Master toggle for governance at this scope.
  • Review output (default: true) — Screen agent step outputs against policies.
  • Review input (default: false) — Screen agent/user inputs and imported source content against policies.
  • Evaluation tier (default: null, inherits) — AI model tier: fast, balanced, or thorough.

Settings can be configured at the account level via the API or MCP tools, or per-agent/per-source from the resource's detail page (using the Governance panel).

Via API:

# Enable input screening for a specific source
curl -X PUT \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/settings \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "source_connection_id": "SOURCE_UUID",
    "review_input": true,
    "governance_enabled": true
  }'

Linking Knowledge Bases

To enhance a policy with evidence-based evaluation, link one or more knowledge bases:

  1. Go to Governance → Policies
  2. Click on a policy to open its detail view
  3. In the Knowledge Bases section, click Add Knowledge Base
  4. Select a knowledge base from the list (circular references are automatically disabled)
  5. Choose a match action for each association:
    • Block — similarity matches are treated as strong evidence for blocking
    • Flag — similarity matches suggest the content should be flagged
    • Inform — matches are provided as context only (default)
  6. Drag to reorder associations by priority (position 0 is queried first)
  7. Click Save

Via API:

# Replace all KB associations for a policy
curl -X PUT \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies/{policy_id}/knowledge-bases \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "knowledge_bases": [
      {
        "knowledge_base_id": "KB_UUID_1",
        "match_action": "block",
        "position": 0
      },
      {
        "knowledge_base_id": "KB_UUID_2",
        "match_action": "inform",
        "position": 1
      }
    ]
  }'

# List current associations
curl https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies/{policy_id}/knowledge-bases \
  -H "Authorization: Bearer YOUR_TOKEN"

# Check which KBs would be circular for an agent-scoped policy
curl "https://api.seclai.com/authenticated/accounts/{account_id}/governance/policies/circular-knowledge-bases?agent_id=AGENT_UUID" \
  -H "Authorization: Bearer YOUR_TOKEN"

Writing Effective Policies

When the built-in policy library doesn't cover your needs, you can write custom policy text. The policy text is the instruction given to the AI evaluator — it determines what the model looks for when screening content.

Policy Structure

An effective policy text should include:

  1. What to detect — a clear description of the content pattern to flag
  2. Examples — concrete examples of violating content (helps the model calibrate)
  3. Non-examples (optional) — examples of acceptable content that might look similar
  4. Severity guidance (optional) — what constitutes a minor vs. major violation

Example policy text:

Detect any content that contains pricing information for competitor products.

Examples of violations:
- "CompetitorX charges $49/month for their basic plan"
- "Switching from RivalCo saves you 30%"
- "Their enterprise tier starts at $500/seat"

Not violations:
- "Our pricing starts at $15/month"
- "Compare plans on our pricing page"
- General industry pricing trends without naming competitors

Best Practices

  • Be specific. Vague policies like "flag bad content" produce inconsistent results. State exactly what constitutes a violation.
  • Include 3–5 examples. Examples dramatically improve evaluator accuracy, especially for domain-specific policies.
  • Test before deploying. Use the Test tab to verify your policy catches what you expect and doesn't over-flag acceptable content.
  • Start with higher thresholds. Begin with flag = 0.6 and block = 0.9, then lower them after reviewing initial results.
  • One concern per policy. A policy that tries to detect PII and brand violations will be less accurate than two separate policies.
  • Use the Thorough tier for complex policies. Simple pattern-matching policies work well with the Fast tier, but nuanced judgment calls benefit from the Thorough tier.
  • Review and iterate. Check the Review queue regularly and adjust policies based on false positive/negative patterns.

Example Policies

Medical advice detection:

Flag any content that provides specific medical diagnoses, treatment
recommendations, or medication dosage advice. General health information
and wellness tips are acceptable.

Violations:
- "Based on your symptoms, you likely have strep throat"
- "Take 400mg of ibuprofen every 6 hours"
- "You should discontinue your current medication"

Not violations:
- "Consider consulting a healthcare provider"
- "Regular exercise can improve cardiovascular health"
- "The CDC recommends annual flu vaccinations"

Financial advice detection:

Flag content that provides specific investment recommendations, price
predictions, or personalized financial advice.

Violations:
- "You should buy AAPL stock now"
- "Bitcoin will reach $200k by next year"
- "Move your 401k into bonds"

Not violations:
- "Diversification is a common investment strategy"
- "Historical market returns have averaged ~10% annually"
- "Consult a financial advisor for personalized advice"

Internal-only information:

Flag any content that references internal project codenames, unreleased
product features, or internal-only URLs.

Violations:
- References to "Project Phoenix" or "Project Titan"
- URLs containing "internal.company.com" or "staging.company.com"
- "The upcoming v3.0 release will include..."

Not violations:
- Publicly announced product features
- Public documentation URLs
- General product roadmap statements already shared externally

Monitoring & Review

Overview Dashboard

The Overview tab provides a real-time summary of governance activity:

  • Total evaluations — how many pieces of content have been screened
  • Pass / Flag / Block counts — breakdown by verdict
  • Unresolved flags and blocks — items awaiting human review
  • By screening point — where evaluations are occurring (source content, agent input, step output)
  • By category — which policy categories are triggering most often

Use this dashboard to spot trends: a sudden spike in flags might indicate a policy that's too sensitive, while zero evaluations might mean governance isn't enabled where you expect it.

Reviewing Evaluations

The Review tab shows all evaluations that need attention:

  1. Filter by verdict (flag/block), screening point, date range, agent, or source
  2. Review each evaluation: see the content excerpt, policy name, confidence score, and AI explanation
  3. Resolve evaluations after review (see below)

Resolving Evaluations

When you resolve an evaluation, you're marking it as reviewed. This serves as an audit trail and clears the item from the unresolved queue.

  • Click Resolve on an evaluation
  • Optionally add a resolution note (e.g., "False positive — content is acceptable", "Confirmed violation — notified content team")
  • The evaluation is timestamped with your name and note

Via API:

curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/evaluations/{evaluation_id}/resolve \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"resolution_note": "False positive — acceptable context"}'

Testing Policies

How Testing Works

The Test tab lets you evaluate content against your policies without affecting real agent runs or source pulls:

  1. Go to Governance → Test
  2. Paste or type the content you want to test
  3. Optionally select a specific policy (or test against all active policies)
  4. Click Test
  5. Review the results: each policy produces an evaluation with a score, verdict, and explanation

Testing uses the same evaluator as production screening, so results accurately reflect what would happen in a live run.

Note: The Test tab is disabled when no active policies exist. Create or enable at least one policy to use it.

Testing Draft Policies

You can test content against a draft policy that hasn't been saved yet. This is useful when writing a new custom policy and you want to iterate on the policy text before committing it.

Draft testing lets you specify:

  • Content — the text to evaluate
  • Policy text — the draft policy text to evaluate against
  • Thresholds — optional flag and block thresholds (defaults: 0.5 and 0.8)
  • Knowledge base associations — optional list of knowledge bases to include in the evaluation, each with a match action and position

The draft policy is not persisted. Evaluation results are returned immediately but are not stored as permanent evaluations.

Via API:

curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/test-draft \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Contact John Smith at john@example.com",
    "policy_text": "Flag any content containing personally identifiable information.",
    "flag_threshold": 0.5,
    "block_threshold": 0.8,
    "knowledge_base_associations": [
      {
        "knowledge_base_id": "KB_UUID",
        "match_action": "inform",
        "position": 0
      }
    ]
  }'

Opting Test Results into Review

By default, test evaluations (screening point policy_test) do not appear in the Review queue. If you want a test result to be visible in the Review tab — for example, to share it with a colleague or track it as part of a review workflow — you can opt it into review.

Opting a test evaluation into review changes its screening point from policy_test to policy_test_review. This makes it appear in the Review queue alongside production evaluations. The action is idempotent: if the evaluation is already in review, the operation succeeds without changes.
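The promotion rule can be sketched as a small state transition. This is illustrative Python, not Seclai's implementation:

```python
def opt_into_review(evaluation: dict) -> dict:
    """Promote a test evaluation into the review queue. Idempotent: an
    already-promoted evaluation passes through unchanged; production
    evaluations are rejected."""
    point = evaluation["screening_point"]
    if point == "policy_test_review":
        return evaluation  # already in review: no-op
    if point != "policy_test":
        raise ValueError("only policy_test evaluations can be opted into review")
    return {**evaluation, "screening_point": "policy_test_review"}

promoted = opt_into_review({"id": "ev-1", "screening_point": "policy_test"})
print(promoted["screening_point"])  # policy_test_review
```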

Note: Only evaluations with screening point policy_test can be opted into review. Attempting to opt a production evaluation (source content, agent input, or step output) will return an error.

Via API:

curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/evaluations/{evaluation_id}/opt-into-review \
  -H "Authorization: Bearer YOUR_TOKEN"

Interpreting Results

Each test result includes:

  • Policy name — Which policy was evaluated.
  • Score — Confidence from 0.0 (no match) to 1.0 (certain match).
  • Verdict — Pass, flag, or block based on the policy's thresholds.
  • Explanation — AI-generated reasoning for the score.

Tips for iterating:

  • If a policy flags content that should pass, consider raising the flag threshold or adding "not violation" examples to the policy text
  • If a policy misses content that should be caught, consider lowering the flag threshold or adding more violation examples
  • Test with both violating and non-violating content to verify the policy doesn't over-flag

Via API:

# Test against all active policies
curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/test \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"content": "Contact John Smith at john@example.com or 555-0123"}'

# Test against a specific policy
curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/test \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Contact John Smith at john@example.com",
    "policy_id": "POLICY_UUID"
  }'

Governance AI Assistant

The governance AI assistant lets you manage policies using natural language. It follows a propose-then-accept workflow:

  1. Describe what you want — e.g., "Add a PII policy with strict thresholds" or "Disable all bias policies"
  2. Review the proposed plan — the AI generates a list of specific actions (create, update, delete, enable, disable) with full details
  3. Accept or decline — if the plan looks right, accept it and the actions are executed automatically; if not, decline and try again

Example prompts:

  • "Add PII and content safety policies with low flag thresholds"
  • "Create a custom policy that flags medical advice"
  • "Disable all policies except PII detection"
  • "Lower the block threshold on my content safety policy to 0.7"
  • "Set up governance policies and enable input screening"

The AI assistant has full context about your current policies, settings, and the sample policy library, so it can make informed recommendations.

Via API:

# Generate a plan
curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/ai-assistant \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"user_input": "Add a PII detection policy with strict thresholds"}'

# Accept the plan
curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/ai-assistant/{conversation_id}/accept \
  -H "Authorization: Bearer YOUR_TOKEN"

# Decline the plan
curl -X POST \
  https://api.seclai.com/authenticated/accounts/{account_id}/governance/ai-assistant/{conversation_id}/decline \
  -H "Authorization: Bearer YOUR_TOKEN"

Integration Points

Governance integrates into the existing Seclai pipeline at multiple points:

  • Source content screening — During content source pulls. New items are evaluated before being indexed; blocked items are withheld.
  • Agent input screening — Before an agent run processes input. User-provided input is checked; blocked input prevents the run.
  • Step output screening — After each agent step completes. AI-generated outputs are checked; blocked outputs are replaced with a governance notice.
  • Agent detail page — Governance panel on the agent detail page. Configure per-agent governance settings and create agent-scoped policy overrides with enforcement and inheritance controls.
  • Source detail page — Governance panel on the source detail page. Configure per-source governance settings and create source-scoped policy overrides.
  • Step edit modal — Governance panel in the step edit dialog. Create step-scoped policy overrides for individual agent steps.
  • Dashboard All Traces — Dashboard → Agents tab → All Traces. Select past runs and submit them for retroactive governance evaluation against active policies.

Retroactive Evaluation

In addition to real-time screening, you can evaluate past agent runs against your current governance policies. This is useful when:

  • You create a new policy and want to check whether recent runs would have been flagged
  • You adjust policy thresholds and want to see how the change would affect historical runs
  • You need to audit a batch of runs for compliance

How to use it:

  1. Go to Dashboard → Agents tab and scroll to the All Traces section
  2. Use the filters (status, agent, tag, evaluation, governance) to find the runs you want to evaluate
  3. Select one or more runs using the checkboxes
  4. Click Run Governance Eval — this button appears only when you have at least one active governance policy and at least one run is selected
  5. Confirm the evaluation in the modal
  6. Results are processed asynchronously and appear in the Gov column once complete

Retroactive evaluations use the same policies, thresholds, and evaluation tiers as real-time screening. Results are recorded as standard governance evaluations and appear in the Review queue.


API Reference

All governance endpoints are under:

/authenticated/accounts/{account_id}/governance/

Authentication: All endpoints require a valid access token via the Authorization: Bearer header.

Sample Policy Endpoints

Sample policies available for adoption:

  • GET /sample-policies — List available sample policies from the library, optionally filtered by category.
  • GET /sample-policies/{sample_slug} — Get a specific sample policy including its full policy text.

Policy Endpoints

CRUD operations for account governance policies:

| Method | Endpoint | Description |
|---|---|---|
| GET | /policies | List account policies (supports pagination and scope filters: agent_id, source_connection_id) |
| GET | /policies/{policy_id} | Get a specific policy by ID |
| POST | /policies | Create a new policy from a sample policy or custom text, with optional scope, thresholds, enforcement level, and inheritance mode |
| PATCH | /policies/{policy_id} | Update a policy's enabled status, thresholds, enforcement level, inheritance mode, or custom text |
| DELETE | /policies/{policy_id} | Soft-delete a governance policy |
| GET | /resource-policy-counts | Get policy counts grouped by resource (agent, source, step). Useful for governance indicators on resource lists. |
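A hypothetical request body for POST /policies is sketched below. The documented options are a sample policy or custom text, optional scope, thresholds, enforcement level, and inheritance mode; the exact field names are assumptions, so confirm them against the schemas at /docs before use.

```python
# Hypothetical POST /policies body; field names are assumptions.
import json

payload = {
    "custom_text": "Flag any personally identifiable information.",
    "agent_id": "agent_123",                     # optional scope (placeholder ID)
    "enforcement_level": "flag",                 # assumed values, e.g. "flag" / "block"
    "thresholds": {"flag": 0.5, "block": 0.8},   # assumed threshold shape
}
body = json.dumps(payload)
```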

Knowledge Base Association Endpoints

Link knowledge bases to policies for evidence-based evaluation:

| Method | Endpoint | Description |
|---|---|---|
| GET | /policies/{policy_id}/knowledge-bases | List all knowledge base associations for a policy, ordered by position |
| PUT | /policies/{policy_id}/knowledge-bases | Replace all knowledge base associations for a policy (atomic replacement) |
| GET | /policies/circular-knowledge-bases | Get knowledge base IDs that would create circular references for a given scope. Pass agent_id or source_connection_id as query parameters. |
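The sketch below shows a hypothetical body for the atomic PUT replacement, plus the query string for the circular-reference check. The paths come from the table above; the field names (`knowledge_bases`, `knowledge_base_id`, `position`) are assumptions based on the described ordering-by-position behavior.

```python
# Hypothetical PUT /policies/{policy_id}/knowledge-bases body; field names assumed.
import json
from urllib.parse import urlencode

associations = [
    {"knowledge_base_id": "kb_1", "position": 0},
    {"knowledge_base_id": "kb_2", "position": 1},
]
body = json.dumps({"knowledge_bases": associations})

# Check for circular references first, scoped to an agent (placeholder ID).
query = urlencode({"agent_id": "agent_123"})
circular_check_path = f"/policies/circular-knowledge-bases?{query}"
```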

Settings Endpoints

Configure where and how governance screening is applied:

| Method | Endpoint | Description |
|---|---|---|
| GET | /settings | List all governance settings, optionally filtered by agent_id or source_connection_id |
| PUT | /settings | Create or update governance settings for a scope (account-wide, agent, source) |
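A hypothetical PUT /settings body for an agent scope is sketched below. The settings surface (enabled, review flags, evaluation tier) is described in the MCP tool list on this page; the field names themselves are assumptions.

```python
# Hypothetical PUT /settings body; field names are assumptions.
import json

settings = {
    "agent_id": "agent_123",        # placeholder; omit for account-wide scope (assumption)
    "enabled": True,
    "evaluation_tier": "balanced",  # documented tiers: fast, balanced, thorough
}
body = json.dumps(settings)
```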

Evaluation Endpoints

View and manage screening results:

| Method | Endpoint | Description |
|---|---|---|
| GET | /stats | Aggregate governance statistics (pass/flag/block counts, unresolved totals) |
| GET | /evaluations | List evaluations with rich filtering: verdict, screening point, date range, agent, source, policy |
| POST | /evaluations/{evaluation_id}/resolve | Resolve a flagged or blocked evaluation with an optional resolution note |
| POST | /evaluations/{evaluation_id}/opt-into-review | Promote a policy_test evaluation to the review queue (changes screening point to policy_test_review) |
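Resolving a flagged evaluation might look like the sketch below. The path shape is documented above; the note field name is an assumption based on the "optional resolution note" wording.

```python
# Hypothetical POST /evaluations/{evaluation_id}/resolve body.
import json

evaluation_id = "eval_123"  # placeholder ID
path = f"/evaluations/{evaluation_id}/resolve"
body = json.dumps({
    "resolution_note": "Reviewed; flagged text is a public support address, not PII."
})
```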

Audit Trail Endpoints

View the change history for governance policies:

| Method | Endpoint | Description |
|---|---|---|
| GET | /changes | List all governance policy changes for the account. Supports filters: change_type (created/updated/deleted), action, date range. |
| GET | /policies/{policy_id}/changes | List audit trail entries for a specific policy, showing all creates, updates, and deletes. |

Credit Estimation Endpoints

Get estimated credit costs for governance evaluations:

| Method | Endpoint | Description |
|---|---|---|
| GET | /credit-estimates | Get estimated min/max credit ranges per evaluation tier (fast, balanced, thorough) based on current usage rates. |

Testing Endpoints

Ad-hoc policy testing without affecting production:

| Method | Endpoint | Description |
|---|---|---|
| POST | /test | Test content against active policies. Optionally specify a policy_id to test against a single policy. |
| POST | /test-draft | Test content against an unsaved draft policy with optional thresholds and knowledge base associations. The draft is not persisted. |
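A hypothetical POST /test-draft body is sketched below. Only the capabilities (draft policy text, optional thresholds, optional knowledge base associations, nothing persisted) come from the endpoint description; every field name is an assumption to verify against /docs.

```python
# Hypothetical POST /test-draft body; field names are assumptions.
import json

body = json.dumps({
    "content": "Customer SSN is 123-45-6789.",
    "policy_text": "Block content containing government ID numbers.",
    "thresholds": {"flag": 0.5, "block": 0.8},  # optional, assumed shape
    "knowledge_base_ids": ["kb_compliance"],    # optional, assumed field
})
```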

AI Assistant Endpoints

Natural-language governance management using the propose-then-accept workflow:

| Method | Endpoint | Description |
|---|---|---|
| POST | /ai-assistant | Generate a governance plan from a natural language description |
| POST | /ai-assistant/{conversation_id}/accept | Accept and execute a previously proposed governance plan |
| POST | /ai-assistant/{conversation_id}/decline | Decline a proposed governance plan |
| GET | /ai-assistant/conversations | List previous AI assistant conversations |
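The propose-then-accept workflow across these endpoints can be sketched as follows. The paths are documented above; the request field name and the conversation ID value are placeholders and assumptions pending the /docs schemas.

```python
# Sketch of the propose-then-accept workflow; field names are assumptions.
import json

# 1. Propose: describe the desired change in natural language.
propose_body = json.dumps({
    "message": "Block any output that contains customer phone numbers."
})
# POST /ai-assistant returns a proposed plan and a conversation ID.

# 2. Review the plan, then accept or decline it.
conversation_id = "conv_123"  # placeholder from the propose response
accept_path = f"/ai-assistant/{conversation_id}/accept"
decline_path = f"/ai-assistant/{conversation_id}/decline"
```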

See the interactive API documentation at /docs when your API server is running for full request/response schemas.

MCP Tools

If you use an MCP-compatible client (Claude Desktop, Claude Code, Cursor), 28 governance tools are available — covering the same operations as the REST API plus the AI assistant.

Policy management:

| Tool | Description |
|---|---|
| list_governance_policy_documents | List available sample policies from the library, optionally filtered by category |
| get_governance_policy_document | Get a specific sample policy including its full policy text |
| list_governance_policies | List governance policies assigned to the account, with optional scope filters |
| get_governance_policy | Get a specific account governance policy by ID |
| create_governance_policy | Create a new policy from a sample policy or custom text, optionally scoped |
| update_governance_policy | Update a policy's enabled status, thresholds, enforcement level, or inheritance mode |
| delete_governance_policy | Soft-delete a governance policy |

Knowledge base associations:

| Tool | Description |
|---|---|
| list_policy_knowledge_bases | List knowledge base associations for a policy, ordered by position |
| set_policy_knowledge_bases | Replace all knowledge base associations for a policy (atomic replacement). Rejects circular references. |
| get_circular_knowledge_bases | Get knowledge base IDs that would create circular references. Optionally scope to an agent or source. |

Settings and statistics:

| Tool | Description |
|---|---|
| get_governance_settings | Get governance settings for a scope (account, agent, or source) |
| update_governance_settings | Update governance settings (enabled, review flags, evaluation tier) |
| list_governance_settings | List all governance settings, optionally filtered by agent |
| get_governance_stats | Get aggregate governance statistics (pass/flag/block counts, unresolved counts) |
| get_governance_credit_estimates | Get estimated credit costs per evaluation tier (fast, balanced, thorough) |

Evaluations and testing:

| Tool | Description |
|---|---|
| list_governance_evaluations | List governance evaluations with filtering by verdict, screening point, and date range |
| resolve_governance_evaluation | Resolve a flagged or blocked evaluation with an optional note |
| bulk_resolve_governance_evaluations | Resolve multiple evaluations at once with an optional shared resolution note |
| opt_evaluation_into_review | Promote a test evaluation to the review queue (changes screening point to policy_test_review) |
| test_governance_policy | Test content against active governance policies and see evaluation results |
| test_draft_governance_policy | Test content against an unsaved draft policy with optional thresholds and KB associations |

Audit trail and cost estimation:

| Tool | Description |
|---|---|
| list_governance_audit_trail | List all governance policy changes for the account, with optional date and type filters |
| list_governance_policy_changes | List the audit trail for a specific governance policy |
| get_governance_credit_estimates | Get estimated credit costs per evaluation tier (fast, balanced, thorough) |

AI assistant:

| Tool | Description |
|---|---|
| generate_governance_plan | Generate a plan for policy changes from a natural language description |
| accept_governance_plan | Accept and execute a previously proposed governance plan |
| decline_governance_plan | Decline a proposed governance plan |
| list_governance_conversations | List recent governance AI assistant conversations (plans, outcomes) |

See the MCP Server documentation for setup instructions and usage examples.


Exporting

Governance data can be exported for offline analysis, compliance audits, or archival. Two resource types are available:

Governance Policies

Export all governance policies, including policy text, thresholds, scope, enforcement level, and knowledge base associations. Supported formats: JSON and JSONL.

  • UI: Go to Governance → Policies and click the Export button.
  • API: POST /authenticated/resource-exports with resource_type: "governance_policies".
  • MCP: Use the create_resource_export tool with resource_type: "governance_policies".

See Export Formats → Governance Policies for the full file schema.

Governance Evaluations

Export governance evaluation results — screening verdicts, confidence scores, AI explanations, and resolution notes. Supported formats: JSON, JSONL, and CSV.

  • UI: Go to Governance → Review and click the Export button. You'll see an estimate of the record count before confirming.
  • API: POST /authenticated/resource-exports with resource_type: "governance_evals".
  • MCP: Use the create_resource_export tool with resource_type: "governance_evals".

See Export Formats → Governance Evaluations for the full file schema and available filter options.
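An export request might look like the sketch below. The endpoint path and the `resource_type` values come from the bullets above; the `format` field name is an assumption (the documented evaluation formats are JSON, JSONL, and CSV).

```python
# Hypothetical POST /authenticated/resource-exports body; "format" field assumed.
import json

path = "/authenticated/resource-exports"
body = json.dumps({
    "resource_type": "governance_evals",  # or "governance_policies"
    "format": "csv",                      # documented formats: json, jsonl, csv
})
```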

Audit trail: Governance exports (both policies and evaluations) are automatically recorded in the audit trail for compliance tracking.


Permissions

| Role | Access |
|---|---|
| Owner / Admin | Full access: configure policies, settings, resolve evaluations, use AI assistant |
| Member | Full access (same as admin for governance) |
| Viewer | No access to governance features |

Next Steps

  • Agents — Learn about the agents that governance screens
  • Content Sources — Learn about the sources whose content governance can screen
  • MCP Server — Manage governance from AI coding tools
  • API Examples — Common API integration patterns
  • Alerts — Set up monitoring for agent runs and source pulls