

Every call routed through the SDK ends in exactly one verdict. Verdicts surface in three places: the in-process call result, the audit event sent to the dashboard, and (when applicable) the PolicyDecision returned from the public policy primitives.

The three verdicts

allow

The call is forwarded to the provider unmodified.

sanitize

Sensitive values are masked locally, then the cleaned payload is forwarded. The raw value never reaches the provider.

block

The call is refused. The provider is never invoked.

Verdict precedence

When several rules fire on the same call, the final verdict is the most restrictive one. The precedence is:
block > sanitize > allow
A record of every rule that fired is preserved on the audit event, even for rules whose verdict was dominated by a more restrictive match, so historical review stays accurate.
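The precedence rule can be sketched as a simple severity ranking (illustrative only, not the SDK's internal code):

```python
# Resolve the final verdict from every rule that fired,
# using the precedence block > sanitize > allow.
SEVERITY = {"allow": 0, "sanitize": 1, "block": 2}

def final_verdict(fired: list[str]) -> str:
    """Return the most restrictive verdict; default to allow when no rule fired."""
    if not fired:
        return "allow"
    return max(fired, key=lambda v: SEVERITY[v])

print(final_verdict(["allow", "sanitize", "allow"]))  # sanitize
print(final_verdict(["sanitize", "block"]))           # block
```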

allow

The call is forwarded to the provider as-is. An audit event is still recorded, including the model, latency, and token usage. No transformation is performed.
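For orientation, here is one hypothetical shape such an audit event could take. The field names below are assumptions for illustration, not the SDK's actual schema; only the fact that the event carries the model, latency, and token usage comes from the text above.

```python
# Hypothetical audit-event payload for an allowed call.
# Field names are illustrative assumptions; the dashboard's
# real schema may differ.
audit_event = {
    "verdict": "allow",
    "model": "gpt-4.1",
    "latency_ms": 842,
    "usage": {"prompt_tokens": 11, "completion_tokens": 42},
}
print(audit_event["verdict"])
```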

sanitize

sanitize happens when one or more local rules detect sensitive content that can be masked rather than blocked. The SDK:
  1. Replaces matched values with a mask (e.g. [REDACTED-EMAIL]) on a copy of the payload.
  2. Forwards the cleaned payload to the provider.
  3. Records the post-sanitization snapshot to the audit event.
Audit previews always reflect the sanitized text, never the raw value. When reviewing a request on the dashboard, you see exactly what shipped to the model.
If a Phase 2 rule (intent / judge-style) needs to look at the prompt later in the same evaluation, it sees the cleaned text — never the raw original.
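The masking step above can be sketched with a toy detector (illustrative only; the SDK's real detectors and rule set are broader, and only the [REDACTED-EMAIL] mask format is taken from the example above):

```python
import re

# Toy email detector: replaces matched values with a mask on a
# copy of the payload, mirroring step 1 of the sanitize flow.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def sanitize_payload(text: str) -> str:
    return EMAIL_RE.sub("[REDACTED-EMAIL]", text)

cleaned = sanitize_payload("Contact alice@example.com for access")
print(cleaned)  # Contact [REDACTED-EMAIL] for access
```

Everything downstream of this step, including the audit preview and any later rules in the same evaluation, sees only `cleaned`, never the original string.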

block

block halts the call before it reaches the provider. The behavior on the caller side depends on the on_block setting:
  • "raise" (default): A PermissionError is raised. Your existing exception handlers can catch it.
  • "stub": A framework-shaped refusal object is returned. Its shape matches what the provider would have returned, with content describing that the call was refused by policy.
Both modes record the same audit event on the dashboard.
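A minimal sketch of the two modes as the caller experiences them (the stub dict below is a made-up placeholder shape, not the SDK's actual refusal object):

```python
# Illustrative only: how the two on_block modes surface a
# blocked call to the caller.
def handle_block(on_block: str, reason: str):
    if on_block == "raise":
        # Default mode: existing exception handlers can catch this.
        raise PermissionError(f"blocked by policy: {reason}")
    # "stub" mode: return a refusal object instead of raising.
    # (Placeholder shape; the real object matches the provider's framework.)
    return {"choices": [{"message": {"content": f"Refused by policy: {reason}"}}]}

try:
    handle_block("raise", "pii_detected")
except PermissionError as exc:
    print(exc)  # blocked by policy: pii_detected

stub = handle_block("stub", "pii_detected")
print(stub["choices"][0]["message"]["content"])
```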

When to choose stub

Choose "stub" when your application:
  • Cannot safely propagate exceptions (for example a long-running agent loop that should not crash on a single bad turn).
  • Already knows how to handle non-OK responses from the provider gracefully.
Choose "raise" when you’d rather fail loudly and route the failure to a user-visible error path. See Blocking behavior for examples.

Inspecting a verdict programmatically

Most users never have to look at the verdict directly — the dashboard surfaces everything that’s needed for review. If you want to apply your own rules to text outside the patched call paths, the public evaluate_policies() and evaluate_output_policies() functions return a PolicyDecision:
```python
from egisai import (
    evaluate_policies,
    PolicyContext,
    PolicyRule,
)

decision = evaluate_policies(
    policies=[...],          # list[PolicyRule]
    context=PolicyContext(
        tenant="my-org",
        model="gpt-4.1",
        prompt_text="Hello world",
        prompt_chars=11,
        stream=False,
    ),
)

print(decision.verdict)            # "allow" | "sanitize" | "block"
print(decision.reason_code)        # short code describing what fired
print(decision.matched_policy)     # primary matching rule name
for record in decision.matched_policies:
    print(record.name, record.verdict)
```
See the PolicyDecision reference for every field.

What’s next

Blocking behavior

Pick the right on_block mode for your application.

Policies

The categories of rules that produce these verdicts.