

The egisai package exposes a small, frozen set of dataclasses for callers that want to use the policy engine outside the patched provider SDKs. They are all plain Python dataclasses — no I/O, no hidden state — so they are safe to log, diff, and pickle.
from egisai import (
    PolicyRule,
    PolicyContext,
    OutputPolicyContext,
    PolicyDecision,
    evaluate_policies,
    evaluate_output_policies,
)

PolicyRule

@dataclass(frozen=True)
class PolicyRule:
    id: str | None
    name: str
    type: str
    tenant: str | None
    config: dict[str, Any]
    agent_ids: tuple[str, ...] = ()
One active rule. The type field selects the evaluator inside the engine — common values include pii_scan, deny_regex, allow_model, max_prompt_chars, and semantic_guard. The config dict carries the type-specific options. agent_ids scopes the rule to specific agents — an empty tuple means “applies to every agent in the tenant”.
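
For illustration, here is how a caller might construct a length-cap rule scoped to two agents. The "limit" config key and the agent IDs are assumptions made for this sketch; the exact config schema for each rule type is documented on the Policies page.
from egisai import PolicyRule

# Hypothetical rule: caps prompt length for two specific agents.
# NOTE: the "limit" config key is illustrative; the real schema for
# max_prompt_chars rules may differ.
length_rule = PolicyRule(
    id="rule_prompt_cap",
    name="cap-prompt-length",
    type="max_prompt_chars",
    tenant="org_demo",
    config={"limit": 4000},
    agent_ids=("agent_support", "agent_billing"),  # () would apply tenant-wide
)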

PolicyContext

@dataclass(frozen=True)
class PolicyContext:
    tenant: str
    model: str
    prompt_text: str
    prompt_chars: int
    stream: bool
Inputs for evaluating input-side policies — i.e. before the model call runs. Construct one when you want to evaluate rules against arbitrary text outside the patched call paths.
tenant: The tenant ID for which to apply the rules.
model: The model name your call targets (e.g. "gpt-4.1"). Used by allow_model rules.
prompt_text: The text the rules should evaluate.
prompt_chars: The character count of prompt_text. Used by max_prompt_chars rules.
stream: Whether the call is streaming. Reported on the audit event but does not affect evaluation.
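
Because prompt_chars must stay consistent with prompt_text, a small helper (illustrative, not part of the package) can derive the count for you:
from egisai import PolicyContext

def make_input_context(tenant: str, model: str, text: str, stream: bool = False) -> PolicyContext:
    """Build a PolicyContext for arbitrary text, deriving prompt_chars from the text."""
    return PolicyContext(
        tenant=tenant,
        model=model,
        prompt_text=text,
        prompt_chars=len(text),  # keep the count in sync with prompt_text
        stream=stream,
    )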

OutputPolicyContext

@dataclass(frozen=True)
class OutputPolicyContext:
    tenant: str
    model: str
    text: str
    tool_names: list[str]
    tool_calls: list[dict[str, str]]
    mcp_targets: list[str]
    stream: bool
Inputs for evaluating output-side policies — i.e. after the model has responded. Captures the structured response so output-side rules can inspect both the assistant text and any tool / connector calls.
tenant: The tenant ID for which to apply the rules.
model: The model name that produced the output.
text: The assistant’s text reply, if any.
tool_names: List of tool names requested by the model.
tool_calls: Structured list of tool invocations.
mcp_targets: Connector targets requested by the model.
stream: Whether the call was streaming.
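
As a sketch, here is how you might assemble an OutputPolicyContext from a response you have already parsed. The values, and the exact keys inside each tool_calls entry (beyond it being a dict[str, str]), are assumptions standing in for whatever your provider SDK returned:
from egisai import OutputPolicyContext

# Illustrative values only; substitute the fields of your parsed response.
output_ctx = OutputPolicyContext(
    tenant="org_demo",
    model="gpt-4.1",
    text="Here is the weather report...",
    tool_names=["get_weather"],
    tool_calls=[{"name": "get_weather", "arguments": '{"city": "Oslo"}'}],
    mcp_targets=[],  # no MCP connectors requested in this response
    stream=False,
)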

PolicyDecision

@dataclass(frozen=True)
class PolicyDecision:
    verdict: str                          # "allow" | "sanitize" | "block"
    reason_code: str | None
    message: str | None
    matched_policy: str | None
    matched_policies: tuple[MatchedPolicyRecord, ...] = ()
    sanitize_kinds: tuple[str, ...] = ()  # populated only on "sanitize"
    sanitize_mask_char: str = "#"
Outcome of evaluating a list of PolicyRule objects against one PolicyContext (input side) or OutputPolicyContext (output side).
verdict: "allow", "sanitize", or "block". The most restrictive verdict across all matches wins; precedence is block > sanitize > allow.
reason_code: Short code identifying what fired (None on allow).
message: Human-readable explanation (None on allow).
matched_policy: Name of the primary rule that produced the verdict.
matched_policies: Ordered tuple of every rule that fired during evaluation, even when a more restrictive verdict dominates.
sanitize_kinds: Categories of sensitive content that were masked (only meaningful when verdict == "sanitize").
sanitize_mask_char: Mask character used for sanitization.
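
A typical caller branches on the verdict. The helper below is a minimal sketch of that pattern, not a prescribed integration; how you apply the mask on "sanitize" is up to you:
from egisai import evaluate_policies

def enforce(rules, context):
    """Evaluate input-side rules and act on the verdict."""
    decision = evaluate_policies(rules, context)
    if decision.verdict == "block":
        # Refuse the call; surface the engine's explanation to the caller.
        raise PermissionError(decision.message or decision.reason_code or "blocked by policy")
    if decision.verdict == "sanitize":
        # The engine reports which categories matched and the mask character.
        print("mask:", decision.sanitize_kinds, "with", decision.sanitize_mask_char)
    return decision  # "allow" falls through: forward the prompt unchanged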
PolicyDecision exposes three convenience constructors, mostly used internally:
PolicyDecision.allow(): An allow verdict with no matched rules.
PolicyDecision.deny(reason_code, message, matched_policy, ...): A block verdict.
PolicyDecision.sanitize(kinds, reason_code, message, matched_policy, ...): A sanitize verdict.

MatchedPolicyRecord

Each entry in PolicyDecision.matched_policies:
@dataclass(frozen=True)
class MatchedPolicyRecord:
    name: str
    type: str
    verdict: str                          # what this single rule would have said
    reason_code: str
    message: str
    sanitize_kinds: tuple[str, ...] = ()
    sanitize_mask_char: str = "#"
This makes it possible to render an audit row that lists every rule that contributed to the final outcome — even rules whose individual verdicts were overridden by a more restrictive one.
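
For example, a minimal audit renderer over matched_policies might look like this (the row format is illustrative):
def render_audit_rows(decision) -> list[str]:
    """One row per rule that fired, including rules whose verdicts were overridden."""
    return [
        f"{r.name} ({r.type}): {r.verdict} [{r.reason_code}] {r.message}"
        for r in decision.matched_policies
    ]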

evaluate_policies

def evaluate_policies(
    policies: list[PolicyRule],
    context: PolicyContext,
) -> PolicyDecision: ...
Run the supplied input-side rules against context and return the resulting PolicyDecision. No I/O — purely deterministic for the local rule kinds.

evaluate_output_policies

def evaluate_output_policies(
    policies: list[PolicyRule],
    context: OutputPolicyContext,
) -> PolicyDecision: ...
Run the supplied output-side rules against context and return the resulting PolicyDecision. Use this when you have a structured response (text + tool calls) that you want to evaluate before forwarding it to a downstream consumer.
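
Reusing the output_ctx sketch from above, output-side evaluation mirrors the input side. This assumes, for illustration, that deny_regex rules are also evaluated against output text:
from egisai import PolicyRule, evaluate_output_policies

# Assumption: deny_regex applies to assistant output as well as prompts.
leak_rule = PolicyRule(
    id="rule_leak",
    name="forbid-api-keys",
    type="deny_regex",
    tenant="org_demo",
    config={"pattern": r"sk-[A-Za-z0-9]{20,}"},
)

output_decision = evaluate_output_policies([leak_rule], output_ctx)
print(output_decision.verdict)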

Example

from egisai import (
    PolicyContext,
    PolicyRule,
    evaluate_policies,
)

rule = PolicyRule(
    id="rule_demo",
    name="forbid-greetings",
    type="deny_regex",
    tenant="org_demo",
    config={"pattern": r"hello", "flags": ["IGNORECASE"]},
)

context = PolicyContext(
    tenant="org_demo",
    model="gpt-4.1",
    prompt_text="Hello world!",
    prompt_chars=12,
    stream=False,
)

decision = evaluate_policies([rule], context)
print(decision.verdict)        # "block"
print(decision.matched_policy) # "forbid-greetings"

What’s next

Verdicts

What allow / sanitize / block mean for your call.

Policies

Categories of rules behind the verdicts.