The egisai package exposes a small set of frozen dataclasses for callers that want to use the policy engine outside the patched provider SDKs. They are all plain Python dataclasses with no I/O and no hidden state, so they are safe to log, diff, and pickle.
PolicyRule
| Field | Description |
|---|---|
| type | Selects the evaluator inside the engine. Common values include pii_scan, deny_regex, allow_model, max_prompt_chars, and semantic_guard. |
| config | Dict of type-specific options for the chosen evaluator. |
| agent_ids | Scopes the rule to specific agents. An empty tuple means the rule applies to every agent in the tenant. |
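Because the rule objects are frozen dataclasses, constructing and serializing them is straightforward. A minimal sketch, using a local stand-in that mirrors the documented fields rather than the real egisai import (the name field is an assumption, inferred from PolicyDecision.matched_policy reporting rule names):

```python
import pickle
from dataclasses import dataclass, field

# Local stand-in mirroring the documented PolicyRule shape; the real
# class lives in the egisai package. The `name` field is an assumption.
@dataclass(frozen=True)
class PolicyRule:
    name: str                                   # reported via matched_policy
    type: str                                   # selects the evaluator
    config: dict = field(default_factory=dict)  # type-specific options
    agent_ids: tuple = ()                       # () = every agent in the tenant

rule = PolicyRule(name="cap-prompt", type="max_prompt_chars",
                  config={"limit": 8000})

# Frozen dataclasses with picklable fields round-trip cleanly.
assert pickle.loads(pickle.dumps(rule)) == rule
```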
PolicyContext
| Field | Description |
|---|---|
| tenant | The tenant ID for which to apply the rules. |
| model | The model name your call targets (e.g. "gpt-4.1"). Used by allow_model rules. |
| prompt_text | The text the rules should evaluate. |
| prompt_chars | The character count of prompt_text. Used by max_prompt_chars rules. |
| stream | Whether the call is streaming. Reported on the audit event but does not affect evaluation. |
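Since prompt_chars is the character count of prompt_text, callers typically derive it directly. A minimal sketch, again using a local stand-in dataclass with the documented fields rather than the real egisai import:

```python
from dataclasses import dataclass

# Local stand-in mirroring the documented PolicyContext shape; the real
# class lives in the egisai package.
@dataclass(frozen=True)
class PolicyContext:
    tenant: str
    model: str
    prompt_text: str
    prompt_chars: int
    stream: bool = False  # reported on the audit event only

prompt = "Summarize this contract."
ctx = PolicyContext(tenant="acme", model="gpt-4.1",
                    prompt_text=prompt, prompt_chars=len(prompt))
assert ctx.prompt_chars == len(prompt)
```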
OutputPolicyContext
| Field | Description |
|---|---|
| tenant | The tenant ID for which to apply the rules. |
| model | The model name that produced the output. |
| text | The assistant's text reply, if any. |
| tool_names | List of tool names requested by the model. |
| tool_calls | Structured list of tool invocations. |
| mcp_targets | Connector targets requested by the model. |
| stream | Whether the call was streaming. |
PolicyDecision
The result of evaluating PolicyRule objects against one
PolicyContext (input side) or OutputPolicyContext (output side).
| Field | Description |
|---|---|
| verdict | "allow", "sanitize", or "block". The most restrictive verdict across all matches wins; precedence is block > sanitize > allow. |
| reason_code | Short code identifying what fired (None on allow). |
| message | Human-readable explanation (None on allow). |
| matched_policy | Name of the primary rule that produced the verdict. |
| matched_policies | Ordered tuple of every rule that fired during evaluation, including rules overridden by a more restrictive verdict. |
| sanitize_kinds | Categories of sensitive content that were masked (only meaningful when verdict == "sanitize"). |
| sanitize_mask_char | Mask character used for sanitization. |
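The "most restrictive verdict wins" rule can be sketched as a simple fold over per-rule verdicts. This is illustrative only; the engine's internal combination logic is not part of the public API:

```python
# Precedence: block > sanitize > allow (higher rank = more restrictive).
_RANK = {"allow": 0, "sanitize": 1, "block": 2}

def combine_verdicts(verdicts):
    """Return the most restrictive verdict, defaulting to 'allow'."""
    return max(verdicts, key=_RANK.__getitem__, default="allow")

assert combine_verdicts(["allow", "sanitize", "allow"]) == "sanitize"
assert combine_verdicts(["sanitize", "block"]) == "block"
assert combine_verdicts([]) == "allow"
```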
PolicyDecision exposes three convenience constructors, mostly used internally:
| Constructor | Returns |
|---|---|
| PolicyDecision.allow() | An allow verdict with no matched rules. |
| PolicyDecision.deny(reason_code, message, matched_policy, ...) | A block verdict. |
| PolicyDecision.sanitize(kinds, reason_code, message, matched_policy, ...) | A sanitize verdict. |
MatchedPolicyRecord
Each entry in PolicyDecision.matched_policies is a MatchedPolicyRecord.
evaluate_policies
Evaluate the given rules against an input-side context and return the resulting
PolicyDecision. No I/O; purely deterministic for the local rule kinds.
evaluate_output_policies
Evaluate the given rules against an output-side context and return the resulting
PolicyDecision. Use this when you have a structured response (text plus tool
calls) that you want to evaluate before forwarding it to a downstream consumer.
Example
What’s next
Verdicts
What allow / sanitize / block mean for your call.
Policies
Categories of rules behind the verdicts.