Every governed call is evaluated in two clearly separated phases:
- Pre-model — fires against the prompt before the provider is called.
- Post-model — fires against the response after the provider returns.
Each rule carries a `phase` field that selects which side it runs on.
Operators choose the phase in the dashboard when they create or edit a
rule; the SDK enforces the choice at evaluation time.
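The phase choice can be pictured as a small data record. This is a hypothetical sketch for illustration only; the field names (`type`, `phase`, `action`) follow this page's terminology, not the SDK's actual wire format.

```python
# Hypothetical sketch of a rule record, for illustration only.
# Field names (type, phase, action) mirror this page's terminology,
# not the SDK's actual wire format.
from dataclasses import dataclass

@dataclass
class Rule:
    type: str             # e.g. "pii_scan", "deny_regex", "semantic_guard"
    phase: str            # "pre_model", "post_model", or "both"
    action: str = "block"

    def runs_in(self, current_phase: str) -> bool:
        """True if this rule is evaluated in the given phase."""
        return self.phase in ("both", current_phase)

rule = Rule(type="pii_scan", phase="both", action="sanitize")
print(rule.runs_in("pre_model"))   # True
print(rule.runs_in("post_model"))  # True
```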
Why two phases
Some risks live on the prompt — a user pasting a Social Security number, or a script accidentally leaking an API key. Some live on the response — a model emitting a tool call to delete files, or returning a connector target that’s outside your allow-list. Splitting evaluation in two lets each rule run where it can actually do its job, without the SDK having to guess. It also lets the audit trail show exactly which side fired, which is what auditors care about during review.

When each phase runs
Pre-model phase
Runs before the upstream model is invoked. Local deterministic checks
run first (PII detection, regex denylists, prompt-size caps, model
allow-lists). LLM-backed intent checks run after, and only on a
sanitized copy if a deterministic rule asked for one. If a rule
blocks, the provider is never called and the post-model phase is
skipped.
Post-model phase
Runs after the model responds, with the same two-stage ordering
as the prompt side: local deterministic checks first (PII
detection, regex denylists, response-size caps, model allow-lists,
tool / shell / MCP rules), the LLM-backed
`semantic_guard` after — and
only when no local rule already blocked. If a rule blocks, the SDK
suppresses the response before it reaches your code, and the LLM
judge is never consulted on that response.

The deterministic-first ordering is the same security contract on
both sides: once a local rule has refused the call, the LLM judge is
not invoked — no network call, no token spend, no chance of the
prompt or response reaching an external model. Operator-set rule
priorities reorder rules within a stage but never across the
deterministic / LLM-judge split.
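The short-circuit contract can be sketched in a few lines. The names here (`evaluate_phase`, `local_rules`, `llm_judge`) are illustrative stand-ins, not the SDK's API; the point is only that a local block means the judge is never called.

```python
# Minimal sketch of the deterministic-first contract: if any local
# rule blocks, the LLM judge is never invoked for that phase.
# All names here are hypothetical stand-ins, not SDK APIs.

def evaluate_phase(text, local_rules, llm_judge):
    for rule in local_rules:           # stage 1: local deterministic checks
        if rule(text) == "block":
            return "block"             # short-circuit: judge never consulted
    return llm_judge(text)             # stage 2: LLM-backed check

judge_calls = []
def judge(text):
    judge_calls.append(text)           # records every time the judge runs
    return "allow"

deny_secret = lambda t: "block" if "secret" in t else "allow"

print(evaluate_phase("contains secret", [deny_secret], judge))  # block
print(judge_calls)  # [] -- no network call, no token spend
```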
Phase × type matrix
Every rule type accepts every phase setting: the dashboard offers all three choices for any rule. The engine, however, only evaluates a rule on a side where it has meaningful signals. The table below shows which combinations actually fire and which silently no-op.

| Type | `pre_model` | `post_model` | Notes |
|---|---|---|---|
| `allow_model` (model allow-list) | ✓ | ✓ | Same model-name check on either side. |
| `pii_scan` (PII detection) | ✓ | ✓ | On the response, `action="sanitize"` is coerced to `block` — the SDK can’t safely rewrite provider response payloads. |
| `deny_regex` (regex deny) | ✓ | ✓ | Reason code is `prompt_blocked` on the prompt, `output_blocked` on the response. |
| `deny_output_regex` (regex deny, response-style) | ✓ | ✓ | Mirror of `deny_regex`; the two names are now interchangeable. |
| `max_prompt_chars` (size cap) | ✓ | ✓ | Caps prompt size on the prompt side, response size on the response side (`output_too_large` reason). |
| `semantic_guard` (LLM-backed intent judge) | ✓ | ✓ | The judge is invoked with the prompt or the response. |
| `deny_tool_call` (tool name deny) | (no-op) | ✓ | Tool definitions aren’t carried into the prompt-side context yet, so a `pre_model` choice silently no-ops. |
| `deny_bash_command` (shell command deny) | (no-op) | ✓ | Same — needs response-side tool calls. |
| `deny_mcp_call` (MCP target deny) | (no-op) | ✓ | Same — needs response-side connector targets. |
When a rule’s phase is `both`, it runs on each side independently
and contributes one match to whichever phase fired. A `pii_scan` set
to `both` will, for example, sanitize the prompt and block any PII
the model echoes back in its response.
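The sanitize-to-block coercion noted in the matrix can be sketched as follows. `effective_action` is a hypothetical helper, not part of the SDK:

```python
# Sketch of the action coercion from the matrix: on the response side,
# "sanitize" is coerced to "block" because the SDK cannot safely
# rewrite provider response payloads. Hypothetical helper, not SDK API.

def effective_action(rule_type: str, action: str, phase: str) -> str:
    if rule_type == "pii_scan" and phase == "post_model" and action == "sanitize":
        return "block"
    return action

print(effective_action("pii_scan", "sanitize", "pre_model"))   # sanitize
print(effective_action("pii_scan", "sanitize", "post_model"))  # block
```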
Audit shape
The audit event records each phase’s decision independently: `verdict`
is the dominant outcome (precedence `block` > `sanitize` > `allow`).
The two nested blocks describe what each phase actually saw.
When the pre-model phase blocks, the model is never called, so
`response_decision` is `null` on the audit row. The dashboard renders
that as a single pre-model decision card, with no post-model card.
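The precedence rule can be sketched as a small function. Field names here follow the prose (`verdict`, a per-phase decision pair); the SDK's exact audit keys may differ.

```python
# Hedged sketch of the verdict precedence on an audit row.
# Key names follow the prose on this page; exact SDK keys may differ.

PRECEDENCE = {"allow": 0, "sanitize": 1, "block": 2}

def dominant_verdict(prompt_decision, response_decision):
    """Overall verdict is the highest-precedence per-phase outcome.
    response_decision is None when a pre-model block skipped the model."""
    outcomes = [prompt_decision]
    if response_decision is not None:
        outcomes.append(response_decision)
    return max(outcomes, key=PRECEDENCE.__getitem__)

print(dominant_verdict("sanitize", "block"))  # block
print(dominant_verdict("block", None))        # block -- model never called
print(dominant_verdict("allow", "sanitize"))  # sanitize
```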
Backward compatibility
Older SDK and platform versions that pre-date the split treat every
rule as if it were `both`. When such a rule lands on a 0.12.4+ SDK, the
SDK parses a missing or unrecognised `phase` field as `both`, preserving
the old behavior. When such an audit event lands on the new dashboard,
the backend back-fills `prompt_verdict` from the legacy `verdict` column
so historical rows still render in the new column layout.
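The fallback parse described above amounts to a one-line default. `parse_phase` is a hypothetical helper, not the SDK's actual parser:

```python
# Sketch of the backward-compatible parse: a missing or unrecognised
# `phase` field is read as "both", preserving pre-split behavior.
# Hypothetical helper, not the SDK's actual parser.

VALID_PHASES = {"pre_model", "post_model", "both"}

def parse_phase(raw_rule: dict) -> str:
    phase = raw_rule.get("phase")
    return phase if phase in VALID_PHASES else "both"

print(parse_phase({"type": "deny_regex"}))                        # both
print(parse_phase({"type": "deny_regex", "phase": "legacy"}))     # both
print(parse_phase({"type": "deny_regex", "phase": "pre_model"}))  # pre_model
```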
Choosing the right phase
A few rules of thumb:

- If the rule looks at the prompt only (an inbound PII scan, a prompt-size cap, an intent check on what the user asked for), choose `pre_model`.
- If the rule looks at the response only (a tool name, a shell command, a connector target, an outbound PII scan), choose `post_model`.
- If you want the rule to enforce on both sides — for example, scanning for PII in both the prompt and the response — choose `both`. Most text-content rules support this naturally.
- Tool / bash / MCP rules belong on `post_model`; on `pre_model` they silently no-op because the SDK doesn’t carry tool definitions into the prompt-side context yet.
What’s next
Verdicts
Allow, sanitize, and block — what each one means in detail.
Policies
Categories of rules and where they’re configured.