This page is intended for application owners, security reviewers, and architects evaluating the SDK. It summarises how the SDK handles sensitive content and links to the canonical security policy in the public source repository.

Documentation Index
Fetch the complete documentation index at: https://docs.egisai.co/llms.txt
Use this file to discover all available pages before exploring further.
Data handling principles
Local-first sensitive checks
Pattern-based PII detection runs entirely inside your process. Raw
sensitive values are not transmitted to third-party LLMs as part of
governance.
Sanitize before judge
When a judge-style policy needs to inspect prompt text, it sees the
sanitized copy — never the original. Phase 1 always runs before Phase 2.
Audit reflects what shipped
Audit previews are sampled from the post-sanitization payload, so the
dashboard shows exactly what reached the model — not the raw input.
Fail closed on PII
If the local PII engine errors mid-evaluation, the call is treated as if
sensitive content had been detected. We err on the side of over-blocking.
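The three guarantees above (local detection, sanitize before judge, fail closed) can be sketched as follows. The rule set, function names, and control flow here are illustrative assumptions, not the SDK's actual implementation:

```python
import re

# Illustrative local rules; the SDK's real patterns and rule names are
# operator-configured.
PII_RULES = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> tuple[str, list[str]]:
    """Phase 1: replace matched values in-process; raw values never leave."""
    matched = []
    for name, pattern in PII_RULES.items():
        if pattern.search(text):
            matched.append(name)
            text = pattern.sub(f"<{name.upper()}>", text)
    return text, matched

def evaluate(text: str, judge) -> str:
    try:
        safe, _ = sanitize(text)   # Phase 1 always runs before Phase 2
    except Exception:
        return "blocked"           # fail closed: engine errors count as PII
    return judge(safe)             # the judge only ever sees the sanitized copy
```

Because sanitization happens first and errors collapse to "blocked", a judge-style policy never sees raw matched values even when the engine misbehaves.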
What never leaves your process as part of governance
- The raw text of payloads matched by local PII rules, before sanitization.
- API keys you pass to provider SDKs (those go to the provider directly via the provider SDK, not via the EgisAI SDK).
- Anything covered by your operator-configured local rules.

What is sent to the EgisAI control plane
- The post-sanitization preview of the prompt and response (what shipped to the model).
- The verdict, matched rules, latency, and token usage.
- The agent and user/session metadata you attach via set_context().
Network footprint
| Endpoint | Purpose |
|---|---|
| Your configured EgisAI control plane | Policy fetch, audit delivery, agent registration. |
| The provider you call (OpenAI / Anthropic / Google / your own) | The actual LLM call. |
Secrets handling
- Treat EGISAI_API_KEY and any provider keys as secrets. Use environment variables, a secrets manager, or your platform's configuration store.
- The SDK redacts API keys when echoing them in startup logs.
- set_context(user_id=...) takes a free-form string; you control whether it contains an end-user PII identifier or an opaque token.
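A minimal sketch of the environment-variable approach, with a redaction helper that mirrors the startup-log behaviour described above (the helper's name and output format are illustrative, not the SDK's):

```python
import os

# Read the key from the environment (or a secrets manager) rather than
# hard-coding it in source.
api_key = os.environ.get("EGISAI_API_KEY", "")

def redact(key: str) -> str:
    """Keep only the last four characters if a key must appear in logs."""
    return "****" + key[-4:] if key else "(unset)"
```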
Retention and review
Audit data is retained according to your contract and is reviewable on the dashboard. For long-term archival, ask your account contact about export options.

Reporting vulnerabilities
Please do not use public GitHub issues for security-sensitive matters. The canonical reporting channel is described in SECURITY.md in the source repository. We follow a coordinated disclosure process and will work with you on a reasonable timeline.

Architecture review summary
A short summary suitable for architecture reviews:
- Governance evaluates prompts against your organization's policies before upstream invocation where applicable.
- Sensitive-content handling is architected so that raw regulated values are not sent to third-party LLMs as part of the policy enforcement workflows described here.
- Audit metadata is delivered asynchronously so that latency to EgisAI does not sit on the critical path of any provider call.
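The asynchronous delivery in the last point can be sketched with a queue and a background worker. The names and record fields below are illustrative assumptions, not the SDK's actual internals:

```python
import queue
import threading

audit_queue: queue.Queue = queue.Queue()
delivered: list[dict] = []   # stand-in for the EgisAI control plane

def audit_worker() -> None:
    # Ships audit records off the critical path of provider calls.
    while True:
        record = audit_queue.get()
        delivered.append(record)   # in practice: an HTTP POST to EgisAI
        audit_queue.task_done()

threading.Thread(target=audit_worker, daemon=True).start()

def call_provider(prompt: str) -> str:
    response = prompt.upper()      # stand-in for the real LLM call
    # Enqueue the sanitized preview and verdict; do not wait for delivery.
    audit_queue.put({"preview": prompt, "verdict": "allowed"})
    return response                # returns without blocking on EgisAI

result = call_provider("hello")
audit_queue.join()                 # demo only: wait so the record is visible
```

The provider call returns as soon as the record is enqueued, so control-plane latency never adds to the user-facing request.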
What’s next
How it works
Step-by-step view of the call path.
Releases
Version history and verifying installed wheels.