> Fetch the complete documentation index at https://docs.egisai.co/llms.txt to discover all available pages before exploring further.
The SDK transparently governs calls made through the official `openai` Python
package. After `egisai.init()` runs, every supported method on `OpenAI` /
`AsyncOpenAI` clients is policy-checked, audited, and (where applicable)
sanitized before reaching the provider.
## Supported surface

| Method | Sync | Async | Streaming |
|---|---|---|---|
| `chat.completions.create` | ✅ | ✅ | ✅ |
| `responses.create` | ✅ | ✅ | ✅ |
Streaming responses are wrapped in a generator that audits the assembled
output without materialising it for your code; your existing
`for chunk in stream:` loop continues to work.
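As an illustration of this pattern (a toy sketch, not egisai's actual internals), a governing wrapper can re-yield each chunk unchanged while assembling the text it audits once the stream ends; the `audit` callback below is hypothetical:

```python
from typing import Callable, Iterable, Iterator


def governed_stream(chunks: Iterable[str], audit: Callable[[str], None]) -> Iterator[str]:
    """Re-yield each chunk unchanged while assembling the full text.

    `audit` is a hypothetical callback invoked after the provider finishes;
    the real SDK's hooks and chunk types will differ.
    """
    assembled = []
    for chunk in chunks:
        assembled.append(chunk)
        yield chunk  # the caller's `for chunk in stream:` loop is unaffected
    audit("".join(assembled))  # auditing happens only once the stream is done


# Usage: the consumer iterates normally; auditing runs behind the scenes.
audited = []
received = list(governed_stream(["Hel", "lo!"], audited.append))
```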
## Minimum version

`openai>=1.40` is recommended. Older releases do not expose the modern
Chat/Responses API surface the patcher targets.
## Install

```bash
pip install "egisai[openai]"
```
## Use

```python
import egisai
import openai

egisai.init(app="customer-support-bot", env="production")

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
print(response.choices[0].message.content)
```
The call is intercepted before it leaves your process, evaluated against your
active policies, and only then forwarded to OpenAI.
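Conceptually, the interception is a monkey-patch of the client's bound methods. The toy sketch below (a stand-in class, not egisai's real internals) shows the shape: a check runs before the original method, so a raised exception stops the request in-process:

```python
class ToyCompletions:
    """Hypothetical stand-in for a provider client's completions endpoint."""

    def create(self, **kwargs):
        return {"echo": kwargs.get("messages")}


def install_policy_patch(cls, check):
    """Wrap cls.create so `check` runs before the request leaves the process."""
    original = cls.create

    def guarded(self, **kwargs):
        check(kwargs)  # raising here blocks the call entirely
        return original(self, **kwargs)

    cls.create = guarded


# Usage: after patching, every call passes through the check first.
seen = []
install_policy_patch(ToyCompletions, seen.append)
result = ToyCompletions().create(messages=[{"role": "user", "content": "hi"}])
```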
## Async usage

```python
import asyncio

import egisai
import openai

egisai.init(app="async-bot", env="production")

async def main():
    client = openai.AsyncOpenAI()
    response = await client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
## Streaming

```python
client = openai.OpenAI()
stream = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Tell me a story."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```
## Tool calls

When the model returns tool calls, output-side policies (for example
tool/connector restrictions) are evaluated against the structured response.
If a tool call is blocked, the verdict is recorded with the offending tool
name.
```python
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Run the cleanup task."}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "delete_database",
                "description": "Drop the production database.",
                "parameters": {"type": "object", "properties": {}},
            },
        }
    ],
)
```
## When a call is blocked

By default a blocked call raises `PermissionError`. Switch modes at init if
you’d rather receive a refusal-shaped response object:

```python
egisai.init(..., on_block="stub")
```
Read Blocking behavior for the full picture.
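In the default mode, calling code can treat a block like any other exception. A minimal handling pattern looks like this (with a stand-in for the governed call, since whether a given prompt is blocked depends on your policies):

```python
def governed_call(prompt: str) -> str:
    """Hypothetical stand-in for a policy-governed completion call."""
    if "drop the database" in prompt.lower():
        # Simulates the default behavior of a blocked call.
        raise PermissionError("blocked by policy")
    return f"ok: {prompt}"


def safe_ask(prompt: str) -> str:
    """Convert a block into a refusal string instead of crashing."""
    try:
        return governed_call(prompt)
    except PermissionError as exc:
        return f"[refused] {exc}"
```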
## Notes

- The SDK detects vision parts in messages and ignores image content for
  text-based PII checks. Text fields elsewhere in the payload are still
  evaluated.
- The patcher runs only when the `openai` package is importable; if you
  uninstall it, the integration silently disables itself.
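One common way to implement that kind of guard (illustrative, not necessarily what egisai does) is to probe for the package without importing it:

```python
import importlib.util


def openai_importable() -> bool:
    """True if `import openai` would succeed, without actually importing it."""
    return importlib.util.find_spec("openai") is not None


# An integration can silently no-op its patching when the target is absent.
if openai_importable():
    pass  # install patches here
```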
## What’s next