Documentation Index

Fetch the complete documentation index at: https://docs.egisai.co/llms.txt

Use this file to discover all available pages before exploring further.

This page walks you through your first governed model call in under five minutes. By the end, you’ll have:
  • The egisai SDK installed.
  • A running Python process whose OpenAI / Anthropic / Google calls are evaluated against your organization’s policies.
  • A request visible on the EgisAI dashboard.

Prerequisites

  • Python 3.11 or newer.
  • An EgisAI account.
  • An SDK API key (Dashboard → API Keys → Create key). Keys begin with egis_live_.
  • An LLM provider key (OPENAI_API_KEY, ANTHROPIC_API_KEY, or GOOGLE_API_KEY).

1. Install the SDK

The [all] extra pulls in every supported integration. If you only use one provider, pick the matching extra to keep your install small.
pip install "egisai[all]"
Only frameworks that are actually importable in your environment are patched at runtime. You can install egisai first and add provider SDKs later — the next process restart picks them up.
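Because patching is driven by what can be imported, you can preview which provider SDKs would be picked up using a few lines of standard-library Python. This diagnostic is not part of the egisai API; the module names listed are the usual import names for the providers this page mentions:

```python
import importlib.util

def available(module_name: str) -> bool:
    """Return True if the module can be imported in this environment."""
    try:
        return importlib.util.find_spec(module_name) is not None
    except ModuleNotFoundError:
        # The parent of a dotted name (e.g. "google") isn't installed at all.
        return False

# Usual import names for the providers mentioned on this page.
providers = ["openai", "anthropic", "google.genai"]
patchable = [name for name in providers if available(name)]
print(f"Provider SDKs egisai could patch here: {patchable or 'none'}")
```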

2. Set environment variables

export EGISAI_API_KEY="egis_live_…"
export OPENAI_API_KEY="sk-…"        # or your provider of choice
Treat EGISAI_API_KEY like any other secret. Use a secrets manager or your platform’s environment configuration — never commit keys to source control.
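A cheap guard at process start catches a missing key before the first model call fails with a confusing provider error. This is plain standard-library Python, not an egisai feature; adjust the provider variable to whichever SDK you use:

```python
import os
import sys

def check_env(required: list[str]) -> list[str]:
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

missing = check_env(["EGISAI_API_KEY", "OPENAI_API_KEY"])
if missing:
    # In a real entry point you would likely exit or raise here.
    print(f"Missing environment variables: {', '.join(missing)}", file=sys.stderr)
```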

3. Initialize once at startup

Call egisai.init() as early as possible — typically right after loading configuration and before any module imports your LLM client.
main.py
import egisai
import openai

# Initialize governance first, before any LLM client object exists.
egisai.init(
    app="customer-support-bot",
    env="production",
)

client = openai.OpenAI()  # patched transparently at init time
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
When you run the script you’ll see a one-line banner on stderr confirming the SDK is active:
✓ [egisai] active — app=customer-support-bot env=production on_block=raise integrations=[openai] policies=N
The banner reports the number of policies your org has enabled. If it says policies=0, head to the dashboard and create one — without active policies, every call passes through unconditionally.

4. Confirm the call on the dashboard

Open Dashboard → Requests. Within a second or two you’ll see your call with:
  • The application (Agent) you set via app=.
  • The provider and model.
  • The verdict (allow / sanitize / block) and any rule that fired.
  • Latency and token attribution.

5. (Optional) Try a policy

To see governance in action, create a simple policy in the dashboard — for example a regex deny-list rule that blocks the word forbidden. Re-run your script with that string in the prompt; the call should be refused.

By default egisai.init(...) raises PermissionError when a policy blocks the call. If you’d rather receive a framework-shaped refusal object, switch modes:
egisai.init(..., on_block="stub")
See Blocking behavior for the trade-offs between the two modes.
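In the default on_block="raise" mode, calling code should be prepared to catch the PermissionError. A minimal sketch of that pattern (the ask helper and its fallback message are illustrative, not part of the SDK; client is the patched OpenAI client from step 3):

```python
def ask(client, prompt: str) -> str:
    """Send one chat prompt; turn a policy block into a friendly refusal string."""
    try:
        response = client.chat.completions.create(
            model="gpt-4.1",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    except PermissionError as blocked:
        # Raised by egisai when a policy blocks the call (on_block="raise").
        return f"Request refused by policy: {blocked}"
```

With on_block="stub" the except branch never fires; you instead receive a provider-shaped refusal object and can branch on its content.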

What’s next

How it works

Understand the call path, evaluation phases, and audit pipeline.

Configuration

Every option egisai.init() accepts and what it controls.

Integrations

Provider-specific notes for OpenAI, Anthropic, Google, and HTTP clients.

Multi-agent context

Run several distinct agents from the same process with set_context().