1. Install the SDK

pip install pikarc
2. Get your API key

Sign up at app.pikarc.dev and copy your API key from Settings > API Key. The key format is lg_<prefix>_<secret>.
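Because only the `<secret>` portion is sensitive, the non-secret prefix is safe to log or display. A minimal sketch of pulling it out, assuming the three underscore-separated parts described above (the helper name is ours, not part of the SDK):

```python
import re

# Matches keys of the form lg_<prefix>_<secret>.
KEY_PATTERN = re.compile(r"^lg_([A-Za-z0-9]+)_([A-Za-z0-9]+)$")

def key_prefix(api_key: str) -> str:
    """Return the non-secret prefix of a Pikarc API key, for safe logging."""
    match = KEY_PATTERN.match(api_key)
    if match is None:
        raise ValueError("not a valid Pikarc API key")
    return match.group(1)
```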
3. Wrap your model call

from pikarc import AsyncPikarc, PikarcBlockedError
from openai import AsyncOpenAI

openai = AsyncOpenAI()
guard = AsyncPikarc(
    api_key="lg_xxxx_your_api_key",
    base_url="http://localhost:8000",
)

async def chat(user_id: str, message: str) -> str:
    try:
        async with guard.run(user_id=user_id) as run:
            response = await run.model_call(
                fn=lambda: openai.chat.completions.create(
                    model="gpt-4o",
                    messages=[{"role": "user", "content": message}],
                ),
                model="gpt-4o",
                input_data={"messages": [{"role": "user", "content": message}]},
                token_extractor=lambda r: (
                    r.usage.prompt_tokens,
                    r.usage.completion_tokens,
                ),
            )
            return response.choices[0].message.content or ""

    except PikarcBlockedError as e:
        return f"Blocked: {e.reason}"

# Close the client once at application shutdown, not inside chat():
# a closed client cannot start new runs.
# await guard.close()
4. See it in the dashboard

Open your dashboard to see the run, step timeline, token usage, and cost breakdown.
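The cost breakdown is derived from the token counts each step reports. A toy sketch of that arithmetic, where the per-million-token prices are illustrative assumptions, not Pikarc's or any provider's actual rates:

```python
# Illustrative per-million-token prices; an assumption for this
# sketch, not real pricing.
PRICES_PER_MILLION = {
    "gpt-4o": {"prompt": 2.50, "completion": 10.00},
}

def step_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the dollar cost of one model call from its token counts."""
    prices = PRICES_PER_MILLION[model]
    return (
        prompt_tokens * prices["prompt"]
        + completion_tokens * prices["completion"]
    ) / 1_000_000
```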

What just happened?

  1. guard.run() called POST /v1/runs/ — Pikarc checked the kill switch, budgets, and concurrency limits. The run was ALLOWED.
  2. run.model_call() called POST /v1/runs/{id}/steps — Pikarc evaluated guardrails again before the model call.
  3. After OpenAI responded, the SDK called PATCH /v1/runs/{id}/steps/{step_id} to report token usage and duration.
  4. When the async with block exited, the SDK called POST /v1/runs/{id}/end to mark the run as COMPLETED.
If any guardrail check had failed (budget exceeded, kill switch active, etc.), PikarcBlockedError would have been raised before the model call executed.
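The four calls above can be sketched as a toy client that just records the sequence; the endpoint paths come from this page, while the payloads and transport are omitted:

```python
class FakePikarcClient:
    """Toy stand-in that records the API calls the SDK would make."""

    def __init__(self) -> None:
        self.calls: list[str] = []

    def request(self, method: str, path: str) -> None:
        self.calls.append(f"{method} {path}")

def run_lifecycle(client: FakePikarcClient, run_id: str, step_id: str) -> None:
    # 1. Create the run; kill switch, budgets, and concurrency are checked here.
    client.request("POST", "/v1/runs/")
    # 2. Register the model-call step; guardrails are evaluated again.
    client.request("POST", f"/v1/runs/{run_id}/steps")
    # 3. Report token usage and duration after the model responds.
    client.request("PATCH", f"/v1/runs/{run_id}/steps/{step_id}")
    # 4. Mark the run COMPLETED when the `async with` block exits.
    client.request("POST", f"/v1/runs/{run_id}/end")
```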

Next steps