Spanlens Docs

LLM observability — cost tracking, agent tracing, PII + prompt-injection detection, and model recommendations for OpenAI / Anthropic / Gemini calls.

Get started in 30 seconds

```bash
npx @spanlens/cli init
```

Common questions

Does Spanlens add latency to my requests?

Typical overhead is 10–50ms per call; Spanlens is a thin pass-through proxy. Your requests flow straight to OpenAI / Anthropic / Gemini, and responses stream back. Logging is fire-and-forget via Vercel's waitUntil, so it never blocks the response.
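
To make the fire-and-forget pattern concrete, here is a minimal sketch of a pass-through route handler on Vercel, not Spanlens source code. The `UPSTREAM_URL` constant and `logTrace` function are placeholders for illustration.

```typescript
// Sketch of the pass-through + fire-and-forget logging pattern described above.
// UPSTREAM_URL and logTrace() are placeholders, not part of Spanlens's API.
import { waitUntil } from "@vercel/functions";

const UPSTREAM_URL = "https://api.openai.com/v1/chat/completions"; // placeholder upstream

async function logTrace(entry: { status: number; durationMs: number }): Promise<void> {
  // Placeholder: persist the trace (e.g. to your database) here.
  console.log("trace", entry);
}

export async function POST(req: Request): Promise<Response> {
  const started = Date.now();
  const body = await req.text();

  // Forward the request body and auth header to the upstream provider.
  const upstream = await fetch(UPSTREAM_URL, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: req.headers.get("authorization") ?? "",
    },
    body,
  });

  // Fire-and-forget: the logging promise runs after the response is sent,
  // so it adds no latency for the caller.
  waitUntil(logTrace({ status: upstream.status, durationMs: Date.now() - started }));

  // Stream the provider's response straight back to the client.
  return new Response(upstream.body, {
    status: upstream.status,
    headers: { "content-type": upstream.headers.get("content-type") ?? "application/json" },
  });
}
```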

Is my provider key safe?

Yes. Provider keys are encrypted at rest with AES-256-GCM in your Supabase database. They're decrypted only in memory when forwarding a request and are never logged. For extra control, you can self-host.
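
For reference, this is what AES-256-GCM encrypt/decrypt looks like with Node's built-in crypto module. It is a generic sketch of the technique, not Spanlens's actual implementation; key handling and field names are illustrative.

```typescript
// Illustrative AES-256-GCM round trip with node:crypto (not Spanlens source).
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// 32-byte key; in practice this comes from a secret manager or env var, never source code.
const key = randomBytes(32);

function encrypt(plaintext: string): { iv: string; tag: string; data: string } {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"), // auth tag detects tampering
    data: data.toString("base64"),
  };
}

function decrypt(box: { iv: string; tag: string; data: string }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, Buffer.from(box.iv, "base64"));
  decipher.setAuthTag(Buffer.from(box.tag, "base64"));
  return Buffer.concat([
    decipher.update(Buffer.from(box.data, "base64")),
    decipher.final(),
  ]).toString("utf8");
}

// Decrypt only in memory at request time; never log the plaintext key.
const stored = encrypt("sk-your-provider-key");
const providerKey = decrypt(stored);
```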

Can I use Spanlens with my existing Langfuse / Helicone setup?

Yes — Spanlens is a drop-in replacement at the baseURL level, so you can keep your existing setup and Spanlens running side by side during migration.
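
Here's what the baseURL swap looks like with the OpenAI Node SDK. The proxy URL below is a placeholder; substitute the endpoint for your own Spanlens deployment.

```typescript
// Point the SDK at the proxy instead of the provider; nothing else changes.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://your-spanlens-deployment.example.com/v1", // was https://api.openai.com/v1
});

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```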

What providers are supported?

OpenAI, Anthropic, and Google Gemini — including streaming responses. We match the upstream API 1:1, so any SDK that talks to those providers works.
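
Because the upstream API is matched 1:1, streaming also works without code changes. A short example with the OpenAI SDK, again using a placeholder proxy URL:

```typescript
// Streaming through the proxy: chunks are forwarded as the provider emits them.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://your-spanlens-deployment.example.com/v1", // placeholder proxy endpoint
});

const stream = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Stream a haiku about tracing." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```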