Spanlens Docs
LLM observability — cost tracking, agent tracing, PII + prompt-injection detection, and model recommendations for OpenAI / Anthropic / Gemini calls.
Quick start
30-second wizard setup or manual integration in 2 lines of code.
@spanlens/sdk
TypeScript SDK reference — createOpenAI, observe, span helpers, trace API.
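The "manual integration in 2 lines of code" path from Quick start looks roughly like this with the SDK; `createOpenAI` is listed in the SDK reference above, but the option names shown here are illustrative, so check the reference for the exact signature:

```ts
import { createOpenAI } from "@spanlens/sdk";

// Illustrative options: the apiKey field name is an assumption for this sketch;
// see the SDK reference for the real createOpenAI signature.
const openai = createOpenAI({ apiKey: process.env.SPANLENS_API_KEY });

// From here on, the client is used like the regular OpenAI SDK.
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello from Spanlens" }],
});
console.log(completion.choices[0].message.content);
```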
Direct proxy (any language)
Use Python, Ruby, Go, or raw HTTP — just swap the base URL.
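As an illustration of the base-URL swap, here is the raw-HTTP route sketched in TypeScript; the proxy host is a placeholder, and whether the Authorization header carries your provider key or a Spanlens key depends on your setup:

```ts
// Placeholder proxy host: the request and response shape match OpenAI exactly,
// only the base URL changes.
const res = await fetch("https://proxy.spanlens.example/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Assumption for this sketch: the proxy accepts your usual provider key.
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Ping" }],
  }),
});
console.log(await res.json());
```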
Self-hosting
Run Spanlens on your own infra with one Docker command. Your data stays yours.
Get started in 30 seconds
```bash
npx @spanlens/cli init
```

Common questions
Does Spanlens add latency to my requests?
Typical overhead is 10–50 ms per call; Spanlens is a thin pass-through proxy. Your requests flow through to OpenAI / Anthropic / Gemini, and responses stream straight back. Logging is fire-and-forget via Vercel's waitUntil, so it never blocks the response.
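A sketch of that fire-and-forget pattern (not Spanlens source; the upstream handling is simplified, and the `logSpan` helper and ingest URL are hypothetical):

```ts
import { waitUntil } from "@vercel/functions";

// The upstream response is returned immediately; waitUntil keeps the logging
// promise alive after the response has been sent, so logging never blocks.
export async function POST(req: Request): Promise<Response> {
  const body = await req.text();

  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: req.headers.get("Authorization") ?? "",
    },
    body,
  });

  // Fire-and-forget: logging finishes after the response is flushed.
  waitUntil(logSpan({ status: upstream.status, at: Date.now() }));

  // Stream the upstream body straight back without buffering.
  return new Response(upstream.body, {
    status: upstream.status,
    headers: {
      "Content-Type": upstream.headers.get("Content-Type") ?? "application/json",
    },
  });
}

// Hypothetical ingest helper, just for this sketch.
async function logSpan(record: { status: number; at: number }): Promise<void> {
  await fetch("https://ingest.example.com/spans", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  });
}
```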
Is my provider key safe?
Yes. Provider keys are AES-256-GCM encrypted at rest in your Supabase project. They're decrypted in memory only when forwarding a request and are never logged. For extra control, self-host.
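For illustration, the AES-256-GCM encrypt/decrypt cycle looks like this in Node. This is a sketch of the technique, not Spanlens source; in practice the master key comes from a secret store rather than being generated inline:

```ts
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// 32-byte master key for AES-256-GCM (in practice, loaded from a secret store).
const masterKey = randomBytes(32);

function encrypt(plaintext: string): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12); // 96-bit nonce, the GCM recommendation
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decrypt(box: { iv: Buffer; tag: Buffer; data: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", masterKey, box.iv);
  decipher.setAuthTag(box.tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString("utf8");
}

const stored = encrypt("sk-your-provider-key"); // what lands at rest
console.log(decrypt(stored)); // decrypted only in memory when forwarding
```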
Can I use Spanlens with my existing Langfuse / Helicone setup?
Yes — Spanlens is a drop-in replacement at the baseURL level. You can keep both running side-by-side during migration.
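A sketch of running both side by side with the official OpenAI SDK during migration; both base URLs below are placeholders for your actual Helicone and Spanlens endpoints:

```ts
import OpenAI from "openai";

// Two clients, same key, different base URLs; route traffic to either one.
const viaHelicone = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://helicone.example/v1", // placeholder
});

const viaSpanlens = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://proxy.spanlens.example/v1", // placeholder
});
```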
What providers are supported?
OpenAI, Anthropic, and Google Gemini — including streaming responses. We match the upstream API 1:1, so any SDK that talks to those providers works.
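For example, standard SDK streaming works unchanged through the proxy (the base URL below is a placeholder for your Spanlens endpoint):

```ts
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://proxy.spanlens.example/v1", // placeholder
});

// Streaming is passed through 1:1, so the usual async iteration works.
const stream = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Stream me a haiku" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```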