Quick start
Two paths depending on your stack. Both take about 30 seconds and produce the same result — your LLM calls flow through Spanlens and show up in your dashboard.
Prerequisites
- A Spanlens account
- A Project + API key (created in /projects)
- Your provider key(s) registered in /settings — OpenAI, Anthropic, Gemini
Path A — CLI wizard (Next.js, recommended)
In your Next.js project root:
```bash
npx @spanlens/cli init
```

The wizard will:
- Detect your framework + package manager
- Ask for your Spanlens API key (one-time paste)
- Write SPANLENS_API_KEY to .env.local
- Install @spanlens/sdk with your package manager
- Scan your codebase for new OpenAI({...}) calls and rewrite each into createOpenAI() (a before/after sketch follows this list)
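For illustration, the rewrite looks roughly like this; the "before" code is a hypothetical example file, and only the client construction changes:

```ts
// Before (hypothetical example): a directly constructed OpenAI client
//   import OpenAI from 'openai'
//   const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

// After the wizard's rewrite: the Spanlens helper, which routes calls through the proxy
import { createOpenAI } from '@spanlens/sdk/openai'

const openai = createOpenAI()
```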
Then just:
- Add SPANLENS_API_KEY to your production env (Vercel / Railway / Fly; a Vercel CLI example follows below)
- Redeploy
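If you deploy on Vercel, one way to do this is with the Vercel CLI (Railway and Fly have equivalent commands in their own CLIs and dashboards):

```bash
# Add the key to the production environment (you'll be prompted to paste the value)
vercel env add SPANLENS_API_KEY production

# Redeploy so the new value is picked up
vercel --prod
```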
Preview the changes before applying: npx @spanlens/cli init --dry-run
Path B — Manual (any TypeScript / JavaScript project)
Step 1 — Install the SDK
```bash
npm install @spanlens/sdk
# or
pnpm add @spanlens/sdk
```

Step 2 — Add environment variable
Copy your Spanlens API key from the dashboard and add it to your env file:
```env
# .env.local
SPANLENS_API_KEY=sl_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Step 3 — Use the pre-configured client helpers
OpenAI
```ts
import { createOpenAI } from '@spanlens/sdk/openai'

const openai = createOpenAI()
const res = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hi' }],
})
```

Anthropic
```ts
import { createAnthropic } from '@spanlens/sdk/anthropic'

const anthropic = createAnthropic()
const msg = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hi' }],
})
```

Gemini
```ts
import { createGemini } from '@spanlens/sdk/gemini'

const genAI = createGemini()
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' })
const result = await model.generateContent('Hi')
```

Verify it works
Run any LLM call through the configured client (or use the minimal test script sketched after this list), then visit /requests. A new row should appear within a few seconds with:
- The model actually used (OpenAI returns dated variants like gpt-4o-mini-2024-07-18)
- Prompt / completion / total tokens
- Cost in USD
- Latency in ms
- Full request + response bodies (up to 10KB)
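If you don't have a convenient code path to trigger yet, a throwaway script like the sketch below (hypothetical filename, run with tsx or ts-node) generates a single test request; the logged usage should match the token columns in /requests:

```ts
// verify-spanlens.ts (hypothetical): fire one request so a row appears in /requests
import { createOpenAI } from '@spanlens/sdk/openai'

async function main() {
  const openai = createOpenAI()
  const res = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'ping' }],
  })
  // res.model is the dated variant; res.usage holds prompt/completion/total tokens
  console.log(res.model, res.usage)
}

main()
```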
Troubleshooting
Request not showing up in /requests
- Confirm SPANLENS_API_KEY is set in both .env.local AND your deployment env
- After adding env vars to Vercel, redeploy — new env values don't apply to existing deployments
- Check the Network tab — your request should hit spanlens-server.vercel.app/proxy/*, not api.openai.com directly
Getting “401 Incorrect API key”
You probably replaced apiKey but forgot to set baseURL. Use createOpenAI() — it sets both for you.
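For context, a manual setup has to set both fields on the raw client. The sketch below is illustrative only (the exact proxy path is an assumption, not a documented endpoint); prefer createOpenAI(), which configures both for you:

```ts
import OpenAI from 'openai'

// Illustrative only: the /proxy/openai path is an assumption.
const openai = new OpenAI({
  apiKey: process.env.SPANLENS_API_KEY, // your Spanlens key, not a provider key
  baseURL: 'https://spanlens-server.vercel.app/proxy/openai', // route requests through Spanlens
})
```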
Getting mock data instead of real LLM responses
Some apps fall back to mock responses when SPANLENS_API_KEY is missing. Double-check the env var is actually present at runtime: console.log(!!process.env.SPANLENS_API_KEY).
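A fail-fast guard at startup (optional, not part of the SDK) makes this class of issue obvious:

```ts
// Optional startup check: surface a missing key immediately instead of silently using mocks
if (!process.env.SPANLENS_API_KEY) {
  throw new Error('SPANLENS_API_KEY is not set: check .env.local and your deployment env')
}
```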
Next: see the SDK reference for agent tracing and advanced usage, or direct proxy for non-Node environments.