Quick start

There are two paths, depending on your stack. Both take about 30 seconds and produce the same result: your LLM calls flow through Spanlens and show up in your dashboard.

Prerequisites

  1. A Spanlens account
  2. A Project + API key (created in /projects)
  3. Your provider key(s) registered in /settings — OpenAI, Anthropic, Gemini

Path A — CLI wizard (Next.js, recommended)

In your Next.js project root:

npx @spanlens/cli init

The wizard will:

  1. Detect your framework + package manager
  2. Ask for your Spanlens API key (one-time paste)
  3. Write SPANLENS_API_KEY to .env.local
  4. Install @spanlens/sdk with your package manager
  5. Scan your codebase for new OpenAI({...}) calls and rewrite each into createOpenAI()
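
For example, step 5's rewrite turns a direct client construction into the Spanlens helper. An illustrative sketch; the wizard's exact edits depend on your code:

// before
import OpenAI from 'openai'
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

// after
import { createOpenAI } from '@spanlens/sdk/openai'
const openai = createOpenAI()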

Then just:

  1. Add SPANLENS_API_KEY to your production env (Vercel / Railway / Fly; see the CLI example below)
  2. Redeploy
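
On Vercel, for example, both steps can be done from the CLI (the dashboard UI is equivalent):

# paste your key when prompted
vercel env add SPANLENS_API_KEY production
# redeploy so the new value is picked up
vercel --prod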

Preview the changes before applying: npx @spanlens/cli init --dry-run

Path B — Manual (any TypeScript / JavaScript project)

Step 1 — Install the SDK

npm install @spanlens/sdk
# or
pnpm add @spanlens/sdk

Step 2 — Add the environment variable

Copy your Spanlens API key from the dashboard and add it to your env file:

# .env.local
SPANLENS_API_KEY=sl_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Step 3 — Use the pre-configured client helpers

OpenAI

import { createOpenAI } from '@spanlens/sdk/openai'

const openai = createOpenAI()

const res = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hi' }],
})
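
The client returned by createOpenAI() behaves like the standard OpenAI SDK client (it just pre-sets apiKey and baseURL for you), so normal SDK features should pass through. A sketch with streaming, assuming the proxy forwards streamed chunks:

const stream = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hi' }],
  stream: true,
})
for await (const chunk of stream) {
  // each chunk carries an incremental delta of the completion
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}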

Anthropic

import { createAnthropic } from '@spanlens/sdk/anthropic'

const anthropic = createAnthropic()

const msg = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hi' }],
})

Gemini

import { createGemini } from '@spanlens/sdk/gemini'

const genAI = createGemini()
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' })
const result = await model.generateContent('Hi')

Verify it works

Run any LLM call through the configured client, then visit /requests. A new row should appear within a few seconds with:

  • The model actually used (OpenAI returns dated variants like gpt-4o-mini-2024-07-18)
  • Prompt / completion / total tokens
  • Cost in USD
  • Latency in ms
  • Full request + response bodies (up to 10KB)
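
If you need a call to trigger, a throwaway Next.js route like this one works (a hypothetical file; any call through the configured client does the job):

// app/api/spanlens-test/route.ts (hypothetical test route)
import { createOpenAI } from '@spanlens/sdk/openai'

export async function GET() {
  const openai = createOpenAI()
  const res = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'ping' }],
  })
  // the matching row in /requests should show this model plus tokens, cost, and latency
  return Response.json({ reply: res.choices[0]?.message?.content })
}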

Troubleshooting

Request not showing up in /requests

  1. Confirm SPANLENS_API_KEY is set in both .env.local AND your deployment env
  2. After adding env vars to Vercel, redeploy — new env values don't apply to existing deployments
  3. Check the Network tab — your request should hit spanlens-server.vercel.app/proxy/*, not api.openai.com directly

Getting “401 Incorrect API key”

You probably replaced apiKey but forgot to set baseURL. Use createOpenAI() — it sets both for you.
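
If you do wire up the raw client yourself, both fields have to change together. A sketch of the manual equivalent (the proxy path below is illustrative; copy the real URL from your dashboard):

import OpenAI from 'openai'

const openai = new OpenAI({
  apiKey: process.env.SPANLENS_API_KEY, // your Spanlens key, not your provider key
  baseURL: 'https://spanlens-server.vercel.app/proxy/openai', // illustrative path under /proxy/*
})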

Getting mock data instead of real LLM responses

Some apps fall back to mock responses when SPANLENS_API_KEY is missing. Double-check the env var is actually present at runtime: console.log(!!process.env.SPANLENS_API_KEY).


Next: see the SDK reference for agent tracing and advanced usage, or direct proxy for non-Node environments.