# OpenTelemetry (OTLP)

Spanlens accepts traces from any OpenTelemetry SDK that emits OTLP/HTTP JSON using the gen_ai semantic conventions. No code rewrite required — configure your existing OTel exporter and spans flow directly into the waterfall dashboard.

## Endpoint

```text
POST https://server.spanlens.io/v1/traces
Content-Type: application/json
Authorization: Bearer sl_live_<your-key>
```

The path follows the OTLP spec exactly: `POST /v1/traces`. Only JSON encoding is supported (Protobuf is not).
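To see the wire format directly, the following sketch posts a single hand-built span as OTLP/HTTP JSON using only the standard library. The payload shape follows the OTLP JSON encoding (64-bit timestamps as strings); the trace/span IDs and API key are placeholders, and the final send is commented out so the sketch runs offline.

```python
import json
import time
import urllib.request

# Minimal OTLP/JSON payload: one resource, one scope, one span.
now_ns = time.time_ns()
payload = {
    "resourceSpans": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "my-agent"}},
        ]},
        "scopeSpans": [{
            "scope": {"name": "manual"},
            "spans": [{
                "traceId": "5b8efff798038103d269b633813fc60c",  # 32 hex chars
                "spanId": "eee19b7ec3c1b174",                    # 16 hex chars
                "name": "answer-question",
                "kind": 1,  # SPAN_KIND_INTERNAL
                "startTimeUnixNano": str(now_ns),
                "endTimeUnixNano": str(now_ns + 1_200_000_000),
                "attributes": [
                    {"key": "gen_ai.operation.name", "value": {"stringValue": "chat"}},
                    {"key": "gen_ai.request.model", "value": {"stringValue": "gpt-4o"}},
                ],
            }],
        }],
    }]
}

req = urllib.request.Request(
    "https://server.spanlens.io/v1/traces",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sl_live_YOUR_KEY",
    },
)
# urllib.request.urlopen(req)  # uncomment to actually send
```

In practice you would let an OTel SDK build and batch this payload for you, as shown in the quick starts below.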

## Authentication

Use your Spanlens project API key (sl_live_…) as a Bearer token. Most OTel SDKs let you inject custom headers into the exporter:

```python
# Python — opentelemetry-exporter-otlp-proto-http
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    # The constructor endpoint is used verbatim, so include the /v1/traces path
    endpoint="https://server.spanlens.io/v1/traces",
    headers={"Authorization": "Bearer sl_live_YOUR_KEY"},
)
```

## Required attributes

Spanlens maps spans using the OpenTelemetry GenAI Semantic Conventions. At minimum, attach these attributes to get useful data in the dashboard:

| Attribute | Type | Description |
|---|---|---|
| `gen_ai.operation.name` | string | `chat`, `text_completion`, `execute_tool`, `embeddings`, `retrieval`, or `generate_content` |
| `gen_ai.provider.name` | string | `openai`, `anthropic`, `gemini`, … |
| `gen_ai.request.model` | string | Model name used for the request (e.g. `gpt-4o`) |
| `gen_ai.usage.input_tokens` | int | Prompt / input token count |
| `gen_ai.usage.output_tokens` | int | Completion / output token count |
| `gen_ai.input.messages` | string | Serialised input message array (optional, shown in span detail) |
| `gen_ai.output.messages` | string | Serialised output / response (optional) |

## Span type mapping

Spanlens infers the span type from `gen_ai.operation.name`:

| `gen_ai.operation.name` | Spanlens `span_type` |
|---|---|
| `chat`, `text_completion`, `generate_content` | `llm` |
| `execute_tool` | `tool` |
| `embeddings` | `embedding` |
| `retrieval` | `retrieval` |
| (anything else) | `custom` |
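The mapping is applied server-side, but if you want to predict how a span will be classified, this sketch mirrors the table above (the function name and sets are illustrative, not part of any Spanlens API):

```python
# Operations that map to the "llm" span type
LLM_OPS = {"chat", "text_completion", "generate_content"}

def span_type(operation_name: str) -> str:
    """Mirror of the gen_ai.operation.name -> span_type table."""
    if operation_name in LLM_OPS:
        return "llm"
    if operation_name == "execute_tool":
        return "tool"
    if operation_name == "embeddings":
        return "embedding"
    if operation_name == "retrieval":
        return "retrieval"
    return "custom"  # any unrecognised operation
```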

## Quick start — Python (opentelemetry-sdk)

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Configure the OTLP exporter (the endpoint must include the /v1/traces path)
exporter = OTLPSpanExporter(
    endpoint="https://server.spanlens.io/v1/traces",
    headers={"Authorization": "Bearer sl_live_YOUR_KEY"},
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-agent")

# Create a trace
with tracer.start_as_current_span("answer-question") as span:
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    span.set_attribute("gen_ai.usage.input_tokens", 120)
    span.set_attribute("gen_ai.usage.output_tokens", 80)
    span.set_attribute("gen_ai.input.messages", '[{"role":"user","content":"Hello"}]')
    # ... call your LLM here
```

## Quick start — Python with openai-agents SDK

The OpenAI Agents SDK emits gen_ai spans automatically. Point it at Spanlens:

```python
from agents.tracing import set_trace_processor
from agents.tracing.otlp import OTLPTraceProcessor

set_trace_processor(
    OTLPTraceProcessor(
        endpoint="https://server.spanlens.io/v1/traces",
        headers={"Authorization": "Bearer sl_live_YOUR_KEY"},
    )
)
```

## Quick start — Node.js

```ts
import { NodeSDK } from '@opentelemetry/sdk-node'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: 'https://server.spanlens.io/v1/traces',
    headers: { Authorization: 'Bearer sl_live_YOUR_KEY' },
  }),
})

sdk.start()
```

## Response codes

| Code | Body | Meaning |
|---|---|---|
| 200 | `{}` | All spans accepted |
| 200 | `{"partialSuccess":{"rejectedSpans":N}}` | Some spans could not be persisted (`N` is the count) |
| 400 | `{"error":"..."}` | Invalid JSON body |
| 401 | `{"error":"Invalid API key"}` | Missing or invalid `sl_live_*` key |
| 415 | `{"error":"..."}` | Protobuf encoding not supported; use `application/json` |
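Note that a `200` can still carry a partial failure. A client that wants to surface dropped spans should check the `partialSuccess` field; a minimal sketch (the helper name is illustrative, not part of any SDK):

```python
import json

def rejected_count(response_body: str) -> int:
    """Return how many spans the server rejected (0 = full success)."""
    body = json.loads(response_body or "{}")
    return body.get("partialSuccess", {}).get("rejectedSpans", 0)
```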

## Notes

- **Trace IDs are external.** OTel trace IDs (32-char hex) are stored separately from Spanlens' internal UUIDs. Duplicate exports of the same OTel trace are idempotent: the trace row is upserted on `(organization_id, external_trace_id)`.
- **Parent-child linking.** Span parent relationships are resolved after each batch import by an internal SQL function that maps `external_parent_span_id` → `parent_span_id` (UUID). The Gantt waterfall shows the full tree.
- **Cost calculation.** If `gen_ai.provider.name` and `gen_ai.request.model` match a known entry in Spanlens' model price table, cost is calculated automatically with no extra configuration.
- **Protobuf not supported.** Configure your OTel SDK to use HTTP/JSON encoding (`application/json`). In Python this is `opentelemetry-exporter-otlp-proto-http` with `OTEL_EXPORTER_OTLP_PROTOCOL=http/json`.

Related: Traces overview, @spanlens/sdk (native JS/TS SDK), /traces dashboard.