Prompts Playground
Like a SQL query console for prompts — select a version, adjust model, temperature, and variables, then click Run to get an immediate result with cost and token counts. Verify how a prompt actually behaves before deploying it to production.
How to use
- Click a prompt name on /prompts.
- Select the Playground sub-tab.
- Choose the version to run from the dropdown.
- Set Model, Temperature, and Max Tokens.
- If the prompt content contains {{variableName}} placeholders, a Variables input form appears automatically. Fill in the values.
- Click Run.
- The result panel shows the response text, token counts, cost, and latency.
Variable interpolation
Placeholders in the format {{variableName}} in the prompt body are replaced at run time by the corresponding value from the variables object. For example:
```text
You are a {{language}} expert. Please answer {{userName}}'s question.
```

With language: "TypeScript" and userName: "Alice", the text actually sent to the model is:

```text
You are a TypeScript expert. Please answer Alice's question.
```

Placeholders present in the template but missing from the variables input are returned in the missingVars array in the response. Those slots are replaced with an empty string and the run proceeds.
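The interpolation rule described above can be sketched in TypeScript. This is a hedged illustration of the documented behavior (substitute known values, collect missing placeholder names, replace missing slots with an empty string), not the actual Spanlens implementation:

```typescript
// Sketch of {{variableName}} interpolation as documented: known variables are
// substituted, unknown ones are recorded in missingVars and become "".
function interpolate(
  template: string,
  variables: Record<string, string>,
): { text: string; missingVars: string[] } {
  const missingVars: string[] = [];
  const text = template.replace(/\{\{(\w+)\}\}/g, (_match: string, name: string) => {
    if (name in variables) return variables[name];
    if (!missingVars.includes(name)) missingVars.push(name);
    return ""; // missing placeholder becomes an empty string; the run still proceeds
  });
  return { text, missingVars };
}
```

With `{ language: "TypeScript", userName: "Alice" }` this reproduces the substituted text shown above; with an empty variables object it returns both placeholder names in `missingVars`.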
Supported providers
The Playground currently supports:
- OpenAI — GPT model family
- Anthropic — Claude model family
Runs use your own provider key stored in Spanlens. The cost of each run is billed directly to your provider account — Spanlens does not cover it.
Run parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| promptVersionId | string (UUID) | — | ID of the prompt version to run (required) |
| providerKeyId | string (UUID) | — | Provider key to use (required) |
| model | string | — | Model to run (e.g. gpt-4o-mini, claude-3-5-haiku-20241022) |
| temperature | number | 0.7 | 0–2. Lower is more deterministic; higher is more creative. |
| maxTokens | integer | 1024 | 1–8192. Maximum tokens in the response. |
| variables | object | {} | Values to substitute for {{key}} placeholders in the prompt |
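The parameter table above can be expressed as an illustrative TypeScript shape. The interface and the default-filling helper are assumptions derived from the table, not an official Spanlens SDK type:

```typescript
// Illustrative request-body shape for the Playground run endpoint,
// derived from the parameter table (hypothetical, not an official SDK type).
interface PlaygroundRunRequest {
  promptVersionId: string; // UUID of the prompt version to run (required)
  providerKeyId: string; // UUID of the provider key to use (required)
  model: string; // e.g. "gpt-4o-mini" or "claude-3-5-haiku-20241022"
  temperature?: number; // 0–2; documented default 0.7
  maxTokens?: number; // 1–8192; documented default 1024
  variables?: Record<string, string>; // values for {{key}} placeholders
}

// Fills in the documented defaults for omitted optional fields.
function withDefaults(req: PlaygroundRunRequest): Required<PlaygroundRunRequest> {
  return {
    temperature: 0.7,
    maxTokens: 1024,
    variables: {},
    ...req,
  } as Required<PlaygroundRunRequest>;
}
```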
Response structure
| Field | Type | Description |
|---|---|---|
| responseText | string | The model's generated response text |
| model | string | Actual model used (including provider-returned dated variant) |
| promptTokens | integer | Input token count |
| completionTokens | integer | Output token count |
| totalTokens | integer | Input + output total |
| costUsd | number \| null | Estimated cost for this run (USD). Null if the model is not in the price table. |
| latencyMs | integer | Time from first request to response complete (ms) |
| missingVars | string[] | Placeholder names present in the template but absent from variables. Empty array means all variables were supplied. |
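For consumers typing the response, the table above maps to the following illustrative TypeScript interface (an assumption derived from the field table, not an official SDK type):

```typescript
// Illustrative response shape for a Playground run,
// derived from the field table (hypothetical, not an official SDK type).
interface PlaygroundRunResponse {
  responseText: string;
  model: string; // actual model used, may be a provider-returned dated variant
  promptTokens: number;
  completionTokens: number;
  totalTokens: number; // promptTokens + completionTokens
  costUsd: number | null; // null when the model is missing from the price table
  latencyMs: number;
  missingVars: string[]; // empty when every placeholder was supplied
}
```

Note the invariant the table implies: `totalTokens` should always equal `promptTokens + completionTokens`.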
Rate limit
The Playground endpoint is capped at 20 requests per user per 60 seconds. Exceeding this returns 429 Too Many Requests. For automated pipelines, use Experiments instead.
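A client hitting the 20-requests-per-60-seconds cap can back off and retry. The sketch below is a hedged illustration: the endpoint URL and JWT header follow the API section of this page, but the Retry-After header is an assumption (the docs only specify the 429 status), so it falls back to waiting a full 60-second window:

```typescript
// Hedged sketch of client-side 429 handling for the Playground endpoint.
// fetchFn is injectable for testing; defaults to the global fetch.
type FetchLike = typeof fetch;

async function runWithRetry(
  body: unknown,
  fetchFn: FetchLike = fetch,
  maxAttempts = 3,
): Promise<unknown> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetchFn("https://spanlens-server.vercel.app/api/v1/prompts-playground/run", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.SPANLENS_JWT}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    });
    if (res.status !== 429) return res.json();
    // Assumption: a Retry-After header may be present; otherwise wait out the 60 s window.
    const waitSec = Number(res.headers.get("retry-after") ?? 60);
    await new Promise((resolve) => setTimeout(resolve, waitSec * 1000));
  }
  throw new Error("Still rate limited after retries");
}
```

For sustained or automated load, the Experiments feature mentioned above remains the intended path rather than retry loops.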
Notes
- Playground runs are not saved to the requests table. They do not appear on the Requests page or in the Prompts Calls tab, and have no effect on production metrics.
- Cost is billed to your provider key. It does not count against your Spanlens plan usage.
- If no provider key is registered, the run will fail. Go to Provider Keys to add one first.
API
```bash
POST /api/v1/prompts-playground/run
```

Auth: JWT (Authorization: Bearer $SPANLENS_JWT)
Request example

```bash
curl https://spanlens-server.vercel.app/api/v1/prompts-playground/run \
  -H "Authorization: Bearer $SPANLENS_JWT" \
  -H "Content-Type: application/json" \
  -d '{
    "promptVersionId": "ae1c3c1e-99eb-4f2a-b821-000000000001",
    "providerKeyId": "b2d9f3a0-1234-5678-abcd-000000000002",
    "model": "gpt-4o-mini",
    "temperature": 0.5,
    "maxTokens": 512,
    "variables": {
      "language": "TypeScript",
      "userName": "Alice"
    }
  }'
```

Response example
```json
{
  "responseText": "TypeScript is a statically typed superset of JavaScript...",
  "model": "gpt-4o-mini-2024-07-18",
  "promptTokens": 48,
  "completionTokens": 132,
  "totalTokens": 180,
  "costUsd": 0.000054,
  "latencyMs": 812,
  "missingVars": []
}
```

Response with missing variables
```json
{
  "responseText": "Hello, . How can I help you today?",
  "model": "gpt-4o-mini-2024-07-18",
  "promptTokens": 42,
  "completionTokens": 89,
  "totalTokens": 131,
  "costUsd": 0.000039,
  "latencyMs": 654,
  "missingVars": ["userName"]
}
```

When missingVars is non-empty, those placeholders were replaced with an empty string. Fill in the missing values in the Variables form and re-run.
Related: Prompts (version management + A/B comparison), Experiments (offline dataset comparison), Evals (LLM-as-judge quality scoring), /prompts dashboard.