Prompts Playground

The Playground is like a SQL query console for prompts: select a version, adjust the model, temperature, and variables, then click Run to get an immediate result with cost and token counts. Use it to verify how a prompt actually behaves before deploying it to production.

How to use

  1. Click a prompt name on /prompts.
  2. Select the Playground sub-tab.
  3. Choose the version to run from the dropdown.
  4. Set Model, Temperature, and Max Tokens.
  5. If the prompt content contains {{variableName}} placeholders, a Variables input form appears automatically. Fill in the values.
  6. Click Run.
  7. The result panel shows the response text, token counts, cost, and latency.

Variable interpolation

Placeholders in the format {{variableName}} in the prompt body are replaced at run time by the corresponding value from the variables object. For example:

You are a {{language}} expert. Please answer {{userName}}'s question.

With language: "TypeScript" and userName: "Alice", the text actually sent to the model is:

You are a TypeScript expert. Please answer Alice's question.

Placeholders present in the template but missing from the variables input are returned in the missingVars array in the response. Those slots are replaced with an empty string and the run proceeds.
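A minimal sketch of this substitution logic in Python (illustrative only, not the Spanlens implementation):

```python
import re

PLACEHOLDER = re.compile(r"\{\{(\w+)\}\}")

def interpolate(template: str, variables: dict[str, str]) -> tuple[str, list[str]]:
    """Replace {{name}} placeholders; collect names missing from `variables`."""
    missing: list[str] = []

    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name in variables:
            return variables[name]
        missing.append(name)
        return ""  # missing slots become empty strings; the run still proceeds

    return PLACEHOLDER.sub(substitute, template), missing

text, missing_vars = interpolate(
    "You are a {{language}} expert. Please answer {{userName}}'s question.",
    {"language": "TypeScript", "userName": "Alice"},
)
# text → "You are a TypeScript expert. Please answer Alice's question."
# missing_vars → []
```

Any placeholder left unfilled would instead land in `missing_vars`, mirroring the `missingVars` field of the API response.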

Supported providers

The Playground currently supports:

  • OpenAI — GPT model family
  • Anthropic — Claude model family

Runs use your own provider key stored in Spanlens. The cost of each run is billed directly to your provider account — Spanlens does not cover it.

Run parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| promptVersionId | string (UUID) | — | ID of the prompt version to run (required) |
| providerKeyId | string (UUID) | — | Provider key to use (required) |
| model | string | — | Model to run (e.g. gpt-4o-mini, claude-3-5-haiku-20241022) |
| temperature | number | 0.7 | 0–2. Lower is more deterministic; higher is more creative. |
| maxTokens | integer | 1024 | 1–8192. Maximum tokens in the response. |
| variables | object | {} | Values to substitute for {{key}} placeholders in the prompt |

Response structure

| Field | Type | Description |
| --- | --- | --- |
| responseText | string | The model's generated response text |
| model | string | Actual model used (including provider-returned dated variant) |
| promptTokens | integer | Input token count |
| completionTokens | integer | Output token count |
| totalTokens | integer | Input + output total |
| costUsd | number or null | Estimated cost for this run (USD). Null if the model is not in the price table. |
| latencyMs | integer | Time from request start to response completion (ms) |
| missingVars | string[] | Placeholder names present in the template but absent from variables. Empty array means all variables were supplied. |
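In principle, costUsd is a per-million-token price lookup applied to the token counts. A sketch with illustrative prices (the real price table lives server-side and its figures may differ):

```python
# Hypothetical per-million-token prices in USD (input, output) — illustrative only.
PRICES_PER_MTOK = {
    "gpt-4o-mini": (0.15, 0.60),
}

def estimate_cost_usd(model: str, prompt_tokens: int, completion_tokens: int):
    """Return an estimated cost, or None when the model is not in the price table."""
    # Strip a dated suffix such as "-2024-07-18" before looking up the base model.
    base = model.rsplit("-20", 1)[0]
    if base not in PRICES_PER_MTOK:
        return None
    in_price, out_price = PRICES_PER_MTOK[base]
    return (prompt_tokens * in_price + completion_tokens * out_price) / 1_000_000
```

A null costUsd therefore just means the model had no price entry, not that the run failed.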

Rate limit

The Playground endpoint is capped at 20 requests per user per 60 seconds. Exceeding this returns 429 Too Many Requests. For automated pipelines, use Experiments instead.

Notes

  • Playground runs are not saved to the requests table. They do not appear on the Requests page or in the Prompts Calls tab, and have no effect on production metrics.
  • Cost is billed to your provider key. It does not count against your Spanlens plan usage.
  • If no provider key is registered, the run will fail. Go to Provider Keys to add one first.

API

POST /api/v1/prompts-playground/run

Auth: JWT (Authorization: Bearer $SPANLENS_JWT)

Request example

curl https://spanlens-server.vercel.app/api/v1/prompts-playground/run \
  -H "Authorization: Bearer $SPANLENS_JWT" \
  -H "Content-Type: application/json" \
  -d '{
    "promptVersionId": "ae1c3c1e-99eb-4f2a-b821-000000000001",
    "providerKeyId":   "b2d9f3a0-1234-5678-abcd-000000000002",
    "model": "gpt-4o-mini",
    "temperature": 0.5,
    "maxTokens": 512,
    "variables": {
      "language": "TypeScript",
      "userName": "Alice"
    }
  }'

Response example

{
  "responseText": "TypeScript is a statically typed superset of JavaScript...",
  "model": "gpt-4o-mini-2024-07-18",
  "promptTokens": 48,
  "completionTokens": 132,
  "totalTokens": 180,
  "costUsd": 0.000054,
  "latencyMs": 812,
  "missingVars": []
}

Response with missing variables

{
  "responseText": "Hello, . How can I help you today?",
  "model": "gpt-4o-mini-2024-07-18",
  "promptTokens": 42,
  "completionTokens": 89,
  "totalTokens": 131,
  "costUsd": 0.000039,
  "latencyMs": 654,
  "missingVars": ["userName"]
}

When missingVars is non-empty, those placeholders were replaced with an empty string. Fill in the missing values in the Variables form and re-run.
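In client code, a guard like the following (hypothetical helper) can catch incomplete runs before the result is used:

```python
def assert_all_vars_supplied(response: dict) -> dict:
    """Raise if the Playground run substituted empty strings for missing variables."""
    missing = response.get("missingVars", [])
    if missing:
        raise ValueError(f"missing prompt variables: {', '.join(missing)}")
    return response
```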


Related: Prompts (version management + A/B comparison), Experiments (offline dataset comparison), Evals (LLM-as-judge quality scoring), /prompts dashboard.