Workspace API key
supernova-lab
sn_live_supernova_lab_9f3e1a27c4b8d5
One key per workspace. Every call is tagged so usage, spend, and billing roll up to the right customer automatically.
Integration
Wrap your LLM or agent client once. Every request, response, tool call, and token routes through Supernova so you see exactly where spend goes — no prompts to rewrite, no infra to run.
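The "wrap once, capture everything" idea can be sketched in plain Python. This is an illustrative proxy, not the actual Supernova SDK — the class name, the trace-log shape, and the demo client below are all assumptions made for the sketch:

```python
import functools


class RecordingWrapper:
    """Illustrative proxy: forwards every attribute and call to the
    wrapped client while recording each invocation. A wrapper like
    supernova.wrap could work along these lines (assumption)."""

    def __init__(self, client, trace_log=None):
        self._client = client
        self._trace_log = trace_log if trace_log is not None else []

    def __getattr__(self, name):
        attr = getattr(self._client, name)
        if callable(attr):
            @functools.wraps(attr)
            def traced(*args, **kwargs):
                result = attr(*args, **kwargs)
                # Tag the call so usage can roll up per workspace.
                self._trace_log.append({"method": name, "kwargs": kwargs})
                return result
            return traced
        # Nested namespaces (e.g. client.chat.completions) share the log.
        return RecordingWrapper(attr, self._trace_log)


class FakeLLM:
    """Stand-in client for the demo; any real SDK object works the same way."""

    def complete(self, prompt):
        return "ok"


wrapped = RecordingWrapper(FakeLLM())
wrapped.complete(prompt="Summarize this week's support tickets.")
```

Because the proxy intercepts at the attribute level, no prompts change and no calls are rewritten — the wrapped client behaves exactly like the original, with a trace recorded on the side.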
Step 1
pip install supernova
Pick your runtime — Python, TypeScript, or Go. More languages on request.
Step 2
export SUPERNOVA_API_KEY="sn_live_supernova_lab_9f3e1a27c4b8d5"
Stored once as an environment variable. Rotate any time from this page.
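A quick sanity check at startup catches a missing or mis-pasted key before the first call. The key pattern below is inferred from the example key shown above (only the sn_live_ prefix appears on this page), so treat it as an assumption rather than a documented format:

```python
import os
import re

# Shape inferred from the example key above: sn_live_<workspace>_<hex>.
# Assumption for illustration, not a documented Supernova key format.
KEY_PATTERN = re.compile(r"^sn_live_[a-z0-9_]+$")


def load_supernova_key():
    """Read SUPERNOVA_API_KEY from the environment and fail fast
    if it is missing or does not look like a workspace key."""
    key = os.environ.get("SUPERNOVA_API_KEY", "")
    if not KEY_PATTERN.match(key):
        raise RuntimeError(
            "SUPERNOVA_API_KEY is missing or malformed; "
            "re-copy or rotate it from the workspace page."
        )
    return key
```

Failing fast here is cheaper than debugging silent 401s after deploy; if you rotate the key, only the environment variable changes.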
Step 3
client = supernova.wrap(your_llm_client)
Works with OpenAI, Anthropic, LangChain, Vercel AI SDK, and any HTTP client.
Sample integration
import os
from openai import OpenAI
from supernova import Supernova
supernova = Supernova(api_key=os.environ["SUPERNOVA_API_KEY"])
# Wrap any LLM or agent client once — every call is captured.
client = supernova.wrap(OpenAI())
response = client.chat.completions.create(
model="gpt-4.1",
messages=[{"role": "user", "content": "Summarize this week's support tickets."}],
)
Supported integrations
OpenAI — Chat Completions, Responses API, tools, streaming, function calling.
Anthropic — Messages API including extended thinking, tool use, and prompt caching.
LangChain — agents, chains, retrievers, instrumented through one callback.
Vercel AI SDK — drop-in wrapper for generateText, streamText, and tool calls.
HTTP clients — drop-in fetch wrapper for in-house models or internal proxies.
Agent frameworks — CrewAI, AutoGen, Mastra, anything that routes through one SDK.
What Supernova sees
Verify
After the first wrapped call, your workspace lights up with live traces. If nothing appears within 60 seconds, double-check that SUPERNOVA_API_KEY matches the key above.