Supernova Agentic Workflow Analysis and Optimization
Last import · completed

Metric scope

Workflow metrics are projected; evidence stays tied to analyzed threads

Projected workflow runs: 63 / mo. This workflow represented 19 of 300 analyzed threads.
Analyzed workflow sample: 19 threads. Findings, recommendations, and evidence cards are still anchored to the normalized workflow sample.
Projection factor: 3.3x. Applied to this workflow's spend, savings, runs, and token totals. Confidence: medium.
Source pool: 996 sessions. The full source-pool population used by the dashboard projection.

derived-flight-delay-compensation · Based on 19 threads · medium confidence

Flight Delay Compensation

Derived primarily from user-authored prompts across a 300-thread slice. Full-slice prompt clustering ran on every thread, and Claude consolidated the major workflow types from cluster exemplars because the slice exceeds the non-sampling threshold.

Projected spend / mo: $1.10 (sample $0.33)
Projected savings / mo: $0.27 (sample $0.08 · could cut spend by ~25%)
Projected runs / mo: 63 (sample 19)
Projected total tokens: 284.8K (avg 4.5K per run)
Projected input / output: 54.2K / 230.5K
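The projected figures are simple scalings of the sample metrics. A minimal sketch of the arithmetic, assuming the 3.3x factor is just the 996-session source pool divided by the 300-thread analyzed slice (that ratio, 3.32, reproduces the reported values):

```python
# ASSUMPTION: the 3.3x projection factor is source pool / analyzed slice.
factor = 996 / 300  # 3.32, displayed rounded as 3.3x

sample_spend = 0.33    # $ / mo, sample
sample_savings = 0.08  # $ / mo, sample
sample_runs = 19       # threads in the analyzed sample

print(round(sample_spend * factor, 2))    # 1.1  -> reported $1.10
print(round(sample_savings * factor, 2))  # 0.27 -> reported $0.27
print(round(sample_runs * factor))        # 63   -> reported 63 runs / mo
```

If the factor is instead an independently fitted value, the same scaling applies; only the constant changes.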

This workflow

Token and spend trend

[Chart: 8-hour buckets · series: input tokens, output tokens]

Model mix

Tokens and spend by model

4 models

Tokens by model

input + output
Kimi-K2: 52.1K tokens · 98 calls · 34.6% of total
Claude Opus 4.5: 39.2K tokens · 67 calls · 26.1% of total
gemini-3-pro-preview: 30.4K tokens · 85 calls · 20.2% of total
GPT-5.2: 28.6K tokens · 89 calls · 19% of total

Spend by model

estimated cost
Claude Opus 4.5: $0.77 · 68.7% of total
GPT-5.2: $0.22 · 19.8% of total
Kimi-K2: $0.09 · 8.5% of total
gemini-3-pro-preview: $0.03 · 3% of total

Opportunities

2 opportunities for this workflow

$0.27 projected
Tool misuse

Failed benchmark outcomes are still paying the full workflow cost

The imported outcome labels show a high failure rate after the workflow has already spent tokens and tool calls, which points to missing early exits or weak preflight checks.

$0.13 projected / month saved (sample $0.04/mo)
high risk · medium confidence
Recommended changes
  • Compare passing and failing traces for this workflow and add an early gate before the expensive tool loop starts.
  • Use the imported outcome label as an evaluation dimension so regressions are ranked by wasted spend, not just by raw failure count.
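The first recommendation can be sketched as a cheap preflight gate that rejects doomed runs before the plan/tool loop spends anything. This is a hypothetical sketch: the required fields and the 3-hour delay threshold are illustrative assumptions, not rules extracted from the analyzed threads.

```python
# Hypothetical early gate placed before the expensive tool loop.
# The required fields and the 3-hour threshold are illustrative only.
REQUIRED_FIELDS = {"flight_number", "flight_date", "delay_hours"}

def preflight(claim: dict) -> tuple[bool, str]:
    """Decide, before spending any tool calls, whether the run can succeed."""
    missing = REQUIRED_FIELDS - claim.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    if claim["delay_hours"] < 3:
        return False, "delay below the assumed 3-hour compensation threshold"
    return True, "ok"

def run_workflow(claim: dict) -> str:
    ok, reason = preflight(claim)
    if not ok:
        # Early exit: the run fails before the 50-iteration plan/tool loop.
        return f"rejected early: {reason}"
    # ... expensive plan -> tool loop would start here ...
    return "proceeding to tool loop"

print(run_workflow({"flight_number": "BA123"}))
# rejected early: missing fields: ['delay_hours', 'flight_date']
```

Comparing passing and failing traces would tell you which fields actually predict failure; the gate's checklist should come from that diff, not from guesswork.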
Evidence (1)
Step: Imported failing outcome
Imported benchmark outcome ended with failure
Tool misuse

Tool loops are dense enough to need batching or early stopping

The `tool` step dominates repeated tool activity, so the workflow is likely making incremental calls where batching, caching, or tighter stop conditions would reduce churn.

$0.13 projected / month saved (sample $0.04/mo)
medium risk · medium confidence
Recommended changes
  • Batch or cache repeated tool calls where the inputs overlap across adjacent steps.
  • Add a per-run tool budget and stop condition so failed runs do not keep exploring after the likely answer is already unreachable.
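Both recommended changes can be sketched as a thin wrapper around the raw tool: a cache absorbs repeated calls with identical inputs, and a hard per-run budget stops exploration early. The tool interface and the budget of 20 calls are illustrative assumptions, not values taken from the analyzed threads.

```python
# Illustrative wrapper: cache for repeated tool inputs + per-run call budget.
TOOL_BUDGET = 20  # assumed cap; the observed loop iterated up to 50 times

class BudgetExceeded(Exception):
    pass

class ToolRunner:
    def __init__(self, tool, budget=TOOL_BUDGET):
        self.tool = tool
        self.budget = budget
        self.calls = 0
        self.cache = {}

    def __call__(self, *args):
        if args in self.cache:          # repeated input: served without a new call
            return self.cache[args]
        if self.calls >= self.budget:   # stop condition: give up instead of churning
            raise BudgetExceeded(f"run exceeded {self.budget} tool calls")
        self.calls += 1
        result = self.tool(*args)
        self.cache[args] = result
        return result

# Usage: wrap the raw tool once per run.
runner = ToolRunner(lambda q: f"result for {q}")
runner("flight status BA123")
runner("flight status BA123")  # cache hit, no extra call
print(runner.calls)  # 1
```

Batching calls whose inputs overlap across adjacent steps is the same idea one level up: collect the pending queries, issue one call, and fan the results back out.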

Prompt composition

Input token breakdown

16.3K tokens total
user: 16.3K · 100%

Tool signals

How this workflow runs

Retries: 0. How often steps had to re-run.

Delegated subtasks: 0. Tasks handed off to sub-agents during the workflow.

Documents retrieved: 0. Total documents pulled in across all tool calls.

Median step latency: 0 ms. Typical time each step takes to finish.

Stage order

Typical workflow path

7 steps
  1. Respond (Kimi-K2)
     Respond step in the workflow.
     Latency unavailable · 313 tok avg
  2. Loop ×50
     Loop: plan → tool, repeats 50 times.
     Latency unavailable · 13.4K tok avg
     1. Plan (Kimi-K2)
        Plan the next steps in the workflow.
        Latency unavailable · 116 tok avg
     2. Tool (tool)
        Tool step in the workflow.
        Latency unavailable · 152 tok avg
  3. Respond (Kimi-K2)
     Respond step in the workflow.
     Latency unavailable · 313 tok avg
  4. Loop ×15
     Loop: plan → tool, repeats 15 times.
     Latency unavailable · 4K tok avg
     1. Plan (Kimi-K2)
        Plan the next steps in the workflow.
        Latency unavailable · 116 tok avg
     2. Tool (tool)
        Tool step in the workflow.
        Latency unavailable · 152 tok avg
  5. Respond (Kimi-K2)
     Respond step in the workflow.
     Latency unavailable · 313 tok avg
  6. Tool (tool)
     Tool step in the workflow.
     Latency unavailable · 152 tok avg
  7. Verify
     Verify step in the workflow.
     Latency unavailable · 119 tok avg

Threads

Pick a thread to see what happened

19 threads
Cost per run: $0.05
Monthly runs: 1
Monthly cost: $0.05
Operation path: 35 named tools/models
  1. Respond · claude-opus-4-5 · 7 tok · $0.00
  2. Respond · claude-opus-4-5 · 130 tok · $0.00
  3. Plan · claude-opus-4-5 · 53 tok · $0.00
  4. Tool · run tool · 300 tok
  5. Plan · claude-opus-4-5 · 108 tok · $0.00
  6. Tool · run tool · 202 tok
  7. Plan · claude-opus-4-5 · 77 tok · $0.00
  8. Tool · run tool · 219 tok
  9. Plan · claude-opus-4-5 · 180 tok · $0.00
  10. Tool · run tool · 14 tok
  11. Plan · claude-opus-4-5 · 113 tok · $0.00
  12. Tool · run tool · 1.3K tok
  13. Plan · claude-opus-4-5 · 102 tok · $0.00
  14. Tool · run tool · 29 tok
  15. Respond · claude-opus-4-5 · 294 tok · $0.01
  16. Plan · claude-opus-4-5 · 108 tok · $0.00
  17. Tool · run tool · 6 tok
  18. Respond · claude-opus-4-5 · 216 tok · $0.00
  19. Plan · claude-opus-4-5 · 171 tok · $0.00
  20. Tool · run tool · 11 tok
  21. Respond · claude-opus-4-5 · 290 tok · $0.01
  22. Plan · claude-opus-4-5 · 41 tok · $0.00
  23. Tool · run tool · 8 tok
  24. Plan · claude-opus-4-5 · 38 tok · $0.00
  25. Tool · run tool · 5 tok
  26. Plan · claude-opus-4-5 · 138 tok · $0.00
  27. Tool · run tool · 6 tok
  28. Plan · claude-opus-4-5 · 56 tok · $0.00
  29. Tool · run tool · 56 tok
  30. Plan · claude-opus-4-5 · 58 tok · $0.00
  31. Tool · run tool · 15 tok
  32. Plan · claude-opus-4-5 · 138 tok · $0.00
  33. Tool · run tool · 41 tok
  34. Respond · claude-opus-4-5 · 160 tok · $0.00
  35. Tool · run dataset_evaluation · 100 tok
  36. Verify · imported benchmark outcome · 124 tok

The old plan/tool string was the normalized span order. The rows above use imported operation records; where a tool name is missing, the source provided only the normalized stage and operation label.

Snapshots
full_transcript · Snapshot 1 · imported

Assistant: Hi! How can I help you today?
User: Hi, I’m calling because I’m really frustrated about my last flight—it was delayed for hours and it messed up all my plans. Can you help me with this?…