
Spec to Production: My AI Workflow Skill

Ship production-quality code with AI coding agents. A 10-action workflow skill: focus, plan, spec-review, spike, ship, fix, review, done, drop, workflow. Agent-agnostic.

I've been shipping production-quality features faster than ever. Not because I write more code—but because I barely write any.

The difference isn't the AI model. It's the process around it.

The Problem

Most developers use AI coding agents like a chat window. Ask for code, paste it in, ask for fixes, lose context, repeat. It ships—sometimes. But the quality is inconsistent, the process is chaotic, and nothing carries over between sessions.

Without a workflow:
  • No clear definition of done
  • No systematic code review
  • No quality gates unless you remember
  • No way to resume after context loss
  • Learnings disappear every session

With the workflow skill:
  • Spec defines done before code exists
  • 9-perspective code review, automatic
  • 6 quality gates: lint, typecheck, build, test, E2E, TDD proof
  • E2E-first testing with mock avoidance hierarchy
  • Dedicated scientific debugging with prevention rules

I wanted a system where I define what to build, and the agent handles the entire implementation loop until the code is production-ready.

So I built one.

The Workflow Skill

A structured development lifecycle for AI coding agents. 10 actions, from idea to shipped code:

| Action | What It Does |
| --- | --- |
| focus | Scan the codebase for what to work on next, prioritized by impact |
| plan | Create a spec with acceptance criteria and codebase impact analysis |
| spec-review | Adversarial challenge of your spec before implementation starts |
| spike | Time-boxed exploration for unknowns, go/no-go decision at the end |
| ship | Implement, test, review—loop until all ACs pass and gates are green |
| fix | Scientific debugging with hypothesis-driven investigation and anti-cascade TDD |
| review | Multi-perspective code review (9 perspectives, risk-scaled) |
| done | Final validation, retro, memory update, archive |
| drop | Abandon gracefully, preserve learnings for next time |
| workflow | Show current state and suggest next action |

The core insight: the spec is the source of truth. Everything flows from it—implementation, testing, review, validation.

The Full Dev Workflow

1. Focus: What Should I Work On?

Don't know what to build next? Ask the codebase:

~/my-project
~ /workflow what should I focus on?

The agent dispatches parallel scans across your entire project—checking code quality, missing tests, security gaps, performance issues, accessibility problems. Results come back scored by impact, effort, and risk-if-ignored.

It produces a prioritized task list and creates specs in specs/backlog/ for the top items.

Instead of deciding what to work on, you let the codebase tell you.

2. Plan: Define What "Done" Means

You don't start with code. You start with a spec.

~/my-project
~ /workflow plan add rate limiting to the API

The agent reads your codebase first—existing patterns, related code, potential conflicts. Then it writes a spec with:

  • User journey: Who does what, and why (ACTOR/GOAL/PRECONDITION/POSTCONDITION)
  • Acceptance criteria (ACs): GIVEN/WHEN/THEN format—testable, objective
  • Scope items: Exactly what gets built, traceable to ACs
  • Codebase impact: Files affected, dependencies, breaking changes
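
To make the format concrete, here is a hypothetical spec fragment for the rate-limiting example. The file layout, field names, and limits below are illustrative, not the skill's exact template:

```markdown
# Spec: API rate limiting

## User journey
ACTOR: API consumer
GOAL: receive clear feedback when exceeding the request quota
PRECONDITION: consumer authenticates with a valid API key
POSTCONDITION: requests over the quota are rejected without affecting other consumers

## Acceptance criteria
AC1: GIVEN a user sends 100 requests in 1 minute,
     WHEN they send request 101,
     THEN they receive a 429 status with a Retry-After header.

## Scope
- S1: per-key request counter (AC1)
- S2: 429 response with Retry-After header (AC1)
```

Each scope item names the ACs it fulfills, which is what makes traceability checkable.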

The spec passes through 13 validation rules before implementation starts. Too big (>8 hours)? It gets split. Vague acceptance criteria? The agent pushes back.

No spec passes the gate if "done" isn't clearly defined.

3. Spec-Review: Challenge Before You Build

Before writing a single line of code, challenge the plan:

~/my-project
~ /workflow spec-review @specs/active/rate-limiting.md

The agent runs adversarial analysis on your spec:

  • Missing edge cases
  • Scope that doesn't trace back to acceptance criteria
  • Assumptions that could be wrong
  • Blind spots in the codebase impact analysis

Catch problems in the spec—not in production.

4. Spike: Explore Before You Commit

Not sure about the approach? Run a time-boxed exploration:

~/my-project
~ /workflow spike Redis vs in-memory for rate limiting

Hard time limit (default 1h, max 4h). The output is a GO/NO-GO decision—not code. Spike code is throwaway, deleted before proceeding. The learning gets logged.

This prevents committing to an approach before you know it works.

5. Ship: The Implementation Loop

Once the spec is solid:

~/my-project
~ /workflow ship @specs/active/rate-limiting.md

The agent enters a loop: implement, test, review—repeating until every acceptance criterion passes and the quality gates are green.

You're not pair programming. You're delegating. The agent handles the loop. You review the output.

Two modes adapt to context:

  • One-shot: No spec needed for quick fixes—creates inline spec, implements, validates
  • Loop: Active spec exists—iterates until all acceptance criteria pass

Testing follows an E2E-first protocol. Every acceptance criterion maps to an E2E test by default. Unit tests are reserved for pure functions only. A mock avoidance hierarchy enforces real systems over test doubles:

| Priority | Strategy |
| --- | --- |
| 1 (best) | Real system (test DB, sandbox API) |
| 2 | Docker/container |
| 3 | In-memory equivalent |
| 4 (last resort) | Mock (only for third-party APIs without sandbox) |

Mocking your own code—services, database, HTTP endpoints, auth—is never allowed. Every mock requires a justification comment explaining why a real system isn't available.
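
In practice the convention might look like this. Everything here is invented for illustration (the `AcmePay` provider, the helper names, the `MOCK-JUSTIFICATION` comment format); it shows priority 3 for your own persistence and priority 4 for a third-party client:

```typescript
// Illustrative test setup following the mock avoidance hierarchy.
// Names (createUser, acmePay) are hypothetical, not from the skill.

// Priority 3: in-memory equivalent of our own database -- our code runs for real.
const db = new Map<string, { email: string }>();
function createUser(id: string, email: string): { email: string } {
  db.set(id, { email }); // our own persistence logic is never mocked
  return db.get(id)!;
}

// Priority 4 (last resort): third-party payment API.
// MOCK-JUSTIFICATION: AcmePay provides no sandbox environment, so its
// client is the only test double in this suite.
const acmePay = {
  charge: async (cents: number) => ({ status: "ok", cents }),
};

async function run(): Promise<void> {
  const user = createUser("u1", "a@example.com");
  const receipt = await acmePay.charge(500);
  console.log(user.email, receipt.status);
}
run();
```

The justification comment is the enforceable part: a reviewer (human or agent) can grep for mocks and reject any without one.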

TDD is enforced structurally: every test must fail first (RED_CONFIRMED), then pass after implementation (GREEN_CONFIRMED). The skill tracks this in an E2E scenario registry—no shortcuts.
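
The registry's actual file format isn't shown in this post, but the invariant it enforces can be sketched: every GREEN_CONFIRMED needs an earlier RED_CONFIRMED for the same scenario. A hypothetical checker (field names are assumptions):

```typescript
// Hypothetical E2E scenario registry entry; field names are illustrative.
type ScenarioEntry = {
  ac: string;               // acceptance criterion ID, e.g. "AC1"
  redConfirmedAt?: number;  // when the test was observed failing
  greenConfirmedAt?: number; // when the test was observed passing
};

// TDD proof gate: a scenario only counts as proven if its test failed
// before it passed -- GREEN without a prior RED is a violation.
function tddViolations(registry: ScenarioEntry[]): string[] {
  return registry
    .filter(
      (e) =>
        e.greenConfirmedAt !== undefined &&
        (e.redConfirmedAt === undefined || e.redConfirmedAt >= e.greenConfirmedAt),
    )
    .map((e) => e.ac);
}

const registry: ScenarioEntry[] = [
  { ac: "AC1", redConfirmedAt: 1, greenConfirmedAt: 2 }, // proven: red before green
  { ac: "AC2", greenConfirmedAt: 3 },                    // green with no red on record
];
console.log(tddViolations(registry)); // ["AC2"]
```

Because the check is a pure function of the registry, it can run as a gate on every full pass rather than relying on developer discipline.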

6. Fix: Scientific Debugging

Bug fixes have their own dedicated action:

~/my-project
~ /workflow fix users can't reset their password

The agent classifies the bug (simple, complex, or highly complex) and adapts its investigation:

  • Simple bugs (clear cause, single file): skip investigation, go straight to fix
  • Complex bugs: sequential hypothesis testing, highest probability first
  • Highly complex bugs: parallel investigation from multiple perspectives—code path tracer, state inspector, history investigator, concurrency analyzer

Every root cause must pass a 3-point validation before any code changes:

  1. Explains: Does it account for ALL observed symptoms?
  2. Predicts: Can you name the exact change that will fix it?
  3. Reproduces: Can you make the bug appear and disappear at will?

Then Anti-Cascade TDD kicks in:

  1. Baseline: Record the full test suite state
  2. Red: Write an E2E regression test that fails (RED_CONFIRMED)
  3. Green: Implement the fix, regression test passes (GREEN_CONFIRMED)
  4. Diff: Re-run the full suite—compare to baseline. Any new failures = the fix introduced regressions
  5. Scan: Check the codebase for sibling bugs (same pattern elsewhere)
  6. Learn: Update the spec if related, propose prevention rules for your agent config
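
The DIFF step above reduces to a set comparison between baseline failures and post-fix failures. A minimal sketch (function and test names are invented):

```typescript
// Anti-Cascade DIFF: any test failing now that was not failing at baseline
// means the fix introduced a regression.
function newFailures(baseline: string[], current: string[]): string[] {
  const before = new Set(baseline);
  return current.filter((test) => !before.has(test));
}

// Baseline had one pre-existing failure; after the fix, a second test fails.
console.log(newFailures(["flaky.spec"], ["flaky.spec", "auth.spec"])); // ["auth.spec"]
```

Note that pre-existing failures are tolerated by the diff; only failures the fix *introduced* block completion.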

The LEARN phase is what makes this compound: after fixing a bug, the agent proposes at most two prevention rules so the same class of error doesn't recur. Rules are always proposed—never auto-applied—and target your agent's config files.

7. Quality Gates: Automatic Validation

Every edit batch triggers a quick pass:

| Gate | Scope |
| --- | --- |
| Lint | Changed files |
| Typecheck | Changed files |

Before marking done, a full pass runs all 6 gates:

| Gate | Scope |
| --- | --- |
| Lint | Changed files |
| Typecheck | Full project |
| Build | Full project |
| Test | Related tests |
| E2E registry | All acceptance criteria mapped to E2E tests |
| TDD proof | Every GREEN_CONFIRMED has prior RED_CONFIRMED |

Coverage runs as an advisory gate—warns on drops over 5% but never blocks.

The skill auto-detects your tooling. Biome or ESLint? Vitest or Jest? pnpm, yarn, or bun? It reads your config files and figures it out.
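
The skill's detection rules aren't published in this post, but the idea can be sketched as a lookup from marker files to tools. The mappings below are common-convention assumptions, not the skill's actual logic:

```typescript
// Hypothetical tooling detection from marker files in the project root.

function detectPackageManager(files: string[]): string {
  if (files.includes("pnpm-lock.yaml")) return "pnpm";
  if (files.includes("yarn.lock")) return "yarn";
  if (files.includes("bun.lockb")) return "bun";
  return "npm"; // fall back to the default
}

function detectLinter(files: string[]): string {
  if (files.includes("biome.json")) return "biome";
  if (files.some((f) => f.startsWith(".eslintrc")) || files.includes("eslint.config.js"))
    return "eslint";
  return "none";
}

const root = ["pnpm-lock.yaml", "biome.json", "package.json"];
console.log(detectPackageManager(root)); // pnpm
console.log(detectLinter(root)); // biome
```

The same pattern extends to test runners (vitest.config.* vs jest.config.*) and build tools; the point is that detection reads files you already have, so no extra config is needed.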

No special setup. Production-quality validation from day one.

8. Review: Nine Perspectives, Scaled by Risk

Code review runs automatically during the ship loop—and on demand:

~/my-project
~ /workflow review the code

| Perspective | Question | When |
| --- | --- | --- |
| Correctness | Does it do the right thing? | Always |
| Security | Is it safe? | Always |
| Reliability | Does it handle failure? | Always |
| Performance | Is it fast enough? | Always |
| DX | Is it pleasant to maintain? | Always |
| Scalability | Shared state, multi-instance? | Conditional |
| Observability | Can you debug in production? | Conditional |
| Testability | Complex logic covered? | Conditional |
| Accessibility | Keyboard, screen reader, contrast? | Conditional |

Review depth scales with risk:

| Scope | Depth |
| --- | --- |
| 1-2 files, low risk | Quick (5 perspectives) |
| 3-5 files | Standard |
| 6+ files or high risk | Deep (all 9) |
| Deploy context detected | Production mode |

9. Done: Validate, Learn, Archive

When everything passes:

~/my-project
~ /workflow done @specs/active/rate-limiting.md

Final validation:

  • All acceptance criteria pass
  • Quality gates green
  • New behavior has new tests
  • No blocking review issues

Then a retro runs automatically:

  • Estimate vs actual time
  • What worked, what didn't
  • Patterns learned

The agent proposes memory updates—coding patterns, project conventions, anti-patterns discovered. These get written to your agent config so the next session starts smarter.

Spec archives to specs/shipped/. History logged. Knowledge retained.

10. Drop: Abandon Without Losing

Sometimes a feature doesn't work out:

~/my-project
~ /workflow drop @specs/active/rate-limiting.md

Captures why it was abandoned. Preserves reusable pieces. Documents "if revisited" lessons. Archives to specs/dropped/.

No silent abandonment. Every dropped feature teaches the next one.

Why This Builds Better Software

Spec-Driven Quality

The spec defines what production-ready means before code exists. Acceptance criteria are testable—each one maps to an E2E test with TDD proof (RED_CONFIRMED before GREEN_CONFIRMED). Scope items trace to ACs. The agent validates against the spec—not against vibes.

This means quality is structural, not accidental.

Same-Day Shipping

Everything is scoped to what ships today. Features over 8 hours get split. The tiering system enforces it:

| Size | Ceremony |
| --- | --- |
| < 5 LOC | None—just do it |
| < 30 LOC | Inline comment spec |
| < 100 LOC | Mini template |
| 100+ LOC | Full spec with state machines |
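
The tiering amounts to a simple threshold lookup. A sketch using the thresholds from the table (the function name is invented):

```typescript
// Map an estimated change size (lines of code) to the required ceremony,
// using the thresholds from the tiering table above.
function ceremonyFor(loc: number): string {
  if (loc < 5) return "none";
  if (loc < 30) return "inline comment spec";
  if (loc < 100) return "mini template";
  return "full spec with state machines";
}

console.log(ceremonyFor(3));   // none
console.log(ceremonyFor(250)); // full spec with state machines
```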

No two-week sprints. No ceremony overhead. Ideas ship the same day they're conceived.

Human Controls Deployment

The skill never runs git push or deploy commands. The agent handles code quality. You handle production.

This separation matters for trust. I delegate the build loop because I know the agent won't touch anything irreversible without asking.

Session Resilience

Context gets lost. It happens. The spec file enables perfect resume:

~/my-project
~ /workflow ship @specs/active/rate-limiting.md

The agent reads the spec, checks current state, picks up exactly where it left off. No "remind me what we were building" moments.

Agent-Agnostic

Not locked to any single tool. The same skill works with Claude Code, Codex, OpenCode, Cursor, Windsurf, Aider—any agent that reads SKILL.md files.

The AI tooling landscape shifts fast. The workflow stays portable.

Get Started

The workflow skill is available at skills.sh/bntvllnt/agent-skills/workflow:

~/my-project
~ npx skills add bntvllnt/agent-skills --skill workflow

Or copy directly from the repo:

~/my-project
~ git clone https://github.com/bntvllnt/agent-skills
~ cp -r agent-skills/workflow .claude/skills/

The Flow

No config required. The skill auto-detects your project's tooling.

Trade-offs

This isn't magic. Real trade-offs:

Overhead for small changes: A one-line typo fix doesn't need a spec. The skill detects trivial changes and skips ceremony—but sometimes you just want to edit and commit.

Learning curve: The spec format and actions take time to internalize. First week feels slower. After that, faster than before.

Agent quality varies: The loop is only as good as the agent's implementation. Complex algorithms and domain-specific code still need careful human review.

Token usage: Multi-perspective review and iterative fixing consume tokens. Worth it for production code. Overkill for throwaway scripts.

Why I Built This

Momentum matters more than perfection.

I used to lose half my energy to process—where was I? What was I building? Did I test that edge case? Now the spec holds all state. Quality gates run automatically. The agent reviews its own code from 9 perspectives, enforces E2E-first TDD, and even learns from bugs to prevent the same class of error from recurring.

The result: better software, shipped faster, from day one.

Ship → observe → adjust. Every day.

If that resonates, the skill is at skills.sh/bntvllnt/agent-skills/workflow. The source is on GitHub.

Glossary

Acceptance Criteria (ACs) — Testable conditions that define when a feature is "done." Written in GIVEN/WHEN/THEN format. Example: GIVEN a user sends 100 requests in 1 minute, WHEN they send request 101, THEN they receive a 429 status with a Retry-After header. Every scope item traces back to at least one AC. If you can't write an AC for it, it's not in scope.

Scope Items — The specific implementation tasks that fulfill ACs. Each scope item maps to one or more ACs, creating bidirectional traceability.

Quality Gates — Automated validation checks that must pass before code is considered done. Quick pass (lint, typecheck on changed files) runs after each edit. Full pass (lint, typecheck, build, test, E2E registry, TDD proof) runs before completion. Coverage is advisory.

E2E-First Testing — Default to end-to-end tests. Unit tests are reserved for pure functions with no I/O or side effects. A mock avoidance hierarchy enforces real systems (test DB, sandbox API) over test doubles. Mocking your own code is never allowed.

Anti-Cascade TDD — Bug fix protocol that prevents fixes from introducing new failures. Baseline the full test suite, write a failing regression test (RED), implement the fix (GREEN), then compare the full suite to baseline (DIFF). Any new failures mean the fix introduced regressions.

SKILL.md — The standard file format for defining agent skills. Any AI coding agent that reads SKILL.md files can load and execute the workflow skill.




BNTVLLNT

> I build AI-native software execution systems