Software Is Dying. AI Agents Are Killing It.
Traditional software is being hollowed out by AI agents. Not hype—structural economics. What survives, what dies, and what to do about it.
Software is being hollowed out from the middle. Not all of it. Not tomorrow. But the direction is clear, and the mechanism is structural—not hype.
I build with AI agents daily. Not as a curiosity—as my primary development method. What I've seen from the inside is different from what most articles describe from the outside.
The Mechanism
The disruption isn't "AI replaces everything." It's a specific structural attack on the per-seat licensing model that has been the economic foundation of SaaS since the early 2000s.
Three forces at work:
Seat compression. A single AI agent performs tasks that previously required dozens of software licenses. One agent replaces 10-50 seats of customer relationship tools, project management boards, support desks. The UI layer—where most SaaS value was captured—becomes unnecessary.
Workflow replication. AI agents don't use software. They replicate what the software does. An agent doesn't need a support desk interface to resolve a ticket. It needs access to the knowledge base and the ability to send a response. The dashboard is overhead.
Outcome-based economics. The pricing shift moves from "access to a tool" to "work completed." Major vendors now charge per autonomous action—a fundamentally different model from per-seat monthly fees.
| Per-seat model | Outcome-based model |
|---|---|
| Pay for access, whether used or not | Pay for work completed |
| Revenue scales with headcount | Revenue scales with output |
| UI is the value layer | Result is the value layer |
| Vendor builds features to justify renewals | Agent executes the workflow directly |
| Lock-in through workflow dependency | Portability through open protocols |
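The economic difference is easy to see with a toy break-even calculation. All numbers here are illustrative, not vendor quotes:

```python
# Toy comparison of the two pricing models described above.
# Figures are made up for illustration only.

def per_seat_cost(seats: int, price_per_seat: float) -> float:
    """Pay for access, whether used or not."""
    return seats * price_per_seat

def per_outcome_cost(actions: int, price_per_action: float) -> float:
    """Pay only for work completed."""
    return actions * price_per_action

# 30 seats at $50/month vs. an agent billing 2,000 actions at $0.25 each.
seat_bill = per_seat_cost(30, 50.0)          # 1500.0 per month
outcome_bill = per_outcome_cost(2000, 0.25)  # 500.0 per month
```

The point is not the specific numbers but the shape: seat revenue is fixed by headcount, outcome revenue floats with actual work done.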
The numbers are not abstract. ~$2 trillion in software market cap evaporated in early 2026. Major software stocks dropped 25%+. Analysts project that by 2028, pure seat-based pricing will be obsolete, with 70% of vendors refactoring pricing around consumption or outcomes.
This isn't a market correction. It's a repricing based on structural change.
What's Actually Dying
Not all software. The attack targets the application layer—the UIs, dashboards, and workflow tools that sit between humans and data.
The most vulnerable software shares four traits: high task repetition, low contextual knowledge requirements, standardized processes, and exposed APIs. Support ticketing, invoice processing, task boards, list building, time-entry approvals—these are the first casualties.
The pattern from previous waves confirms this. Cloud didn't kill on-prem—it transformed delivery. Mobile didn't kill desktop—it dominated consumption while desktops kept productivity. SaaS didn't kill installed software—it moved it to the browser.
AI agents are different in one critical way: previous shifts changed the delivery model or the interface. This one attacks the need for the software itself. An agent doesn't need a different version of your project management tool—it might not need the tool at all.
But previous transitions took 5-10 years. This one is moving faster: anyone with deep systems understanding and agent fluency can rebuild entire applications right now. Generalist developers who understand how software works across all layers have the edge. Hybrid models will dominate the transition, and the "killed" technology always survives in some form. But the window to adapt is shorter than in any previous shift.
What Survives
The agent revolution attacks the application layer. It does not attack—and actually increases demand for—the layers beneath it.
| Layer | Why It Survives |
|---|---|
| Protocols | Agents need standards to communicate (MCP, A2A, HTTP, gRPC) |
| Databases | Agents need state, memory, truth—persistent data stores grow |
| Compute | Agents consume massive compute—cloud, GPUs, edge infrastructure expand |
| Identity and auth | Agents need permissions, audit trails, zero-trust—security grows |
| Observability | Agent behavior must be monitored, debugged, traced—new category |
| Developer tools | Agents need environments, CI/CD, testing pipelines—tooling persists |
The insight: interfaces contract, infrastructure expands. Every platform shift kills UI layers and grows the plumbing underneath. The application surface area shrinks; the complexity beneath gets denser.
Software whose primary value is proprietary data has a defensible moat. Software whose value is workflow orchestration (UI + logic) does not. The data layer survives. The presentation layer compresses.
The Protocol Moment
Two protocols are emerging as the foundational standards—the HTTP of agents.
Model Context Protocol (MCP) standardizes how AI agents connect to external tools, databases, and APIs. Launched in late 2024, adopted across the industry. Thousands of MCP servers available, donated to an open foundation.
Agent-to-Agent Protocol (A2A) enables agent-to-agent communication, task delegation, and capability discovery. "Agent Cards" (JSON format) let agents discover each other's capabilities. Open source, backed by a broad coalition of industry organizations.
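For a sense of what an Agent Card looks like, here is an illustrative one built as a Python dict and round-tripped through JSON. The field names follow the spirit of the A2A spec's published examples; treat the exact keys as an approximation rather than a normative schema:

```python
# Illustrative Agent Card: the JSON document an A2A agent publishes so
# other agents can discover its capabilities. Keys are approximate.

import json

agent_card = {
    "name": "invoice-processor",
    "description": "Extracts, validates, and files vendor invoices.",
    "url": "https://agents.example.com/invoice-processor",  # hypothetical
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "extract-invoice",
            "name": "Extract invoice fields",
            "description": "Parses totals, dates, and line items.",
        }
    ],
}

# Discovery is just fetching and parsing this document.
discovered = json.loads(json.dumps(agent_card))
```

No UI, no signup flow: capability discovery reduces to reading a JSON document at a well-known location.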
The stack is crystallizing.
A new primitive is emerging alongside protocols: agent skills—composable, open-source capabilities that encode entire workflows into reusable packages. Skills are to agents what libraries are to code: building blocks that compress complex multi-step processes into single invocations. The skill ecosystem will define how agents acquire capabilities at scale.
Open standards are winning. Proprietary agent ecosystems will get marginalized—same playbook as the internet. Build on the open protocols.
Software That Builds Software
Concrete examples already exist.
OpenAI built an internal product with zero manually written code. 1,500 pull requests, all generated by coding agents. The team's job shifted from writing code to "designing environments, specifying intent, and building feedback loops."
Sakana AI built the Darwin Gödel Machine—a self-modifying agent that iteratively rewrites its own codebase to improve its coding ability. Performance more than doubled on industry benchmarks. Improvements in one programming language transferred to unrelated languages—suggesting discovery of fundamental design principles, not task-specific tricks.
I've seen this firsthand. I built a workflow where I define what needs to happen, and AI agents handle the entire implementation loop—writing code, running tests, reviewing from multiple perspectives, fixing issues, iterating until production-ready. The agent doesn't need my hands on a keyboard. It needs a clear spec and access to the codebase.
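The loop described above can be sketched abstractly. `generate_patch` and `run_tests` are placeholders for real agent calls and CI runs; the fakes below just simulate an agent that succeeds on its second attempt:

```python
# Minimal sketch of the spec -> implement -> test -> fix loop.
# The callables are placeholders, not a real agent framework.

def run_loop(spec, generate_patch, run_tests, max_iters=5):
    """Iterate until the test gate passes or the budget runs out."""
    feedback = None
    for attempt in range(1, max_iters + 1):
        patch = generate_patch(spec, feedback)
        ok, feedback = run_tests(patch)
        if ok:
            return {"status": "ready", "attempts": attempt}
    return {"status": "needs_human", "attempts": max_iters}

# Fake agent/test pair: the first patch fails, the revision passes.
def fake_patch(spec, feedback):
    return "v2" if feedback else "v1"

def fake_tests(patch):
    return (True, None) if patch == "v2" else (False, "assertion failed")

result = run_loop("add login endpoint", fake_patch, fake_tests)
```

The human contribution is the spec and the test gate; the iteration itself needs no hands on the keyboard.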
When my laptop died, I moved to a tablet with SSH access to remote servers. Three days later, I beat my all-time commit record—from a tablet. The device became irrelevant. The compute moved to the server. The intelligence moved to the agents. I just pointed them in the right direction.
Andrej Karpathy's framework captures this: Software 1.0 was explicit instructions. Software 2.0 was neural net weights (learned behavior). Software 3.0 is natural language—prompts are the code. The transition from 2.0 to 3.0 is underway.
The Identity Shift
The developer role is transforming. Not disappearing—evolving through three stages:
- Autocomplete era — AI as productivity tool for mundane tasks
- Conductor phase — humans guide single sessions iteratively
- Orchestrator model — humans manage multiple autonomous agents in parallel, reviewing outputs asynchronously
The third stage is underway. The competitive advantage is no longer writing code. It's decomposing problems, selecting the right approach, and validating outputs.
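The orchestrator model is easiest to see as fan-out/fan-in. This sketch uses simulated agents; in practice each `agent` call would be a long-running autonomous session whose output lands in a review queue:

```python
# Sketch of the orchestrator model: several agents run in parallel,
# and a human reviews the collected outputs asynchronously.

import asyncio

async def agent(task: str) -> dict:
    """Stand-in for an autonomous agent working on one task."""
    await asyncio.sleep(0)  # yield control, as real work would
    return {"task": task, "output": f"done: {task}"}

async def orchestrate(tasks: list[str]) -> list[dict]:
    """Fan tasks out to agents; gather outputs for human review."""
    return await asyncio.gather(*(agent(t) for t in tasks))

results = asyncio.run(
    orchestrate(["fix-auth-bug", "write-migration", "update-docs"])
)
```

The human's unit of work shifts from the code change to the review of many parallel changes.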
| Declining | Rising |
|---|---|
| Direct code writing | Problem decomposition and task design |
| Line-by-line code review | Output validation through deterministic tests |
| Syntax memorization | Systems-level thinking and architecture |
| Component-focused expertise | Security oversight and governance |
| Manual testing execution | Budget management (compute costs are real) |
Previously, the ability to produce functional code could land you an entry-level job. With code generation commoditized, that's table stakes. What differentiates: decomposing ambiguous problems, validating agent output against real-world constraints, and architecting systems that agents can effectively operate. AI amplifies strong engineering. It exposes weak engineering. There is no middle ground.
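"Output validation through deterministic tests" means agent output is never trusted directly; it must pass a fixed gate first. A minimal sketch, with made-up checks:

```python
# Sketch of a deterministic validation gate for agent output.
# The specific checks are illustrative, not a recommended set.

def validate(output: str, gates: list) -> tuple[bool, list[str]]:
    """Run every deterministic check; collect the names of failures."""
    failures = [name for name, check in gates if not check(output)]
    return (len(failures) == 0, failures)

gates = [
    ("non_empty", lambda s: bool(s.strip())),
    ("no_todo_markers", lambda s: "TODO" not in s),
    ("under_length_budget", lambda s: len(s) < 500),
]

ok, failures = validate("def add(a, b):\n    return a + b\n", gates)
```

Because the gate is deterministic, the same output always gets the same verdict, which is what makes asynchronous review of agent work tractable.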
What senior engineers differentiate on: product thinking, architecture applied to agentic systems, advanced testing strategies, and the judgment to know when the agent is wrong. This is the most important shift in the article. Everything else is economics. This is identity.
The Counter-Arguments (Strongest Ones)
The "agents kill software" thesis is directionally correct but temporally overhyped. The honest counter-arguments:
Reliability is not solved. General-purpose autonomous agents achieve ~14% success rate on complex multi-step tasks. Humans achieve 78% on the same tasks. Hallucinations are a permanent property of language models—not a bug to be fixed. This gap is real.
Cost is prohibitive at scale. Running agents around the clock costs real money. Proof-of-concept phases alone range from hundreds of thousands to millions of dollars. For many workflows, the agent is more expensive than the human seats it replaces.
The "agent" label is meaningless marketing. The term gets slapped on everything from simple scripts to sophisticated workflows. The promises made—replacing the white-collar workforce, age of abundance—have not materialized at the scale predicted.
AI regresses to the mean. AI produces average solutions. If a company wants novel, differentiated experiences, it won't find them by replacing developers with agents. Creativity and taste remain human advantages.
Enterprise trust doesn't exist yet. Regulated industries cannot deploy autonomous agents without explainability, audit trails, and liability frameworks that do not yet exist. Governance infrastructure is years behind capability.
These counter-arguments are real. They don't invalidate the direction—they calibrate the timeline.
Principles That Will Age Well
Specific tools will change. These principles won't:
Interfaces contract, infrastructure expands
Bet on protocols, databases, identity, and compute—not on specific agent products. The application layer compresses in every platform shift. The plumbing underneath gets more complex and more valuable.
Standards win, proprietary locks lose
The agent ecosystem is following the internet's playbook: open protocols create massive ecosystems. Build on standards, not platforms. Agent skills—composable, open-source workflow packages—are the new building blocks. The ecosystem that wins will be the one with the richest skill library.
Data moats beat feature moats
Software value migrates from "what can the tool do" to "what data does the tool have." Companies with proprietary, high-quality, domain-specific data are defensible. Feature-based differentiation is not.
Outcome-based everything
The pricing model shift from per-seat to per-outcome is permanent. It will extend beyond software into every service industry. Charge for results, not access.
Human-in-the-loop is the steady state
Full autonomy is not imminent. The most productive configuration through at least 2030 is human orchestrator + AI agents. The human provides judgment, context, and accountability. The agent provides speed, breadth, and endurance. As I wrote after ten years of building web apps: momentum beats perfection. AI agents execute that principle at a velocity humans alone never could.
Three durable skills
- Systems thinking — understanding how components interact, fail, and scale
- Judgment under ambiguity — knowing when the agent is wrong, when to override, when to trust
- Problem decomposition — breaking ill-defined problems into agent-executable tasks
These skills cannot be automated because they require understanding the problem space, not just the solution space.
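Problem decomposition in particular has a concrete shape: an ambiguous goal becomes a dependency-ordered set of checkable subtasks. The tasks and structure below are illustrative only:

```python
# Toy decomposition of an ill-defined goal into agent-executable
# subtasks with explicit dependencies. Content is made up.

subtasks = [
    {"id": 1, "task": "add POST /password-reset endpoint", "depends_on": []},
    {"id": 2, "task": "generate and store a one-time token", "depends_on": [1]},
    {"id": 3, "task": "send reset email with token link", "depends_on": [2]},
    {"id": 4, "task": "add token-verification form", "depends_on": [2]},
    {"id": 5, "task": "write end-to-end test of the flow", "depends_on": [3, 4]},
]

def ready(done: set[int]) -> list[int]:
    """Subtasks whose dependencies are all complete."""
    return [t["id"] for t in subtasks
            if t["id"] not in done and all(d in done for d in t["depends_on"])]

first_wave = ready(set())   # only the root task is unblocked
second_wave = ready({1})    # completing task 1 unblocks task 2
```

The decomposition itself is the human judgment call; once it exists, each leaf is small enough for an agent to execute and for a deterministic test to verify.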
Separation of concerns endures
Event-driven architecture, fault tolerance, modularity—these principles are more important in agent-native systems, not less. AI has changed how software is built. It hasn't changed what makes software good.
Trust is the bottleneck
The fundamental constraint on agent adoption is not capability—it is trust. Explainability, auditability, and reliability are prerequisites for enterprise deployment. The companies that solve trust infrastructure will capture enormous value.
Where I Stand
I'm building an autonomous AI operating system. Not a dashboard for configuring agents—a self-operating system where a Master Agent runs, scores, and continuously improves a fleet of agents autonomously. Non-technical users interact through an adaptive canvas that surfaces what matters. The infrastructure is invisible. I'm also building open-source agent skills that encode entire development workflows into reusable capabilities.
I'm not a spectator making predictions. I'm a builder betting my time on this thesis.
What I've learned from the inside:
- AI agents don't write code for you. They enforce your standards, run your quality gates, and operate your build loop. You decide what to build. Agents ensure how it's built meets your bar.
- The first principles matter more than the specific tools. Question, delete, simplify, accelerate—automate last. Traditional software automated early. AI agents force you back to the fundamentals.
- The constraint was never the local machine. It was clarity of intent. When I define the problem precisely, agents solve it reliably. When I'm vague, they produce noise. The bottleneck is thinking, not computing.
- Minimalism applies to compute the same way it applies to possessions. Own the interface, rent the power. The less you own, the faster you move.
- Today, compute is delegated—remote servers, cloud providers, centralized infrastructure. That's the pragmatic choice right now. But the endgame should be local and private. Hardware will catch up. Models will shrink. The same agents that run on rented GPUs today will run on your own device tomorrow. Privacy matters, and centralized compute is a tradeoff, not a destination. We'll get there.
Software isn't disappearing. It's being reorganized. The UI wrappers compress. The infrastructure expands. The role shifts from code writer to systems architect. The pricing shifts from access to outcomes. The power shifts from tool vendors to whoever controls the data and the protocols.
The direction is clear. The timeline is slower than the hype suggests and faster than incumbents hope.
Build on standards. Own the data. Think in systems. The rest follows.
If you want more like this, see the Building category.
