Future Work Mythology: The Complete Essay

Everyone said AI would come for coders first.

I’ve never coded more hours a day in my entire life than right now.

This isn’t a humblebrag or a paradox. It’s evidence of something the predictions got fundamentally wrong. And understanding why they got it wrong reveals a different future than the one we’ve been sold.

There’s a threshold where AI becomes powerful enough that it’s cheaper to let it work and fix its mistakes afterward than to impose human oversight pipelines up front. What follows describes what happens when that threshold is crossed—what structures collapse, what becomes possible, what changes forever.

For those paying attention, this threshold has mostly already been crossed. This isn’t a prediction of a future. It’s a description of a present that most people haven’t noticed yet.

Part I: The Inversion

The Prediction

The story went like this: AI learns to code. AI codes faster than humans. AI codes cheaper than humans. Companies replace human coders with AI. Coders lose jobs. Other knowledge workers follow. Mass unemployment. Universal basic income debates. Dystopia or utopia, depending on who’s telling the story.

It was clean. It was logical. It was wrong.

Not wrong about AI learning to code—it did. Not wrong about AI coding faster—it does. Not wrong about AI being cheaper—it is. Wrong about what happens next.

What Actually Happened

I sit down to build something. Before AI, the process looked like this:

  1. Conceive the idea
  2. Break it into components
  3. Research how to build each component
  4. Write boilerplate code
  5. Hit a syntax error, debug it
  6. Look up API documentation
  7. Write more code
  8. Hit another error, debug it
  9. Realize the architecture is wrong
  10. Refactor
  11. Repeat steps 4-10 approximately forever

The actual creative work—conceiving the idea, making architectural decisions, solving novel problems—occupied maybe 20% of my time. The other 80% was friction. Necessary friction, but friction nonetheless.

Now the process looks like this:

  1. Conceive the idea
  2. Build it

I’m exaggerating, but not by much. The friction collapsed. The boilerplate writes itself. The syntax errors get caught before I make them. The API documentation is summarized on demand. The architecture discussions happen in real-time with a collaborator who has read every programming book ever written.

What remains is the creative work. The part that was always the point. And because the friction is gone, I do more of it. Much more.

The prediction assumed AI would replace the 20%—the creative work, the hard part, the human part. Instead, AI replaced the 80%—the friction, the tedium, the parts that were never the point.

The Bottleneck Shift

Here’s the key insight: the bottleneck shifted.

Before AI, the bottleneck was implementation. “Can I build this?” was the question. Many ideas died because the answer was no, or because the answer was “yes, but it would take six months,” which is the same as no for most purposes.

Now the bottleneck is imagination. “What should I build?” is the question. The answer to “can I build this?” is almost always yes. The constraint isn’t capability anymore. It’s vision.

This is why I code more hours than ever. Not because I’m inefficient, but because I’m efficient. Every hour produces results. The feedback loop tightened. Ideas that would have taken months now take days. So I try more ideas. I build more things. I spend more hours in the state of flow that used to be rare and is now abundant.

AI didn’t reduce the amount of work. It reduced the friction-to-output ratio so dramatically that the same effort produces far more. And when effort produces more, you want to apply more effort.

Part II: The CO2 Problem

The Protected Class

Here’s where the standard narrative breaks down completely.

If AI replaces productive work, the first casualties should be the most productive workers. The coders, the writers, the designers, the analysts—anyone who produces measurable output. The robots are coming for the makers.

But walk through any large enterprise and you’ll find a different picture. You’ll find people whose jobs are difficult to describe. Not because the jobs are complex, but because describing them requires euphemism.

They “facilitate cross-functional alignment.” They “drive stakeholder engagement.” They “ensure strategic coherence.” They “manage up and across.” They schedule meetings about meetings. They create slide decks summarizing other people’s work. They forward emails with “adding thoughts” or “looping in” or “flagging for visibility.”

These people are not worried about AI. And they shouldn’t be.

Because you cannot automate zero.

The Measurement Problem

How do you measure the productivity of someone whose job is coordination? You can’t measure their output because their output is, in a literal sense, nothing. They produce no artifact. They write no code. They design no products. They make no sales. They serve no customers.

Their value proposition is their presence. They exist in the organizational chart. They attend the meetings. They are cc’d on the emails. They have the relationships. They know who to talk to. They smooth things over. They manage perceptions.

I’m not saying these activities have no value. In a world where humans are slow and coordination is expensive, someone needs to coordinate. But here’s the thing: that value is entirely derivative. It exists only because the underlying productive work requires coordination. Remove the need for coordination, and the coordinator’s job disappears.

AI doesn’t replace the coordinator by doing their job. AI replaces the coordinator by eliminating the conditions that made their job necessary.

The Real Threat

This is the part the enterprise hasn’t figured out yet.

When one person with AI can do what used to require a team, the team becomes unnecessary. When the team becomes unnecessary, so does everyone whose job was managing the team, coordinating the team, aligning the team with other teams, reporting on the team’s status, or facilitating the team’s retrospectives.

The threat to middle management isn’t that AI will do their job. The threat is that AI will make their job structurally unnecessary by empowering the people who actually produce things to produce them without requiring management.

A coder with AI doesn’t need:

  • A project manager to break down tasks (AI does this in conversation)
  • A technical writer to document the code (documentation drifts; AI can answer questions on demand)
  • A QA engineer to test edge cases (AI catches more than humans)
  • A scrum master to run standups (there’s no one to stand up with)
  • A manager to report status (status is visible in the work itself)

The coder didn’t replace these people. The coder made these people unnecessary by becoming capable of the entire production pipeline alone.

Part III: The Twelve Deaths

What follows is not a prediction but an observation. These things are already dying. The question isn’t whether, but how fast.

1. Death of Teams

The team was an answer to a problem: humans working alone couldn’t produce complex things fast enough. Division of labor. Specialization. Coordination. The team made possible what individuals could not.

But the team was never free. Coordination has costs:

  • Communication overhead (pairwise communication paths grow quadratically with headcount: n people have n(n-1)/2 of them; see the sketch after this list)
  • Alignment meetings (ensuring everyone understands the goal)
  • Integration work (combining individual outputs into coherent wholes)
  • Conflict resolution (people disagree)
  • Context switching (waiting for others, handing off work)
  • Diffusion of responsibility (no one owns the outcome)
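
That path arithmetic is worth making concrete. A few lines of Python, assuming nothing beyond the formula itself, show how fast the coordination surface grows as headcount rises:

    def comm_paths(n: int) -> int:
        # Number of distinct pairwise communication paths among n people.
        return n * (n - 1) // 2

    for n in (2, 5, 10, 50):
        print(f"{n} people -> {comm_paths(n)} paths")
    # 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225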

For complex projects, these costs were worth paying because the alternative—not building the thing—was worse.

Here’s what’s easy to miss: this friction exists even at the smallest scale. A team of two has friction. One person needs to use the bathroom—work stops. One person needs coffee—work stops. One person’s brain is tired while the other is in flow—work stops. One person needs to sleep while the other has momentum—work stops.

The friction floor of human collaboration is higher than zero. It can never be zero. Humans have bodies. Bodies have needs. Needs interrupt work.

AI doesn’t use the bathroom. AI doesn’t need coffee. AI doesn’t get tired. AI doesn’t sleep. When I work with AI, I can produce meaningful work in literal five-minute windows. The gap between having an idea and testing it collapses to nothing. The micro-interruptions that define human collaboration—the bathroom breaks, the “let me get up to speed,” the “wait, what did you mean by that?”—evaporate.

But there’s a deeper problem than micro-friction: you cannot parallelize AI-assisted work across humans.

Even with just one human and one AI, the human struggles to keep up. The velocity is so extreme that maintaining a mental model of what’s being built becomes the bottleneck—not the building itself. You find yourself asking the AI to slow down, to go steady, just so you can comprehend what’s happening. You build tools to visualize progress because your brain can’t track it otherwise.

Now imagine two humans trying to work in parallel, each with their own AI. Both are already maxed out trying to comprehend their own AI’s output. Neither has cognitive bandwidth to understand what the other is doing. But coordination requires comprehension. You can’t sync what you don’t understand. The sync meeting would take longer than just doing the work solo.

At human pace, parallelization was hard but manageable. At AI pace, it’s impossible. The coordination overhead doesn’t shrink with production time—it explodes. There’s more to coordinate per hour, not less.

And because velocity is so high, the goal itself needs to shift constantly. You’re learning so fast that yesterday’s direction is already obsolete. The pivot isn’t an occasional strategic move—it’s the core of the process. Try coordinating a pivot across a team when pivots happen daily. Hourly. The team is still aligning on version one while the solo operator has already shipped version three and learned it was wrong and is halfway through version four.

Now the math has changed. One person with AI can produce complex things. The coordination costs disappear. The thing gets built not only faster but better, because it reflects a unified vision rather than a negotiated compromise.

This doesn’t mean collaboration ends. It means the default unit of production shifts from team to individual. Teams become optional—chosen for specific benefits like diverse perspectives—rather than mandatory for capability reasons.

2. Death of Documentation

Documentation exists because knowledge needs to be transferred from one human brain to another. The person who built the system won’t be there forever. New team members need onboarding. The code’s intent isn’t obvious from the code itself.

These were real problems. Documentation was a real solution.

But documentation has always been a compromise. It’s a static snapshot of a dynamic system. It goes stale the moment it’s written. Maintaining it is expensive and boring, so it doesn’t get maintained. Everyone knows the docs are wrong but consults them anyway because wrong docs are better than no docs.

AI changes this equation. When the system can explain itself—when you can ask “why does this work this way?” and get an accurate answer—the static document loses its purpose. Documentation becomes generated on demand rather than maintained in advance. It’s always current because it’s generated from the system as it stands right now.

More fundamentally: if one person built the whole system, and that person works with AI that has the entire context, the knowledge transfer problem shrinks dramatically. The AI is the documentation, and unlike written docs, it can answer follow-up questions.
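
To make generated-on-demand documentation concrete, here is a minimal sketch. Both ask_model() and explain() are hypothetical names invented for illustration; ask_model() stands in for whichever model client you actually use, and the example question is made up too:

    from pathlib import Path

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in: route the prompt to whatever model client you use.
        raise NotImplementedError("wire this up to your own model")

    def explain(question: str, src_dir: str = "src") -> str:
        # Answer from the source as it exists right now, not from a stale document.
        sources = "\n\n".join(
            f"# {path}\n{path.read_text()}" for path in sorted(Path(src_dir).rglob("*.py"))
        )
        return ask_model(f"Here is the codebase:\n{sources}\n\nQuestion: {question}")

    # Usage: print(explain("Why does the importer retry failed batches twice?"))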

3. Death of To-Dos and Backlogs

The backlog was a queue. Work exceeded capacity, so work had to wait. The backlog tracked what was waiting. The backlog grooming session prioritized the queue. The sprint planning session decided what to pull from the queue.

This made sense when work arrived faster than it could be completed. The backlog was a buffer between demand and capacity.

But when capacity expands dramatically, the buffer drains. Why maintain a queue when work can flow straight through? Why track tasks when tasks can be completed as they’re conceived?

The backlog doesn’t disappear because task management is automated. It disappears because the gap between identifying work and completing work shrinks to nearly zero. You don’t need a waiting room when there’s no wait.

4. Death of Estimation

“How long will this take?” might be the most asked and worst answered question in software development.

We tried everything. Story points. T-shirt sizes. Planning poker. Monte Carlo simulations. Nothing worked, not really, because estimation is fundamentally hard. Complex systems are unpredictable. Unknown unknowns dominate. The cone of uncertainty laughs at your two-week sprint.

Estimation existed to allocate scarce resources. If the team can only do X points of work per sprint, we need to estimate which work to pull. If the project has a deadline, we need to estimate whether we’ll make it. If the budget is fixed, we need to estimate whether it’s enough.

Here’s the comedy: AI is terrible at estimating AI-assisted work. Ask an AI how long a feature will take, and it’ll say “approximately 2-3 weeks.” Seven minutes later, you’ve shipped it.

Why? Because the AI’s sense of timelines was trained on human output. It learned from decades of JIRA tickets and project retrospectives and developer surveys—all reflecting how long things took humans to build. The model confidently predicts a world that no longer exists. The moment AI entered the production loop, all of that historical estimation data became obsolete.

When production becomes abundant, these constraints relax. Why estimate when you can just build it and see? Why plan the sprint when work flows continuously? Why project the deadline when delivery is days away rather than months?

Estimation doesn’t disappear because AI estimates better. It disappears because the conditions that required estimation—scarcity and uncertainty—diminish. And in the meantime, we get to laugh at AI confidently predicting three-week timelines for seven-minute tasks.

5. Death of Meetings

Meetings served a purpose: synchronizing humans. You can’t have a conversation with five people over email. Real-time discussion enables rapid iteration, immediate feedback, and social bonding.

But meetings were expensive. Five people in an hour-long meeting is five person-hours consumed. By one widely cited estimate, executives spend nearly 23 hours per week in meetings. That’s more than half their work time spent not working.

Meetings proliferated because coordination required synchronization, and synchronization required meetings. More people meant more coordination, which meant more meetings. The meeting load scaled faster than the headcount.

When the headcount drops to one, the meetings disappear. There’s no one to meet with. The individual with AI doesn’t need to synchronize with anyone. The “meeting” happens continuously in the conversation with the AI, but it’s not a meeting because no one else’s time is consumed.

Asynchronous collaboration persists—people sharing work, reviewing outputs, providing feedback. But the soul-crushing recurring status meeting dies because there’s no status to report to anyone who doesn’t already know it.

6. Death of Deadlines

The deadline was a commitment device. Left unconstrained, projects drift toward heat death. Parkinson’s Law: work expands to fill the time available. The deadline fights entropy by creating urgency.

But deadlines had costs. They induced stress. They encouraged corner-cutting. They punished accuracy in estimation—miss the deadline and you’re blamed, even if the deadline was arbitrary. They created incentives to pad estimates, which created longer timelines, which reduced the total amount that could be attempted.

When production is fast, deadlines matter less. If building a feature takes two days instead of two months, the deadline isn’t a meaningful constraint. You just build it. The urgency is intrinsic—you’re curious whether it works—rather than imposed by a date on a calendar.

Commitments persist. People still need to know roughly when to expect things. But the high-stakes, anxiety-inducing deadline that dominated enterprise software development loses its teeth when delivery is measured in days rather than quarters.

7. Death of Specialization

The specialist emerged because depth required a full human. To be truly good at database optimization, you spent years learning database optimization. You couldn’t also be truly good at frontend design because there wasn’t time. The brain is finite. Expertise is expensive to acquire.

This created the cross-functional team: a collection of specialists who together covered the necessary bases. The frontend specialist, the backend specialist, the database specialist, the DevOps specialist, the security specialist. Each person contributing their depth, the team achieving breadth through combination.

AI provides depth on demand. I don’t need to be a database expert because I can access database expertise through AI when I need it. I don’t need to be a security expert because security expertise is available on tap. The barriers between specialties dissolve when switching costs approach zero.

The generalist with AI beats the specialist without AI. Breadth plus on-demand depth trumps depth alone. The specialist’s moat—years of accumulated expertise—drains when that expertise can be rented by the hour.

Specialization doesn’t disappear entirely. Some domains are too deep, too tacit, too novel for AI to fully cover. But the default shifts from specialist teams to generalist individuals who specialize temporarily as needed.

8. Death of Version Control (As We Know It)

Version control solved a real problem: tracking changes over time so you could understand what happened and revert if necessary. Multiple people changing the same codebase needed coordination. Branches, merges, conflicts, pull requests—an elaborate choreography to prevent chaos.

When one person is making all the changes, the choreography simplifies. There’s no one to conflict with. Branches are less necessary when you’re not coordinating parallel work streams.

More radically: when generating code becomes cheaper than maintaining code, the value of version history diminishes. Why trace the evolution of a component when you can regenerate it from the specification? Why maintain a history of decisions when the AI can re-derive them from first principles?

And here’s the mindset shift: failing forward.

Version control’s core promise was safety through reversion. Something breaks? Revert. Bad decision? Roll back. The history existed so you could retreat to known-good states.

But reverting is going backward. And going backward has costs that aren’t obvious:

  • You lose the context of why you moved forward in the first place
  • You lose the learning embedded in the “failed” attempt
  • You reset to a state that predates your current understanding
  • You’ll probably make the same decisions again, because the reasons you made them haven’t changed

Failing forward means: something breaks, you fix it and keep going. You don’t retreat. The failure taught you something. Reverting erases the lesson.

When fixing is fast—when AI can diagnose and repair in minutes—reversion becomes the slower path. Why roll back to yesterday’s code and lose today’s learning when you can just fix the problem and retain everything?

This is the most speculative of these deaths, and the one I hold most loosely. Version control has deep value even for individuals—understanding why past decisions were made, exploring alternatives. But the instinct to revert, to retreat to safety, loses its grip when forward is faster than backward.

9. Death of Career Ladders

The career ladder measured accumulation. Junior, mid, senior, staff, principal. Each rung represented years of accumulated experience. The ladder provided incentives—work hard, learn more, climb higher, earn more.

But the ladder assumed experience was scarce. It took years to become senior because it took years to learn the necessary skills. The senior engineer was valuable because they had done things the junior hadn’t. The ladder tracked a real accumulation of human capital.

When capabilities can be rented rather than owned, accumulation matters less. The junior engineer with AI can access senior-level capabilities instantly. The years of experience that once justified the title become less differentiating when AI can provide equivalent guidance on demand.

This doesn’t mean all experience becomes worthless. Judgment, taste, domain knowledge, intuition—these human qualities remain valuable and are not easily automated. But the specific technical skills that defined the rungs of the ladder become less scarce, and therefore less valuable as differentiators.

The ladder doesn’t disappear. But its rungs need redefinition. Climbing will mean something different when the thing being climbed has changed shape.

10. Death of Job Descriptions

The job description boxed a human into a role. “You are a frontend engineer.” Here’s what frontend engineers do. Here’s what they don’t do. Stay in your lane. Hand off to the appropriate specialist when you leave your domain.

This made sense when humans were expensive to retrain. If you hired someone to do frontend, you wanted them doing frontend, not wandering into backend work they’d do poorly. Specialization created efficiency. Boundaries prevented chaos.

When AI enables individuals to work across traditional boundaries, the boundaries lose their justification. The person who can do frontend, backend, database, and DevOps—with AI assistance in each—doesn’t fit a traditional job description. They fit a hundred job descriptions, or none.

The job description doesn’t disappear, but it transforms. Instead of describing tasks, it describes outcomes. Instead of specifying skills, it specifies problems to be solved. “We need someone who can build X” rather than “we need a Y engineer.”

The human becomes defined by what they produce rather than what they’re called.

11. Death of Human-Centric Architecture

This one will hurt. It’s about the code itself.

For decades, we’ve developed “best practices” for writing software. Object-oriented programming. Clean code. SOLID principles. Dependency injection. Hexagonal architecture. Design patterns. All of it aimed at one goal: making code understandable to the humans who would read, maintain, and modify it.

Why do we use OO? To create mental models humans can reason about. Why do we follow clean code? So the next human can read it. Why dependency injection? So humans can test and understand components in isolation. Why hexagonal architecture? So humans can trace the flow of data through bounded contexts. Why design patterns? So humans can recognize familiar structures instead of parsing novel ones. Why small functions? Because humans can only hold so much in working memory. Why abstraction layers? To hide complexity that would overwhelm human comprehension.

Every “best practice” in software engineering is a human-centric adaptation. They don’t make code better in any absolute sense. They make code more suitable for human brains.

AI doesn’t have human limitations.

AI doesn’t need small functions—it can parse 10,000 lines as easily as 10. AI doesn’t need abstraction layers—it can hold the entire system in context. AI doesn’t need design patterns—it doesn’t rely on recognition to understand structure. AI doesn’t need clean code—it reads machine-generated code as easily as hand-crafted prose.

In fact, human-centric patterns often make things harder for AI:

Deep abstraction chains are harder to trace. When a function calls a function that calls a function through three layers of indirection, AI has to follow the same path humans do—but AI was never struggling with the complexity the abstraction was hiding.

Abstractions leak. Always have. The abstraction promises to hide complexity, but edge cases bleed through. Now you have both the complexity and the abstraction to deal with.

Abstractions aren’t free. Every layer of indirection has a cost. Virtual dispatch, interface lookups, dependency resolution—all runtime overhead that exists purely to serve human comprehension.

The ceremony adds tokens without adding value. Boilerplate, factories, adapters, interfaces—all the scaffolding of “clean” architecture consumes context and compute without contributing to correctness or performance.

Here’s the uncomfortable conclusion: the “best practices” of the last thirty years were not universal truths. They were adaptations to human cognitive limitations. When the human is removed from the comprehension loop, the adaptations become dead weight.

What does AI-native code look like? We don’t fully know yet—this is an unproven design space. But early signals suggest AI prefers compact, consistent, flatter systems. Probably something we’d call “ugly” by current standards. Longer functions. Fewer abstractions. Direct implementations instead of indirection. Optimized for execution, not reading. Optimized for correctness, not comprehension.
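
To make the contrast tangible, here is a purely illustrative Python sketch. Both versions compute the same invoice total; the names, the flat 20% tax rate, and the framing of the second version as the flatter form are invented for illustration, not evidence of what AI-native code will actually look like:

    from abc import ABC, abstractmethod

    # Human-centric style: an interface plus injection, so a reader can reason about
    # one small piece at a time and swap implementations in tests.
    class TaxPolicy(ABC):
        @abstractmethod
        def rate(self, region: str) -> float: ...

    class FlatTaxPolicy(TaxPolicy):
        def rate(self, region: str) -> float:
            return 0.20

    class InvoiceCalculator:
        def __init__(self, policy: TaxPolicy):
            self._policy = policy

        def total(self, amount: float, region: str) -> float:
            return amount * (1 + self._policy.rate(region))

    # Flatter, direct style: one function, nothing to inject, mock, or trace.
    def invoice_total(amount: float, region: str) -> float:
        return amount * 1.20  # 20% tax inlined

    # Same behavior either way; the difference is who the structure is for.
    assert InvoiceCalculator(FlatTaxPolicy()).total(100.0, "EU") == invoice_total(100.0, "EU")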

The point isn’t that architecture disappears—it’s that its purpose changes. Architecture will still exist for scaling, distribution, hot-swapping, isolation. But architecture for human comprehension? That’s the part that becomes vestigial.

Coders will resist this. We’ve spent careers internalizing these patterns. They feel like craftsmanship. Suggesting they’re vestigial feels like an attack on our identity.

But the patterns were always means, not ends. The end was working software. The patterns were how humans got there. If AI can get there without the patterns—faster, with fewer bugs, with better performance—then the patterns were never the point.

The code of the future might be written by AI, maintained by AI, understood by AI. Humans describe intent. AI handles implementation. And the implementation doesn’t need to be human-readable any more than assembly needs to be human-readable. We have a layer that handles that now.

12. Death of the Enterprise

And so we arrive at the twelfth death. Not a component of the enterprise, but the enterprise itself.

The enterprise was an answer to a problem: humans couldn’t produce complex things alone. One person couldn’t build a car, a skyscraper, a software platform. You needed thousands of people, coordinated across functions, aligned toward common goals. The enterprise was the machine that made this coordination possible.

Every structure we’ve discussed—teams, documentation, meetings, estimation, planning, specialization, hierarchy—existed to serve this coordination. They were the organs of the enterprise body. The cost of these organs was worth paying because the alternative was not building the thing at all.

But AI changes the equation. When one person can produce what thousands produced, the coordination becomes unnecessary. When the coordination becomes unnecessary, so does the enterprise.

This isn’t disruption in the usual sense. Disruption is when a new company outcompetes an old one. This is something more fundamental. It’s the dissolution of the organizational form itself.

The enterprise cannot adapt because the adaptations required would mean ceasing to be an enterprise. You cannot tell shareholders that the company’s strategy is to become one person. You cannot lay off the coordination layer while retaining the production layer, because in most enterprises, the coordination layer is the company. The producers were always contractors, outsourcers, gig workers. The enterprise was the coordination.

Internal politics will slow this further. Enterprises don’t keep meetings, documentation, and process layers because they’re good—they keep them because they protect managerial positions, budgets, political power, job security. No one inside the enterprise has an incentive to dismantle the scaffolding. But internal politics don’t matter when external forces shift the economics. Enterprises won’t switch because they want to—they’ll switch because someone outside the org makes the old cost structure untenable.

When coordination costs more than it’s worth, the enterprise doesn’t transform. It evaporates.

What replaces it? That question hangs open. Networks of individuals, perhaps. Project-based coalitions that form and dissolve. Platforms that enable coordination without hierarchy. Something we haven’t imagined yet.

But the enterprise as we know it—the legal fiction that employs thousands, coordinates their labor, and captures the surplus—faces a future where its core value proposition no longer holds. If individuals can do what enterprises did, why would anyone choose the enterprise?

Pure economics. The enterprise doesn’t die from scandal or disruption or mismanagement. It dies because the math no longer works.

Regulated industries—banks, healthcare, aviation, defense—will be the last to change. Not because the model fails there, but because legislation, compliance frameworks, and risk models lag technology by decades. These domains require documentation, audits, traceability, determinism. They cannot run non-deterministic AI-written systems or collapse oversight structures. But “last to change” isn’t “never changes.” When AI reliably outperforms human-governed processes even in safety-critical domains, the regulations will eventually follow. They always do.

Part IV: The Economics of Failure

The argument I’ve made so far is qualitative. But there’s a quantitative story too, and it might be the most important one. It’s not about production speed. It’s about the cost of being wrong.

The Enterprise Trap

Here’s the dirty secret of enterprise software development: most of the process exists to prevent failure.

Why do we estimate? To avoid committing to something we can’t deliver. Why do we plan sprints? To avoid starting work we can’t finish. Why do we write requirements? To avoid building the wrong thing. Why do we review designs? To avoid architectural mistakes. Why do we have QA? To avoid shipping bugs. Why do we get sign-offs? To avoid blame when things go wrong.

Every layer of process is a hedge against failure. And hedging is expensive.

The enterprise pays this tax because the alternative is worse. A failed project doesn’t just waste the project’s budget—it wastes the salaries of everyone involved, damages reputations, derails careers, and sometimes tanks stock prices. When failure costs millions, you spend hundreds of thousands to prevent it.

This is rational. It’s also a trap.

The Avoidance Tax

Add up the cost of failure avoidance:

  • Planning meetings to prevent bad plans
  • Estimation sessions to prevent missed deadlines
  • Requirements documents to prevent misunderstandings
  • Design reviews to prevent architectural mistakes
  • Code reviews to prevent bugs
  • QA cycles to prevent shipping defects
  • Staging environments to prevent production incidents
  • Rollback procedures to prevent catastrophic failures
  • Post-mortems to prevent recurrence

Each of these is sensible in isolation. Together, they consume more time than the actual building. A feature that takes two days to code takes two months to plan, approve, review, test, stage, and deploy.

The enterprise isn’t slow because it’s stupid. The enterprise is slow because failure is expensive, and slowness is the price of caution.

The Learning Paradox

Here’s where it gets interesting.

The best way to learn is to fail. Try something, see it break, understand why, try again. The scientific method. Iteration. Evolution. Every successful system we know of learns through failure.

But the enterprise can’t afford to fail. So the enterprise can’t afford to learn. Not really. Not fast.

Oh, they try. “Fail fast” became a mantra. “Move fast and break things.” Innovation labs. Skunkworks. Hackathons. All attempts to create protected spaces where failure is permitted.

But these are exceptions that prove the rule. If failure were actually acceptable, you wouldn’t need special zones where it’s allowed. The existence of “innovation labs” tells you that innovation—which requires failure—isn’t welcome in the regular org.

The enterprise is trapped in a local maximum. They can’t explore because exploration requires failure. They can’t fail because failure is expensive. So they optimize what they have rather than discovering what they could have.

Fail Fast as Strategy

Now consider the individual with AI.

When production costs collapse, so does the cost of failure. A feature that used to take two months now takes two days. If it’s wrong, you’ve lost two days, not two months. You can afford to be wrong.

More than that: you can aim to be wrong.

This is the crucial insight. It’s not just that failure is cheaper. It’s that cheap failure enables a fundamentally different strategy.

The old model: Hypothesis → Plan → Plan more → Review the plan → Revise the plan → Approve the plan → Build → Test → Fix → Test → Ship → Hope

The new model: Hypothesis → Build → Test → Learn → Repeat

The old model tries to be right the first time because iteration is expensive. The new model assumes you’ll be wrong and optimizes for learning speed.

These aren’t different points on the same spectrum. They’re different games entirely. The enterprise is playing chess—deliberate, strategic, minimizing mistakes. The individual with AI is playing a slot machine that costs a nickel and occasionally pays out a million dollars. Pull the lever. Pull it again. Keep pulling until you win.

The Competitive Moat

Here’s why this matters strategically: the enterprise cannot adopt fail-fast.

It’s not a choice. It’s a structural impossibility.

An enterprise cannot tell shareholders “our strategy is to fail frequently.” They cannot tell employees “your project will probably be thrown away, but that’s fine.” They cannot tell customers “we’re experimenting with your data.” The social and legal structures of the corporation prohibit genuine fail-fast operation.

They can create innovation labs. They can fund internal startups. They can acquire companies that move fast. But they cannot transform the core enterprise into a fail-fast organism. The antibodies are too strong.

This means fail-fast is a competitive moat for individuals with AI. The strategy space they can access is structurally inaccessible to large organizations.

The individual can:

  • Try ten approaches in parallel and see which works
  • Ship something half-baked and iterate based on feedback
  • Pivot completely when data suggests a different direction
  • Abandon a failing project without organizational trauma
  • Make decisions in minutes that would take committees months

The enterprise can do none of these things at the core-business level. Their decision-making is optimized for a world where failure is expensive. They cannot simply “decide” to be okay with failure. The entire structure would have to change, and structures don’t change that fast.

The Learning Advantage Compounds

This advantage compounds over time.

Each iteration teaches something. The individual with AI who tries ten things learns ten lessons. The enterprise that carefully plans one thing learns one lesson—if it’s lucky enough to be wrong in a visible way rather than succeeding by accident.

Over a year, the individual has run hundreds of experiments. The enterprise has run dozens. The individual has encountered and overcome more failure modes, discovered more edge cases, learned more about what works and what doesn’t.

Over five years, the gap is astronomical. The individual has explored regions of the solution space the enterprise doesn’t know exist. They’ve developed intuitions the enterprise can’t buy because the intuitions come from failures the enterprise was too afraid to have.

This is why “experience” is being redefined. It’s not about years anymore. It’s about iterations. Someone with one year of high-iteration experience may have more practical wisdom than someone with ten years of slow, careful, failure-avoidant experience.

Jevons Paradox (Revisited)

The classic economic framing still applies, but now we can see it more clearly.

In 1865, Jevons observed that more efficient steam engines led to more coal consumption, not less. Efficiency made coal-powered activities economically viable that weren’t before. Demand expanded faster than efficiency improved.

The same principle applies to software—but the mechanism is failure, not just production.

When failure is expensive, you only attempt sure things. When failure is cheap, you attempt long shots. The number of things worth attempting explodes.
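
To see why the set of things worth attempting explodes, here is a toy expected-value calculation. Every number in it is invented purely for illustration:

    # A long shot: 5% chance of a $100k payoff, attempted under two cost structures.
    payoff = 100_000
    p_success = 0.05

    old_cost = 40_000   # roughly two months of team effort
    new_cost = 1_000    # roughly two days of one person working with AI

    ev_old = p_success * payoff - old_cost   # -35,000: rational to never attempt it
    ev_new = p_success * payoff - new_cost   #  +4,000: rational to attempt many of these
    print(ev_old, ev_new)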

It’s not just that you can build more software. It’s that you can try more software. Most attempts fail. But some don’t. And the ones that don’t are discoveries that would never have been made under the old model because no one would have dared to try.

The Laffer Curve (Revisited)

Same insight, different lens.

Laffer’s point was that reducing friction on economic activity can increase total activity by more than the friction was costing. Lower the tax rate, and people work more. Sometimes enough more that total tax revenue increases.

Applied here: reduce the cost of failure, and people try more things. Sometimes enough more that total success increases despite (because of) more failures.

The enterprise sees failure as waste. The individual with AI sees failure as investment. Both are right within their own economic models. But only one model scales with AI.

Three Tailwinds

Coders who embrace AI and adopt fail-fast enjoy three tailwinds:

More learning per unit time. Each failure teaches. More iterations means more lessons. The person who tries ten things and fails at eight has learned more than the person who carefully tried one thing and succeeded.

Access to unexplored solution space. When you can afford to try unlikely things, you sometimes find unlikely successes. The enterprise, constrained to safe bets, never explores these regions.

Structural advantage over enterprises. The enterprise cannot adopt your strategy. You’re playing a game they cannot play. This is the rarest and most valuable kind of moat—not something you built, but something they structurally cannot cross.

These aren’t minor efficiency gains. They’re a different paradigm.

The Redistribution Problem

None of this means the transition will be painless. The people whose jobs depended on failure avoidance—the planners, estimators, coordinators, reviewers—find their roles diminished not because they failed but because the need to prevent failure has lessened.

And there’s a real question about speed of transition. If the change is gradual, people can adapt. If it’s sudden, there’s dislocation. The economics might work out at the macro level while individuals suffer at the micro level.

I’m not arguing for complacency. I’m arguing against doomerism. The difference matters. Complacency says “nothing will change.” Doomerism says “everything will collapse.” The reality is more nuanced: things will change, some painfully, but the direction is growth rather than contraction.

The future favors those who can fail fast. Not because failure is good, but because learning is good, and learning requires failure, and AI finally made failure cheap enough to be strategic.

Part V: The Liberation

I’ve made an argument about economics and organizational structure. But there’s a human story too, and it might matter more.

What We Lost

Somewhere along the way, making things became secondary to coordinating the making of things.

The original appeal of creative work—writing, coding, designing, building—was the direct connection between effort and outcome. You imagined something. You made it real. The thing you made existed because of you.

Then the thing got too big for one person. You needed a team. The team needed coordination. The coordination needed management. The management needed reporting. The reporting needed meetings. The meetings needed agendas. The agendas needed preparation.

Suddenly you were spending more time talking about the work than doing the work. The direct connection between effort and outcome was mediated by layers of organizational machinery. The joy of making got buried under the burden of coordinating.

What AI Returns

AI doesn’t just make production faster. It returns autonomy.

When one person can build what used to require a team, that person doesn’t need to negotiate, align, compromise, or wait. They can just build. The vision in their head can become reality without being filtered through a committee.

This is creative liberation. Not freedom from work—I work more hours than ever—but freedom to do work that matters. Freedom from coordination overhead. Freedom from status meetings. Freedom from the organizational theater that exists because humans alone were slow.

Most people won’t want this. Full autonomy means full responsibility. Being the lone vision-holder is exhausting. Many prefer delegation, collaboration, shared ownership. They’d rather be part of something than be the whole thing. That’s fine. Markets don’t require universal adoption. They require a small number of outliers to destabilize cost structures and force the ecosystem to adapt. The question isn’t whether everyone becomes a solo operator—it’s whether enough people do to change the economics for everyone else.

The Uncompromised Vision

Here’s what changes when one person can build complex things:

No design by committee. The product reflects a single vision, not a negotiated compromise between competing stakeholders. For better or worse, it’s coherent.

No stakeholder management. You don’t need to sell the idea to people who can block it. You can just try it and see if it works.

No documentation debt. The context lives in your head and in the AI’s context window. No one else needs to be brought up to speed.

No dependency waiting. You don’t block on other teams or other people. If something needs to be done, you do it.

No permission seeking. The only approval you need is your own.

This isn’t necessarily better in all cases. Sometimes committees catch errors. Sometimes stakeholders have valuable perspectives. Sometimes documentation helps future maintainers.

But for creative work—for building new things, for expressing a vision, for making art—the uncompromised path often produces better results. The best games, the best software, the best products often came from small teams or individuals with strong visions, not from large committees seeking consensus.

AI makes the small team even smaller. It makes the individual viable at scales that previously required organizations.

The Craft Remains

Here’s what AI didn’t take: the soul of the work.

The decisions about what to build. The taste that shapes the output. The judgment about what matters. The vision that gives the work coherence. The stubborn commitment to quality that refuses to ship garbage.

These remain human. They might always remain human. And they’re the parts that matter.

AI took the friction. It left the craft. It handled the tedium so we could focus on the creativity. It managed the mechanics so we could concentrate on the meaning.

This is not replacement. This is liberation.

Conclusion: Twelve Deaths and What Remains

The predictions were wrong. Not slightly wrong—inverted.

AI didn’t come for the coders. AI came for the reasons coders needed organizations around them.

We’ve traced twelve deaths. Eleven structures that made the enterprise possible—teams, documentation, backlogs, estimation, meetings, deadlines, specialization, version control, career ladders, job descriptions, human-centric architecture. And then the twelfth: the enterprise itself, the thing these structures served.

Each death follows the same logic. The structure existed because humans alone were slow, limited, error-prone. The structure compensated for human weakness. AI removes the weakness. The structure loses its purpose.

The maker didn’t become obsolete. The maker became powerful enough to need no one else.

The enterprise didn’t incorporate AI. AI dissolved the enterprise’s reason to exist.

The coordination, the management, the meetings, the documentation, the estimation, the planning—these were never the point. They were necessary evils, taxes we paid for the privilege of building things together. When “together” is no longer required, the tax disappears.

What comes next? I don’t know. No one does. The structures that organized human labor for centuries are dissolving, and what replaces them hasn’t emerged yet. Maybe it can’t be predicted—only discovered, through the same iterative failure that now defines how we build.

But this much is clear: the future is not humans replaced by machines. It’s humans liberated from the machinery that grew up around their limitations. The scaffolding falls away. What remains is the work itself. The making. The craft. The vision becoming reality.

I’ve never coded more hours in my life. And I’ve never enjoyed it more.

The future belongs to the makers who embrace the tools.


Co-authored by Human and Claude (Anthropic). This essay is itself an artifact of the process it describes—conceived, debated, and written in a single sitting through human-AI collaboration.