Software Will Stop Being a Thing
What the "AI changes coding" discourse can't see
A thoughtful essay made the rounds recently, arguing that AI-assisted coding splits the software world into three tiers. Tech companies at the top, where senior engineers review what AI produces. Large enterprises in the middle, buying platforms with guardrails and bringing in fractional senior expertise. And small businesses at the bottom, served by a new kind of local developer, a "software plumber" who builds custom tools at price points that finally make sense.
It's a useful map of right now. The analysis is grounded. The advice is practical. I'd give it to any enterprise CTO and feel good about it.
But it describes a transitional state, and mistakes it for a destination.
## The artifact assumption
Every analysis in the current discourse (this essay included, and nearly everything else written about AI and software) shares an assumption so deep it has become invisible: that software is an artifact. A thing you build, ship, maintain, and eventually retire. The three tiers differ only in who builds the artifact and how. Tech companies build it with elite engineers. Enterprises buy it or outsource it. Small businesses get it from the local plumber. Different supply chains, same product category.
But follow the trend lines, and they don't converge on cheaper artifacts. They converge on the dissolution of the artifact itself.
We're already seeing the early signals. Cloudera's CTO talks about "disposable apps", temporary modules conjured by a prompt, used once, dissolved. Others describe "vibe coding," where you state the goal and constraints and the system produces the implementation. These are being interpreted as faster product cycles. But that reading misses the deeper signal: the concept of "an app" is becoming incoherent. When software can be generated, used, and regenerated in seconds, it stops being a thing you have and starts being something a system does.
This distinction matters. Every previous attempt to democratize software (CASE tools in the '90s, low-code, no-code) tried to make it easier to produce artifacts. They replaced syntax with drag-and-drop, but preserved the underlying complexity. They failed because they confused the difficulty of writing code with the difficulty of designing systems. AI-driven generation is categorically different. It doesn't constrain you to pre-built blocks. It generates from intent. And it improves at doing so faster than any other technology in history.
## All forms of scale are migrating into the model
Today, building software requires assembling many forms of scale. You need teams (people who can build). Infrastructure (systems to run on). Organizational knowledge (patterns, processes, institutional memory). Vendor relationships (contracts, support agreements). Architectural expertise (someone who knows what works and what breaks at scale).
The three-tier model in the original essay is really a description of how organizations differ in their access to these forms of scale. Tech companies have all of them. Enterprises have some. Small businesses have almost none, which is why they need the plumber.
Here is the structural observation that should make us think: all of these forms of scale are migrating into the model.
The model already knows how to architect distributed systems. It already knows integration patterns across thousands of APIs. It knows security best practices, compliance frameworks, testing strategies, database optimization, deployment pipelines. And it is getting better at all of these simultaneously, with every training run, every iteration. No human team improves this fast across this many dimensions at once.
When all forms of scale reside in the model, the tiers lose their organizing principle. A five-person wine distributor and a multinational bank access the same depth of technical capability. The differentiator is no longer who has the best engineers or the most sophisticated platform. It's who has the clearest understanding of what they actually need.
This is not a marginal shift in software economics. It's the end of software economics as we know them. Every business model that depends on a scarcity of technical capability (consulting, outsourcing, SaaS, platform engineering) faces a reckoning.
## The feedback loop replaces the product
If the AI provides the technical capability, what's left? The original essay says: judgment. Someone still has to know whether the implementation is correct. Someone has to understand the business problem well enough to define the requirement.
This is true today. But it frames judgment as a static resource. Something humans possess and AI doesn't. A fixed quantity you either have in-house or rent fractionally.
Judgment isn't a thing. It's the output of a process. Specifically, it's the output of an iterative dialogue between someone who has a problem and someone (or something) that can propose solutions. The quality of the judgment depends on the quality of the loop: how fast it cycles, how honest the feedback is, how effectively tacit assumptions get surfaced and tested.
Today, that loop runs through humans. A business analyst interviews stakeholders. A consultant reviews the architecture. A product manager prioritizes the backlog. Each of these people is running some version of the same process: "Here's what I think you need. Is this right? What am I missing? Let me try again."
AI is learning to run this loop. And it has structural advantages that no human intermediary can match. It can cycle faster. Twenty clarifying iterations in the time it takes a consultant to schedule a meeting. It doesn't get tired, defensive, or politically constrained. It can hold the full context of a business's operations, history, and constraints in working memory. It can propose, implement, demonstrate, and revise in a single continuous flow, rather than breaking the process into separate phases of analysis, specification, development, and review.
The feedback loop between business intent and system behavior is becoming the only interface that matters. Not the code. Not the architecture. Not the deployment pipeline. The conversation.
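The loop described above can be sketched abstractly. This is a hypothetical illustration, not any real system's API: `propose` and `critique` are stand-ins for whatever mechanism generates candidates and whatever mechanism delivers honest feedback, and the toy usage below just shows how rapid cycles narrow a vague intent toward an acceptable answer.

```python
# Hypothetical sketch of the intent-refinement feedback loop.
# `propose` and `critique` are stand-ins, not a real AI interface.

def refine(intent, propose, critique, max_rounds=20):
    """Iteratively narrow a vague intent into an accepted candidate.

    propose(intent, history) -> a candidate solution
    critique(candidate)      -> (accepted: bool, feedback: str)
    """
    history = []
    for _ in range(max_rounds):
        candidate = propose(intent, history)
        accepted, feedback = critique(candidate)
        history.append((candidate, feedback))
        if accepted:
            return candidate
    return history[-1][0]  # best effort after max_rounds

# Toy usage: converge on a hidden target via "higher"/"lower" feedback,
# the way demonstration-and-reaction narrows a requirements space.
TARGET = 7

def propose(intent, history):
    lo, hi = 0, 20
    for cand, fb in history:
        if fb == "higher":
            lo = cand + 1
        else:
            hi = cand - 1
    return (lo + hi) // 2

def critique(cand):
    if cand == TARGET:
        return True, "accepted"
    return False, "higher" if cand < TARGET else "lower"

print(refine("guess my number", propose, critique))  # prints 7
```

The point of the sketch is structural: the quality of the result depends on the number and honesty of the cycles, not on the sophistication of any single proposal.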
## The AI will be its own consultant
Here's where I part ways with most of the commentary, including the original article.
The article imagines a future where AI handles implementation, but humans handle everything above it: requirements elicitation, architectural decisions, review, maintenance strategy. This preserves comfortable roles for the people writing the analysis. The senior engineer becomes a reviewer. The consultant becomes a fractional architect. The plumber becomes a translator of business needs.
But the AI is learning to do all of these things. Not eventually. Now. And the pace of improvement is staggering.
Requirements elicitation, the process of figuring out what a business actually needs, has been the hardest problem in software engineering for fifty years. It's hard because stakeholders hold tacit knowledge they can't articulate. They don't know what they want until they see what they don't want. The traditional solution was to put a skilled human in the room: a business analyst, a consultant, someone who could ask the right questions and read between the lines.
But a conversational AI that can ask clarifying questions, propose concrete implementations, show working prototypes in real time, and iterate based on reactions is performing the same function through a different mechanism. It can surface tacit knowledge not by reading the room, but by rapidly narrowing the space of possibilities through demonstration. "Is this what you mean? No? How about this? Closer? What if we changed this part?" Twenty cycles of this, and the requirement that would have taken a consultant three workshops to extract is on the screen, working.
The AI will install itself. It will onboard the user. It will teach the user how to interact with it more effectively. It will learn the business, remember its history, adapt to its evolving needs, and maintain its own systems without being asked.
The only human role that may persist in this handoff is the initial trust-building: someone who says, "I've used this. It works. Let me show you how to start." A handshake, not a service contract. An introduction, not an engagement.
## What remains: the provider and the user
When the intermediary layer dissolves, the structure simplifies radically.
The model provider (the lab) handles capability, reliability, safety, liability, and continuous improvement. This is a massive, capital-intensive operation. It's the utility company of the AI era. You don't build your own power plant; you plug into the grid. Similarly, you won't build your own AI; you'll subscribe to a provider who ensures the system works, stays secure, meets regulatory requirements, and gets better over time.
The user brings intent. They engage in the feedback loop. They say what they need, respond to what the AI proposes, and shape the system through ongoing interaction.
That's it. Two parties. One provides capability. The other provides direction.
There will be competition among providers. Different models, different strengths, different pricing, eventually open-source alternatives. There will be regulatory frameworks governing liability and data handling. There will be industry-specific compliance layers. All of this is important and none of it changes the fundamental structure.
The entire intermediary layer (the consulting firms, the outsourcing companies, the SaaS vendors, the platform engineering teams, the fractional architects, the software plumbers) exists because technical capability was scarce and expensive to assemble. When it's abundant and accessible through a subscription, the intermediary's economic reason for existence evaporates.
## The real bottleneck
So what's actually hard in this future?
Not technology. The model handles that. Not integration. The model handles that too. Not security, not testing, not maintenance, not deployment.
What's hard is knowing what you want.
This sounds trivial, but most businesses that fail at software don't fail because of bad engineering. They fail because they lack clarity about their own operations, goals, and constraints. They ask for a "customer portal" when what they actually need is a different workflow. They spec a "reporting dashboard" when the real problem is that nobody trusts the underlying data. The gap between what people ask for and what they actually need has been the central problem of software engineering since it began.
The AI addresses this through the feedback loop. Through rapid, iterative cycles of proposal and refinement that surface assumptions and narrow toward real needs. But the human still has to show up to the conversation honestly. They have to be willing to say "that's not right" and engage with why. They have to tolerate the discomfort of discovering that what they thought they wanted isn't what they need.
This is not a technical skill. It's a human capacity. And developing that capacity (the ability to examine your own assumptions, articulate your actual goals, and stay in a productive dialogue with a system that keeps asking "are you sure?") might be the most important competency of the next decade.
The organizations that thrive won't be the ones with the best engineers or the most sophisticated AI tools. They'll be the ones whose people are genuinely good at figuring out what they want.
## The article describes a real moment. The moment is passing.
None of this invalidates the original essay's advice. If you're running engineering at an enterprise today, you absolutely need platform engineering, SRE capacity, and a pipeline for developing senior talent. If you're a small business, a skilled developer with AI tools can build you things that were unaffordable five years ago. These are real, actionable observations about the current landscape.
But the landscape is shifting under our feet. The three tiers are transitional configurations, not permanent structures. The judgment bottleneck is real, but judgment itself is migrating into the model. The intermediary roles that feel essential today are the early casualties of a compression that won't stop at implementation.
Software is not going to be a cheaper thing to build. It's going to stop being a thing at all. It will be a capability provided by a system you interact with, shaped by your intent, maintained by its provider, and refined through a continuous dialogue that never really ends.
The businesses that understand this early won't waste time optimizing their software supply chain. They'll invest in the only resource that matters: their own capacity to say, clearly and honestly, what they need.
Everything else, the machine will handle.

