Thoughts On: Software as Prosthesis, Revisited
2026-03-16
So, it turns out that the ongoing AI tech-staffing bloodbath is my fault. Over twenty years ago, I told "the suits" that they could save more money replacing IT folk than end-users. What I didn't predict was that it would result from an own-goal: Tech folk eagerly developing and embracing a technology that "promises" to make them obsolete!
But will it? Sure, LLMs and the state of the tech industry are all anyone can talk about at the moment, but the parallels between that old post and today suggest that the root cause is more fundamental than AI, enshittification or even runaway capitalism. No, this seemingly recent crisis has its origin in how early computer manufacturers and their customers framed the "problem of software" itself. This framing has dominated the industry since at least 1968's NATO Software Engineering Conference in Garmisch. And when they said "problem of software" they meant it literally. Consider the first line of the conference notes:
The present report is concerned with a problem crucial to the use of computers, viz. the so-called software, or programs, developed to control their action.
This report marks the rise of the Software Engineering (SE) paradigm1 and the infamous "manufacturing metaphor" that has plagued our industry ever since. Even just the highlights and preface sections2 paint a picture of an industry in crisis: its growth hampered by software "production" costs spiraling out of control, and an industry at a loss over how to address it. The industry's solution was to apply its hardware manufacturing thinking and processes to software development as well. Customers were sold, not on the benefits, but on the savings that adopting computing would bring, primarily in the form of reduced labor costs (what I called the "replacement" strategy in the earlier article). What seemingly went unexamined at the time was whether "software predictability" was best understood as a manufacturing problem in the first place!
Yet this manufacturing mindset, with its mythical up-front spec, has always been problematic: Almost immediately, practitioners questioned the merits of a priori specification (only to see our arguments subverted)3. We called out category errors of scale, and were largely ignored. We developed technical innovations to compensate, but the stakes (and difficulties) scaled faster than the solutions. We constructed grand modeling tools to help make sense of it all, but these proved too expensive and cumbersome to apply (let alone iterate). We even proposed a sea change toward simplicity, but since it never really challenged those basic premises of scale and variance, it too failed to rescue SE from its roots.
And so the industry just lurched along. Now, LLM tech promises to finally realize the ultimate SE fever dream: eliminating human variance from the manufacturing process entirely! And if you accept the SE premise, such a desire is sadly rational; IT staffing is now arguably the largest cost center for most businesses, and variance remains the biggest challenge. However, this "AI replacement" strategy is going to fail too. Sure, LLMs might someday be capable of accurately implementing a spec, but the fundamental "problem of software" was never the challenge of implementing one; it was our inability to create one in the first place. Successful computing applications must resolve demands in context, and keep doing so as both context and demands change. Thus, the only spec possible has always been the living software itself: a model of our bit of the world whose correctness we discover and rediscover by comparing4 what we have to an ever-changing context of what we need. Replacing the humans who understand and make these decisions will only make "the software problem" worse.
Fundamental problems like these demand that we question our fundamental assumptions. Granted, I wasn't thinking of AI (let alone LLMs) when I wrote that old post, but what I concluded then, and remain bullish on now, is the superiority of augmentation over replacement: the idea that skilled people, augmented by computing5 and working together in a creative context6, will outperform even much larger (and more expensive) development efforts where such folks are seen simply as too expensive per unit of resource. The looming (inevitable?) AI bubble collapse may not just be the next big SE failure; it may actually trigger a Kuhnian crisis for the paradigm itself. Imagine: applied computing freed from the Procrustean bed of manufacturing and the bad decisions it encourages. Maybe we'd even discover and share better explanations than greed, ignorance and a lack of professionalism for why SE thinking has failed us so thoroughly, for so long.
SE isn't going away any time soon, of course—its proponents will likely far outlive me—but that doesn't mean it's working, or that something better isn't desperately needed! Hell, in the short term, LLMs might even improve the efforts of those orgs most thoroughly committed to SE's manufacturing-driven push to "scale up production"7. But for those of us who have always felt this wasn't the best or only way, I have some good news: it's easier than ever for small groups of knowledgeable computing folks to out-compete these "scaled up" delivery factories... and even use the "tools of the oppressor" to do it. While the industry pumps up its next epic failure, we have an opportunity to rebuild community, rediscover creativity and simply out-compete this tired, broken way of computing.
LLMs will not deliver the big wins the industry is hoping for, but they do show tremendous promise for augmenting the capabilities of those with the skill and taste to do the actual work. For nearly sixty years, the SE paradigm has been at odds with the reality of the work it describes. Meanwhile, those of us who actually do the work have had to work around the bad decisions of those who desire our results yet don't understand what makes those results possible. It's long past time we stopped doing the same thing over and over and expecting different results.
1. Thomas Kuhn's term "paradigm" is often pretentiously misused in everyday discussion. Here, we're pretentiously using it correctly.
2. The whole paper is a fascinating snapshot of our industry in its infancy. Definitely worth a read, but for our purposes here, the point's been made in the first paragraph!
3. Yes, Royce's paper coining Waterfall was a cautionary tale, not an endorsement!
4. Comparing is important: our judgement, our discernment, our... taste is a function of our experiences, our intuitions and, to a (far) lesser degree, of what we know. LLMs have a lot of knowledge, but IME they have poor judgement and zero taste.
5. It's not simply augmentation OR replacement, but rather a question of which activities we want the computer to focus on: the creative problem solving, or the drudge work that all serious creative work requires. The distinction lies in the different value placed on context, taste, judgement and other factors that have little to do with computing (or language) per se in producing desired outcomes and impact.
6. "Creative context" includes pretty much every valuable human endeavor, tho that might be hard to see in today's hyper-specialized enterprise organization—this is SE writ large, the water we all swim in: the idea that specialization along the "create/use axis" is even possible, let alone ideal.
7. I mean, big IT has always had a reputation for mediocrity, and for trying to address its shortcomings by further scaling IT to meet the demands of the "solutions" it scaled in the first place. More recently, the proliferation of Silicon Valley-led unicorn hunts using headcount growth as a proxy for success (at least until IPO) has weaponized bloat itself. But replacing people who weren't contributing to quality or success with software that isn't contributing to quality or success isn't exactly the compelling argument its advocates believe it to be.