Let me begin with a small confession. This morning, before the tea had properly steeped and before my spectacles had fully accepted the shape of my face, I watched an AI cheerfully autocomplete a function I had half-typed. It did so with the confidence of a man who has never had to reboot a server at 3 a.m. while muttering apologies to an imaginary customer. And I thought to myself, not for the first time: well then… we’ve clearly crossed a line.
I still remember the sound.
That shrill, wounded-animal scream of a dial-up modem in the late ’90s, negotiating a fragile ceasefire with the internet. You didn’t log in back then. You arrived hesitantly, hoping the connection would hold long enough for Netscape to load and not collapse into a sulk.
Software lived close to consequence in those days. Memory was finite. CPUs were honest. Bugs were personal. When something broke, it was usually because I broke it. There was nobility in that intimacy.
I romanticise those years shamelessly. Deployments by FTP. Shell scripts held together with faith and bad punctuation. The quiet, terrifying magic of a Makefile. ipchains standing guard like Heimdall at the gates of a shared Cobalt RaQ server. I was “full stack” before it became a LinkedIn affectation. The romance was real. So was the fragility.
Those systems collapsed under their own success. Brittle monoliths. Architectures that demanded constant firefighting just to remain upright. Many of our so-called heroics were not virtues; they were symptoms of immature tooling and a complete absence of safety rails.
Zoom out far enough and a simple pattern emerges. Software engineering, stripped of fashion and jargon, has always been an attempt to tame three unruly beasts:
Data. Scale. Complexity.
In the beginning, we fought all three simultaneously and mostly lived to fight another day. Data lived wherever it could fit on RAID arrays chosen after long arguments about read/write ratios. Scale was a prayer. Complexity was managed through caffeine, and on especially bad nights, beer.
Then came Linux, followed by Apache, MySQL, and PHP - later canonised as LAMP. This was a turning point. Databases grew up. We learned that data is not just storage, it is gravity, liability, and memory. Flat files gave way to schemas. Indexes replaced incantations. Constraints replaced hope.
Much later came the cloud, NoSQL, message queues, and their many cousins. Almost overnight, scale was tamed. We stopped asking, “Will this survive traffic?” and started asking, “How much will this cost if it does?” Servers became cattle. Monoliths fractured into microservices. Capacity planning became a spreadsheet conversation. Scale shifted from existential dread to a billing line item.
This wasn’t merely convenience. It rewired how we imagined software. You could assume growth. Recklessness became, if not wise, at least affordable.
And now we arrive at the present: AI and the promise of taming complexity. Not computational complexity - that battle remains locked at the gates of mathematics - but human complexity. The exhausting translation of intent into implementation. “Tell me what you want, not how,” the machines whisper, confidently.
It is seductive. And it is treacherous.
By 2025, the evidence was already awkwardly clear. AI helped in some places - greenfield projects, well-bounded tasks, less-experienced developers, redesigned workflows. But in mature systems - those with history, scar tissue, and tribal knowledge - it often slowed experienced engineers down. Time leaked into prompting, reviewing, and correcting suggestions that looked right but weren’t quite. The machine sounded confident. The code demanded supervision. Velocity hid risk.
Security reports told a less flattering story. A disturbing share of AI-generated code shipped with vulnerabilities - not trivial syntax errors, but deeper architectural and logic flaws that evade casual review. Syntax improved. Judgment did not. Complexity hadn’t vanished, it had mutated, becoming probabilistic, quieter, and harder to reason about.
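To make that concrete, here is a hypothetical sketch in TypeScript - the paths and names are my invention, not drawn from any security report - of code that reads as defensive and fails anyway:

```typescript
import * as path from "path";

const BASE_DIR = "/var/app/uploads";

// Looks defensive: strips "../" before joining paths. But the global
// replace makes a single pass, so an input like "....//etc/passwd"
// collapses into "../etc/passwd" after sanitisation. The intent reads
// correctly; the logic does not.
export function resolveUpload(userPath: string): string {
  const cleaned = userPath.replace(/\.\.\//g, "");
  return path.join(BASE_DIR, cleaned);
}

// The dull, correct version: resolve first, then verify containment.
export function resolveUploadSafely(userPath: string): string {
  const resolved = path.resolve(BASE_DIR, userPath);
  if (!resolved.startsWith(BASE_DIR + path.sep)) {
    throw new Error("path escapes the upload directory");
  }
  return resolved;
}
```

Nothing about the first function looks careless. That is precisely the problem.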
And yet, here we are in 2026, where AI can stare into the abyss of your 2003-era PHP monolith, whisper a digital “bless your heart,” and in seconds transform spaghetti code into a modular, TypeScript-flavoured masterpiece. It will be clean. It will be DRY. And it will judge your nested if statements more harshly than a Michelin-starred chef judges a microwave burrito.
This is no longer just autocomplete with opinions.
The newer generation of models like Claude and its peers, paired with tools like Cursor, Devin, and agentic workflows, carry far larger context windows, stronger reasoning traces, and memory mediated through workspace state rather than a single clever answer. They don’t merely suggest code anymore, they observe systems, traverse repositories, run tests, file pull requests, and wait patiently for feedback.
These agents don’t stare blankly at legacy code. They walk it. They map dependencies, infer invariants, notice patterns that humans stopped seeing years ago out of sheer exhaustion. Given time and guardrails, they can potentially refactor entire subsystems not as isolated snippets, but as coordinated change.
This is real progress. And it matters.
But here’s the catch - understanding is not the same as responsibility.
The agent may grasp structure, but it does not own consequence. It can preserve intent disturbingly well, even when that intent is flawed. Business logic written under outdated assumptions is faithfully migrated, lovingly documented, and efficiently parallelised. The bugs survive not because the AI is careless, but because it is obedient.
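A hypothetical sketch, with invented names and numbers, of what faithful migration looks like:

```typescript
// Illustrative only: a 2003-era discount rule, migrated verbatim from
// legacy PHP. The original computed the discount on whole dollars;
// Math.floor reproduces that truncation exactly, because the agent's
// brief was to preserve behaviour, not to question it.
export function legacyDiscount(totalCents: number): number {
  const wholeDollars = Math.floor(totalCents / 100); // cents silently dropped
  // 10% off orders over $100 - the whole-dollar granularity is the bug:
  // a $100.99 order earns nothing, and every discount rounds against
  // the customer.
  return wholeDollars > 100 ? wholeDollars * 10 : 0; // discount in cents
}
```

The truncation is not a mistake the agent made. It is a mistake the agent preserved.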
This could result in a new class of problem - logic that is wrong at scale, cleanly typed, exhaustively tested, and deployed with confidence. AI no longer produces messy spaghetti. It produces beautifully plated technical debt - sushi-grade, provenance-tracked, and still capable of giving you food poisoning if the fish was off to begin with.
The danger is no longer bad code. It is convincing code.
Which brings us to the uncomfortable question: what does it mean to write code now? The modern programmer is no longer a typist of logic. Syntax is solved. Boilerplate is dead. The job has shifted upward. Today’s engineer is a translator of intent from ambiguity to constraint, from desire to system. Soon enough, programmers will decide what not to build. They will prune early. They will sense where complexity will metastasise and cut before it spreads.
And the mythical 10× programmer? They still exist, but not by default.
Once, 10× meant output. More code, faster - code that led and taught other programmers simply by being read. Today, 10× means leverage over data, scale, and complexity. It means using AI as a force multiplier without surrendering judgment. Delegating the boring. Distrusting the plausible. Obsessing over the edges where systems age, fail, or get attacked.
This is not automatic. It will need to be learned. It will emerge and persist where taste, discipline, and deliberate practice meet powerful tools. Without that fluency, teams often move faster and accumulate debt even faster.
The real work now is thinking in time.
How will this age? Who will maintain it? What breaks first, and how quietly?
These are not questions machines are particularly bothered by. Humans must remain responsible for them.
So no, coding is not dying. It is shedding skin - yet again.
We have moved from wrestling bits to managing probability, with intelligence as the raw material. The intoxication is real. The dangers are measurable. The progress is undeniable.
And for the first time, the most dangerous code in the system may be the code everyone agrees looks correct.

