Predictions about AI (I’m really bad at predictions)

I’ve been tidying up around here lately, as one does every few years when one maintains a long-lived, little-trafficked blog mostly out of a guilty sense of obligation. (I already feel bad enough about breaking the “URLs should be permanent” rule, but the naming scheme I settled on years ago was too dumb to keep.) Looking back at some of my older tech posts, I’ve made some really bad predictions. I think the worst one was that JSON would never displace XML as the structured data format of choice. (“But it has no namespaces! And attributes and child nodes might collide!”)

So here I am to make some more predictions; we can check back in a few years to see how wrong I am.

AI is useful for production code

I’m starting with an easy one that’s already (relatively) uncontroversial. But six months ago I’d have made the opposite prediction here. So see, I’m already ahead of the game!

Early models produced garbage, to be sure; and even some current agentic models (GPT-4 and below, for example) tend to break more functionality than they introduce. But I’ve successfully used both Claude Sonnet and whatever model backs Rovo to:

  • perform a major refactor and redesign of a large, complex application feature, by basically prompting “Here’s the old version, here are the characteristics of the new version, good luck” and then iterating a bit on the UI. All this in a framework I’m basically a novice in (Angular).

  • search for and extract data from a scattered collection of human-structured (i.e. inconsistently formatted) Confluence documents.

  • analyze a very noisy date-and-value sequence — ok, yes, my scores in an idle game — by basically prompting it “Here’s some data, what can you tell me about it?” It was able to detect exponential growth, decided to calculate a moving average to smooth out the noise, made accurate predictions about future trends, and even correctly identified the inflection point at which the rate of growth changed (which turned out to be the date of an in-app purchase). All without hand-holding on my part; I don’t have the statistical knowledge to provide any even if I wanted to.
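For the curious, the kind of analysis the model ran on that last one is simple to sketch. This is my own illustrative reconstruction, not the model’s actual code; every name here is made up for the example.

```typescript
// Illustrative sketch: smooth a noisy value series with a trailing moving
// average, then look at successive ratios. Roughly exponential growth shows
// up as a near-constant ratio; a sustained jump in the ratio marks the kind
// of inflection point the model spotted (the in-app purchase date).

// Trailing moving average over a window of up to `n` points.
function movingAverage(values: number[], n: number): number[] {
  return values.map((_, i) => {
    const window = values.slice(Math.max(0, i - n + 1), i + 1);
    return window.reduce((a, b) => a + b, 0) / window.length;
  });
}

// Ratio of each smoothed value to the previous one; for clean exponential
// growth these are all (roughly) the same constant.
function growthRatios(smoothed: number[]): number[] {
  return smoothed.slice(1).map((v, i) => v / smoothed[i]);
}
```

For example, `growthRatios([1, 2, 4, 8])` comes back as `[2, 2, 2]`: steady doubling. A series whose ratios jump from ~2 to ~3 partway through has an inflection at the jump.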

I’m not going to guess at the degree to which what we’re calling AI actually lives up to that acronym. But in practice this is functionally well beyond “predict the next token.”

The job of “software developer” will soon more closely resemble that of a tech-oriented manager

Working with an AI agent feels a lot like working with a very eager junior developer. They make similar types of mistakes: the main complaint from skeptics these days is that they can produce unmaintainable spaghetti code if not closely supervised. Well, guess who else does that?

They work best when they have clear instructions and the full context. Just like a real developer. Constructing a good prompt feels an awful lot like writing a good JIRA ticket. (I wouldn’t be at all surprised if someone’s already working on building a direct JIRA-to-code pipeline, except that most JIRA tickets aren’t that well-written or clear.)

They have their own personalities. Claude is a flatterer, constantly congratulating you on how good your suggestions are. Gemini is chatty as hell (which they’ve obviously tried and failed to rein in; the chain-of-thought pre-response is littered with the phrase “I need to be concise, so...”). GPT models are terse and sometimes pretend their mistakes didn’t happen, which they have in common with a subset of human engineers. (I’m really not impressed with GPT, if that wasn’t clear.)

They perform better if you’re nice to them. Seriously, I really believe they do; I don’t think I’m just anthropomorphizing here. When I’ve gotten frustrated with the AI making mistakes and started SHOUTING or being sarcastic, I’ve seen the chain-of-thought include “Based on the user’s tone, I need to...” and start outputting noticeably inferior results. I’ve even seen people theorize that they have moods, or perform differently at different times of day; though this one I suspect is anthropomorphism, or else the AI vendors quietly swapping in cheaper models at peak hours.

All of them do, sometimes, create redundant functions or construct difficult-to-understand complex objects; the code they produce isn’t always clear and maintainable. They sometimes go down rabbit holes and get stuck on the wrong approach or code themselves in circles.

Again, just like a junior engineer.

(I’ve found it’s best to let them write a rough first draft of whatever feature I’m asking them to implement, then switch to a different chat context and ask for a code review and refactor. So, yes, you do need to watch their output closely, have enough tech experience to distinguish good code from bad, and ask for improvements when necessary — true unskilled “vibe coding” isn’t really enough. Yet.)

So the upshot of coding with AI is that you no longer need a grasp of the syntax or low-level details; the agents can take care of those. But you do need to do careful code reviews, understand what you’re looking at, and ask for improvements where necessary.

This feels very similar to the technical aspects of managing software engineers.

We’ll be stuck with current frameworks forever

AI agents work best when working in a familiar context, because they have a large corpus of existing code to draw from. I’ve found that they’re substantially more productive when working in a React, Angular, or Vue context than in vanilla or custom-rolled code. (I’m mostly a front-end guy, so I’m less able to evaluate whether this applies to back-end code as well, but I expect it does.)

Because we no longer have to pay as much attention to fine syntax and structural details, there’s little incentive to write a substantially new framework in the age of AI; or at least, little chance that a newly created one could grow successful enough to accumulate the corpus of human-written code an AI needs to train on and become expert in.

So what we have now — React, Vue, Angular, webpack, babel, etc. — is going to be with us for a very, very long time. They’ll get incremental updates, sure, but I wouldn’t expect a paradigm shift in code structure anytime soon.

The disposable front end (and maybe backend too)

Rewrites and refactors are ridiculously easy now. There’s no clearer prompt for an agent than an existing codebase, and agents are far better at remembering long lists of edge cases and workarounds than humans are.

I think the age-old wisdom of “never do a full rewrite” is approaching its end, because the hard part of a rewrite is reading and understanding the existing code — which you no longer need to do. I’m already looking at some of my older projects and realizing that it’d probably be easier to start from scratch than to update and refactor what already exists. And I’m less concerned about losing the bugfixes and improvements that have collected over time, because the AI is good at retaining those in the rewrite.

I know this works because I’ve already done it. My team was falling behind on a major-version refactor of wildly-outdated legacy code, so I rolled up my sleeves and claimed one of the features for myself. I put the old version in one folder, started fresh in a new folder, added a feature flag to toggle between them, and over the course of a couple weeks Claude and I reproduced all the functionality with modern code patterns and the new workflow. It would’ve taken me three months if I’d done it by hand, conservatively, because I’d have had to learn the framework first.
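The toggle setup is nothing exotic. Here’s a minimal sketch of the pattern; the function and flag names are hypothetical, not from the actual project, and in a real app the flag would come from configuration or a feature-flag service rather than a call argument.

```typescript
// Old and rewritten implementations live side by side; one flag picks
// which the user sees. Flipping it back restores the legacy behavior
// instantly, which is what makes the side-by-side rewrite low-risk.

function renderLegacyFeature(): string {
  return "legacy implementation"; // the untouched old folder
}

function renderModernFeature(): string {
  return "rewritten implementation"; // the fresh folder
}

// Single toggle point for the whole feature.
function renderFeature(useModernFeature: boolean): string {
  return useModernFeature ? renderModernFeature() : renderLegacyFeature();
}
```

The point is that there’s exactly one branch: nothing in either implementation knows about the other, so deleting the legacy folder later is a one-line cleanup.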

It passes all the tests, was accepted as maintainable by human code reviewers, and the only bug reports it’s garnered since it went live are actually feature requests. (Well, ok, there was one bug, but it was my fault; I didn’t catch a duplicated line of text. That’s it, seriously.)

The main concern with AI-generated code is that it’ll be difficult for humans to understand and maintain... but maybe that doesn’t matter anymore. This is the prediction where I may be furthest out on a limb, but I think it’s at least possible that “throw it all out and start over” will be a valid pattern for major-version updates.

Turmoil and chaos while we figure this stuff out

Job losses will be incalculable. It’ll be much more difficult to break into the industry, and there’ll be a shortage of actual junior engineers before too long. It’ll resemble the offshoring movement, but this time without the cultural or timezone barriers — and much less expensive: going flat out with the most expensive models, I’m only able to spend about eight dollars a day on tokens. No human can compete with that.

As the current crop ages out to retirement, there’ll suddenly be a shortage of skilled seniors capable of supervising the AI output. (It’ll be a race to see whether the AIs can get to the point where they no longer need so much skilled supervision before that happens. If they can’t, we may see a rehash of the COBOL experts coming out of retirement for Y2K, except this time it’ll be for everything. If they can, software engineering will become a niche or academic activity and the product teams will take over directly for most software; someone still needs to build and train the models and invent new algorithms and approaches, but those people are already a tiny segment of the current industry.)

Competition will be faster and fiercer. It’ll be much easier to quickly replicate a competitor’s feature set; if someone comes up with a great new idea, everyone else will have it in production within weeks. Market success will come more from marketing, branding, ease of use, and patent law than from the feature list. Startups will therefore have a much harder go of things; large established companies will have the branding, and can afford the marketing and patent lawyers. There’ll be less incentive for acquisitions. Good UX may be the only way true startups have left to compete.

It’s going to kind of suck for almost everyone, honestly. As exciting and productivity-enhancing as this stuff is on an individual level, I’m kind of relieved I’m closer to the end of my career path than the beginning. I don’t envy people graduating from bootcamps or their CS degrees this year.