The year is 2026. The unemployment rate just printed 4.28%, AI capex is 2% of GDP (650bn), AI adjacent commodities are up 65% since Jan-23 and approximately 2,800 data centers are planned for construction in the US. In spite of the current displacement narrative – job postings for software engineers are rising rapidly, up 11% YoY. ... We wrote last week that we see the near-term dynamics around the AI capex story as inflationary, but given markets are focused on the forward narrative, we outline a more constructive take on the end state below. Before that, however, it’s worth reflecting that the imminent disintermediation narrative rests on the speed of diffusion.
The chart "Job Postings For Software Engineers Are Rapidly Rising" seems to show a rise from 65 to 71 for "Indeed job postings" from October 2025 to March 2025. That's a 9% increase. Then they inflate that by extrapolating it to a year. The graph exaggerates the change by depressing the zero line to way off the bottom and expanding the scale. This could just be noise.
The chart "Adoption Rate of Generative AI at Work and Home versus the Rate for Other Technologies" has one (1) data point for Generative AI.
This article bashes some iffy numbers into supporting their narrative.
The format is editable. The line chart seems always to be scaled so the minimum is at the bottom, but you can get the zero point by changing it to bars.
The options do seem a bit idiosyncratic, but I guess they are useful for the kind of data the site users usually look at.
Broken axes aren't the solution. Starting from 0 is, but nobody making graphs seems to understand that, or, in the case of journalists, they're deliberately trying to mislead their readers. I suppose readers enjoy being awed by dramatically changing graphs too.
It would also be nice to include a shaded band covering one standard deviation over a relevant period of time, to get an idea of how far outside normal the latest move is.
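Something like this, say with matplotlib (a minimal sketch; all the numbers are made up for illustration):

    import matplotlib.pyplot as plt
    import numpy as np

    # Illustrative data: an index of job postings over 18 months.
    months = np.arange(18)
    postings = np.array([100, 98, 97, 95, 94, 93, 92, 90, 89,
                         88, 87, 86, 85, 86, 87, 88, 90, 91])

    mean, std = postings.mean(), postings.std()

    fig, ax = plt.subplots()
    ax.plot(months, postings, label="Postings index")
    # Shade one standard deviation around the mean, so the reader can
    # judge whether the latest move is outside the normal range.
    ax.axhspan(mean - std, mean + std, alpha=0.2, label="±1 std dev")
    ax.set_ylim(0, postings.max() * 1.1)  # anchor the y-axis at zero
    ax.set_xlabel("Month")
    ax.set_ylabel("Index")
    ax.legend()
    plt.show()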
In my unhinged pipedreams, we'd have some sort of standard for conveying the data directly, so users could use browser settings to decide how to display it. There are like a dozen people in the world who would use it, but I bet they'd really, really enjoy it.
It's usually intentional, so they're not going to show more information that would reveal their narrative is weak or wrong. It's up to the reader to think critically, because journalists are constantly trying to fool them with true facts presented in a misleading way.
Readers are guilty too - they like to see wiggly lines wiggle. Nobody wants to see a graph that's just a horizontal line or shaded band. We want to peer harder at it to tease out any sort of signal that can tell our emotional brains "good" or "bad".
But it essentially shows the same thing: the covid overhiring boom and the post-covid layoff cycle are over, and jobs are rising again.
What’s absolutely mind blowing to me though…the idea AI isn’t causing software engineering jobs to collapse…which you would think would make people here happy…is something that makes software engineers upset??
It’s almost as if everyone here has married their identity to the idea they are victims of AI progress and any suggestion otherwise is ego destruction.
"What??? You mean the job market is expanding and the reason I can't find a job is…me? That can't possibly be true, I'm a genius, the data is clearly wrong!"
There was a change in US tax law that revoked the ability of software companies to fully deduct engineer salaries as an R&D expense in the year incurred, requiring them to be amortized instead, which massively increased the near-term tax liability for many software companies.
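For a sense of scale, a back-of-envelope sketch, assuming the 5-year amortization with a half-year convention that applied to domestic R&D under Section 174 (the company figures are made up):

    # Illustrative only: a software company with $10M revenue and
    # $8M of engineer salaries, at a 21% corporate rate.
    revenue, salaries, rate = 10_000_000, 8_000_000, 0.21

    # Before 2022: salaries fully deductible in the year incurred.
    tax_before = max(revenue - salaries, 0) * rate  # $420,000

    # After the change: domestic R&D amortized over 5 years, with only
    # half a year's worth deductible in year one.
    year_one_deduction = salaries / 5 / 2  # $800,000
    tax_after = max(revenue - year_one_deduction, 0) * rate  # $1,932,000

    print(f"Tax before: ${tax_before:,.0f}, after: ${tax_after:,.0f}")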
This is under-recognized by many folks. The full impact of that aspect of the 2017 TCJA was hard to predict when it was so far in the future, and when it hit, we were still dealing with the lingering economic impact of covid in addition to these deduction changes.
This was reverted for US employees in the OBBB, and companies can refile taxes for what they couldn't expense in the intervening time. I think the impact of this is generally overstated.
Well, yes, but we're still sitting at ~80% of 2020 levels. Perhaps it's just a hangover from 2022, perhaps the end of ZIRP, but it's still depressed relative to 2020.
There was a hiring bubble in 2022 just before the Fed raised interest rates. I'm not understanding what the mystery is.
The link you're responding to has the option to zoom out to 2020. If you scroll down to view the other related graphs, you'll find that they also use 2020 as the index starting point, because they're all tracking this hiring bubble.
Interesting chart that confirms hiring dynamics for SWEs don't have much to do with AI despite all the PR: in 2023 model and agent capabilities were quite limited, and now that capabilities are increasing, hiring is picking up. I hope more journalists will start to challenge that narrative.
While I like that you debunked the article... I want to hear an argument for where the SWE job market can grow in a post-Claude world. I might expect something like: "CEOs are naturally greedy. So after trimming the team, they recognized (versus "replacing" people with AI) that they could actually accomplish _more_ with more engineers, each empowered with AI."
But I do like folks calling out the OP for being AI spam.
I'm not sure whether it's AI spam, or somebody at an investment company who actually writes like that. It's an exaggerated version of the style in McKinsey reports.
They're addressing a very important question, and one for which there is surprisingly little hard data. It's too soon to try to see a trend from low-quality data. Three years of this data might be meaningful.
I don't work in IT, but I use and love Claude Code. What strikes me is that maybe the overall software job market cannot grow to surpass the post-covid peak, but any current professional software engineer has immensely valuable skills that can no longer be gained in the same way, if at all.
I would think the counterargument to the greedy-CEO argument is that AI breaks the former economy of scale in the opposite direction, towards hyper-specialization in business with small teams. In that scenario, as the economy grows with more and more businesses, current software engineers become the substrate for a new type of off-brand bargain CTO, as opposed to the current luxury-brand CTO sitting at the top of oversized companies. The bull market, then, is at that higher level, the one current software engineers step into.
Most likely though, none of this is true and 15 years from now it all shakes out in a way that none of us could have really predicted from our vantage point because the prediction would sound ridiculous with the information at hand.
Computing cost and reliability remain the bottleneck. AI agents are nowhere near smart enough to carry out tasks on their own. Add to that the fact that 95% of gen-AI pilots "failed" [1], or at least failed to improve the bottom line. Layoffs were never about AI; they were almost always about capex and correcting the pre-2022 overhiring. All CEOs are hearing in 2026 is, "I didn't get anything done, but the model hit the limit."
However, if meaningfully capable, locally deployable AI models arrive, that could change the computing cost equation.
It really depends on how you define a software engineer. If you mean software engineers doing what we do today, the market probably won’t.
If you just mean “people who make software in any capacity”, it will probably grow (or has already grown) via product, marketing, etc folks making internal tools with AI (which may not work out, we’ll see).
Presuming we keep seeing LLM improvements, SWE will move up the stack like they did in the past. They used to work directly with hardware and software. Ops folks sprung up to do the hardware, and SWEs do basically all software using abstractions over hardware. This will be another step up where SWEs no longer work directly on software, but rather on the tooling that writes software which they hand over to marketing, HR, etc.
Again, presuming this all works out the way the AI folks plan.
The world runs on software. AI makes it easier to create more software, but it still requires humans to keep it running and to decide what to do. Maybe each individual project will need fewer pure coders, but there might be a lot more projects?
As long as software engineers are needed to leverage AI (they can manage the output, refine the prompts, check the BS), there is plenty of software to write, and not having SWEs just means less of it gets written.
Does any serious SaaS company's HR use Indeed? Whenever I hear that's the source, I immediately question it, because the companies I look up use Ashby, Lever, Greenhouse, Jobvite, Dover, etc.
edit: nvm they probably pull in results from these ATS
Is it not just the same as when people suddenly started having "an ask"? It's some kind of in-group speak that you adopt just to show that you are with the times.
I believe this wording originates from the stock ticker machine and the ticker tape, which would "print" the "latest" values of stocks, interest rates, etc.
Personally, I prefer vibe coding in the sense of stitching things together at the function-to-method level.
Unlike people who take the extreme position that vibe coders are useless, I do think LLMs often write individual functions or methods better than I do. But in a way, that does not fundamentally change the nature of the work. Even before LLMs, many functions and methods were effectively assembled from libraries, Stack Overflow snippets, documentation examples, and copied patterns.
The real limitation comes from the nature of transformer-based LLMs and their context windows. Agentic coding has a ceiling. Once the codebase reaches a scale where the agent can no longer hold the relevant structure in context, you need a programmer again.
At that point, software engineering becomes necessary: knowing how to split things according to cohesion and coupling, using patterns to constrain degrees of freedom, and designing boundaries that keep the system understandable.
In my experience, agentic coding is useful for building skeletons. But if you let the agent write everything by itself, the codebase tends to degrade. The human role is to divide the work into task units that the agent can handle well.
Eventually, a person is still needed.
If you make an agent do everything, it tends to create god objects, or it strangely glues things together even when the structure could have been separated with a simpler pattern. Thinking about it now, this may be exactly why I was drawn to books like EIB: they teach how to constrain freedom in software design so the system does not collapse under its own flexibility.
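To put a rough number on that ceiling (purely illustrative; the ~10 tokens per line and the 200k-token window are assumptions):

    # Back-of-envelope: when does a codebase stop fitting in context?
    tokens_per_line = 10      # rough average for typical source code
    context_window = 200_000  # tokens; varies by model

    codebase_loc = 60_000
    codebase_tokens = codebase_loc * tokens_per_line

    print(f"~{codebase_tokens:,} tokens, "
          f"{codebase_tokens / context_window:.1f}x the window")
    # ~600,000 tokens, 3.0x the window: the agent only ever sees a
    # slice, which is where human-drawn boundaries start to matter.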
The models are improving. The software that harnesses them is also improving. It wasn't that long ago that the models were quite bad at a lot of the tasks that they are excelling at today. I do agree there's probably a ceiling to what we can get out of these, but I also don't think we have quite hit that point yet.
I agree with what you said. And perhaps my belief that “people like me are still needed” is just a desperate form of self-persuasion.
If AI replaces everything, then I become unnecessary. So maybe I am simply trying to convince myself that developers like me are still needed.
That said, realistically, I still think there are limits unless the essence of architecture itself changes. I also acknowledge part of your perspective.
Those of us who are not in the AI field tend to experience AI progress not as a linear or continuous process, but as a series of discrete events, such as major model releases. Because of that, there is inevitably a gap in perspective.
People inside the industry, at least those who are not just promoting hype, often seem to feel that technological progress is exponential. But since we are not part of that industry, we experience it more episodically, as separate events.
At the same time, capital has a self-fulfilling quality. If enough capital concentrates in one direction, what looked like linear progress may suddenly accelerate in an almost exponential way.
However, even that kind of model can eventually hit a specific limit. I do not know when that limit will arrive, because I am not an AI industry insider. More precisely, I am closer to someone who uses Hugging Face models, builds around them, and serves them, rather than someone working on AI R&D itself.
> "people like me are still needed" is just a desperate form of self-persuasion.
No, no it's not. I've seen what a "PM armed with an LLM" will do. Trust me, if you're a decent enough full-stack software engineer who can take an idea and run with it to implementation, you'll have a leg up over the PM who has the idea but no clue how to "do computers".
Most of what these PMs can produce nowadays turns boardroom heads, sure. But it's just that: visuals and just enough prototype functionality to fool the people you're demoing to. I've seen enough of these in the recent past.
Will there be some PMs that can become "software developers" while armed with an LLM? Sure!
But that's not the majority. On the other hand, yes, there are going to be "software developers" who will be out of a job because of LLMs, because the devs who were full-stack and could take an idea from 0 to 1 with very little overhead even in the past can now do so much faster and go further without handing off to the intermediates and juniors. They mentor their LLM intern rather than their intermediates and juniors. The perpetual intermediate devs with 20 years of experience are the ones who are gonna have a larger and larger problem, I'd say.
The Staff engineer who was able to run circles around others all along? They'll teach their LLM intern into an intermediate rather than having to "10x" a bunch of perpetual intermediates with 20 years of experience.
I agree with you overall, yet there’s one flow that works for me.
Instead of speccing out a feature, I let PMs vibe code it.
I then have the exact reference I need to build.
Maybe the LLM one-shotted it the right way, maybe it needs fixes, maybe some fundamentals are misunderstood; in any case, it's easier for me to know what I need to build, for the PM to be aware of some limitations (the LLM does the job of pushing back and explaining), and overall for us to have to-the-point conversations.
This is somewhat orthogonal to what you said, since you focused on dev seniority, so that part stands true.
But I think “PMs armed with an LLM” can, when properly used, add a lot of value to the dev process.
> I agree with you overall, yet there’s one flow that works for me. Instead of speccing out a feature, I let PMs vibe code it. I then have the exact reference I need to build.
Like BDD, but with something more accessible than Cucumber. I'm totally here for that.
It would be nice if people also committed their initial prompt and chat session with the LLM into their codebase. From a corporate standpoint, having that would be excellent business logic as code, if the code is coming from a PM or a stakeholder on the business side of the house. From an engineering standpoint, it would be an excellent addendum to the codebase's documentation.
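As a sketch of what that could look like (the session path and transcript format here are hypothetical; every tool stores these differently):

    import shutil
    from datetime import date
    from pathlib import Path

    # Hypothetical: wherever your agent CLI saves its latest transcript.
    SESSION = Path.home() / ".agent" / "sessions" / "latest.json"
    DEST = Path("docs/sessions")

    def archive_session(ticket_id: str) -> Path:
        """Copy the latest chat transcript into the repo, named by ticket."""
        DEST.mkdir(parents=True, exist_ok=True)
        out = DEST / f"{date.today()}-{ticket_id}.json"
        shutil.copy(SESSION, out)
        return out

    archive_session("PROJ-123")  # then commit alongside the code it produced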
FWIW, BDD and frameworks like Cucumber don't work at all in my experience. The people who'd need to fill these out don't do it properly (they can't), and then we devs are stuck with brittle, un-debuggable stuff that's worse than if we had just used regular code to encode what we understood from them.
It's the same reason (most) PMs armed with an LLM still won't get anything usable done. They can't do it properly. They still need devs. But the gaps are shrinking. A few PMs could get stuff done with Cucumber, could wireframe UX with previous tools, and can now do so much more easily and better with an LLM.
> It would be nice if people also committed their initial prompt and chat session with the LLM into their codebase
I doubt you'd want this. It's a chat session for a reason. It's gonna be a huge wall of text, especially if you mean to include all the internal prompting the LLM did while it was working. You'd also have all my "no dude, stop bullshitting me! I told you to ignore X and use Y and to always double-check Z and provide proof".
It would only "work" if every single piece of a feature was 100% written by the LLM from a single, largish, well-defined prompt: the LLM works for a few hours and out comes the feature. And even then you have no reproducibility (even if you turned around and gave the same prompt to the exact same model; never mind retraining, newer models, system prompts, etc.).
There are ways to play around the single wall-of-text issue.
Mostly, git lfs.
When it comes to “no dude stop etc etc” … that is valuable information. You can extract that and put down rules for agents so that you stop repeating it each time.
Same can be done at PR, so that you can review not just the code but also how you got there.
It's trivial to go from a session to a nicely polished HTML page with a side-by-side conversation.
If you want to try, username at gmail, I have a private repo with it running.
I value criticism; sorry for the plug ;)
Oh, on the different-models side, I don't see the advantage of reproducibility, or rather, I don't think I understand what you mean. Can you help me see it?
I don't understand how "wall of text" is related to git large file support. The wall of text is a problem for me, the human. Sure, there are ways, like "be brief", caveman etc. In a large repo with lots of different people over time, I can't see how it won't just be wall of text again. It's just too much. TL;DR. And coz DR, the LLM will have buried bullshit in that text, which future session might read and "believe".
As for "no dude", no that can't be put down into rules. Not all of it anyway. We have stuff encoded in the repo wide md file, I have my personal one etc. and the various agents still don't do what we tell them to in all cases or a new model comes out and it no longer works. For example, for finding the root cause of a bug, it's very important to have actual proof and references. It's getting there w/ my instructions in the .md but it doesn't always work and I do have to "dude" it from time to time.
Is that back and forth valuable to have in files that are going to be part of the repo? I very highly doubt it. Having new rules that came out of the back and forth in a checked in AGENTS.md, sure, that is valuable. Or nowadays in individual "skills", like a "root cause finder" skill that can have very specific instructions about being thorough in proving its "found the smoking gun" BS ;)
I've seen enough PR descriptions created by the agent. A fluffy wall of text that looks good but is factually wrong. Seen it way too many times. Too many people just look at whether it looks good and then pass it off as truth. I'm tired of it, and making that into "nice HTML" doesn't make it better. It just makes it look even nicer, not more true.
Re: reproducibility. My parent poster (and I guess you as well) wanted to have the prompt/conversation as "documentation". I don't see why that would be helpful. The only reason I could see would be for "reproducibility", which you won't get with an LLM. I don't see why else, but do tell me.
What I can agree could be valuable are the "why"s. I.e. the stuff that already should have been part of the ticket/requirements document. If you want to store that inside the repo as text files, instead of the original tickets or documents, that's fine of course. But I don't see how a "recording of how the code came to be" is valuable. It's like having a recording of all my IDE keystrokes and intermediate code state in pre-LLM days. Not valuable. What's valuable are the requirements and the outcome (i.e. code). Not "the thing in between".
Now don't get me wrong. Recordings of how people code/use their IDE can be a valuable teaching tool. Both as good and bad examples. And the same can be true for an agent coding session.
i misunderstood "wall of text" (i was thinking about bloating repo with it), my solution to understanding is just to create ad-hoc tools to parse the json
i coded a web ui with simple toggles: show me what user said, what llm said (nice to see what I was thinking about, nice to see how LLM came up with solution X, you get tools calls, maybe it found something i didn't think about or viceversa)
you can search/grep (.ie: did i consider idempotency when i build feature X? open session, search/grep idempotency)
you can, up to some point, resume the conversation (yes i know, cache busting makes some usages of this impractical, but in general resuming and asking "when we did this, did we think about that" tends to work... let's say that research is ok, time travel, meh)
overall, one of the advantages of LLMs is to be able to direct then ad data for insights, via standard CLI tools, via specific prompts, or building some mini tools (yeah vibe coding is fine sometimes)
whatever my question, if i have data i can have an answer
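For example, a mini tool along these lines (assuming, hypothetically, sessions stored as JSONL files with role/content fields; real transcript formats differ):

    import json
    import sys
    from pathlib import Path

    def grep_sessions(keyword: str, session_dir: str = "docs/sessions"):
        """Print every stored message that mentions a keyword."""
        for path in sorted(Path(session_dir).glob("*.jsonl")):
            for line_no, line in enumerate(path.read_text().splitlines(), 1):
                msg = json.loads(line)  # e.g. {"role": ..., "content": ...}
                if keyword.lower() in msg.get("content", "").lower():
                    print(f"{path.name}:{line_no} [{msg.get('role', '?')}] "
                          f"{msg['content'][:120]}")

    if __name__ == "__main__":
        grep_sessions(sys.argv[1])  # e.g. python grep_sessions.py idempotency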
LFS helps with the second aspect (buried bullshit). Unless you smudge, you have a pointer, and that is just 3 lines.
You need to learn some ergonomics, but ok, some of us learnt how to use Jira XD
Taking your position a bit further: yes, committing chat sessions implies that you also need to review them so that bullshit doesn't filter through. Mileage varies based on your personal preferences, which project you are working on, and many more heuristics.
Some will find it boring, some will think it's good project maintenance, all should be able to find a way to handle this based on their preference.
It is also nice to point out that cleaning up bullshit doesn't need to happen at merge. With LFS blobs stored separately, you can have side flows helping you out without clogging your CI pipelines.
"no dude"-> rules
you can put down SOME rules
usually this happens to me at PR. I am tired fo saying "you should always check X", so i bolt it down "someplace".
I am running the usual motions i suppose most of us try to adopt: put this down in agents.md, in folder x or y, in path-scoped rules using agents, in memory files (i am exporting/importing those too), in subagents that review code before PRs.
in the end it's an unsolved problem at large, but
1. hopefully it will get better, my feeling is that it's just a cambrian explosion, and the fittest will survive... (also, owning the harness should help, i suppose .. i use claude code :D )
2. in a team, having personal styles surface is valuable. "dude don't do that" is quite often .... design. When rules go in the repo, at least we can find an agreement in person, and at PR is not about linking a document you read at onboarding, but finding out why the agent did not respect the rule. To me, that is more grounded in a tool.
3. rules are ... not static? We change our minds? We get better at things? We want to experiment? I am not advocating for a perfect rule system that replaces me, but for a good enough one that removes cruft from my daily job.
I think my approach is actually helpful when it's time to find root causes (YMMV). Via tools that parse sessions, you can see at finer granularity when a specific portion of code was written: during that bit of the conversation, the user was worried about X and asked the AI to do Z, and the AI read this and that file, "thought" this and that, and wrote that piece of code.
Maybe the user was making wrong assumptions, maybe the LLM did not read the correct files or instructions; in any case, you have a better tool for investigation.
It's up to you to decide whether to use it, and whether it leads to just solving the bug or also fixing instructions, etc. I am just saying that it actually helps to have some record of the context in which this change made sense.
> Fluffy wall of text that looks good but is factually wrong.
It might be good or bad, right or wrong, but what is in the sessions is the truth of what happened.
PR descriptions are horrible, I share that feeling with you, but having the story of how the thing happened is just not the same as a "final summary of what we did in the past X hours".
As a sideline: LFS doesn't really pollute your repo once you learn its ergonomics. Having chats in LFS also lets you approach this.
Reproducibility...
To me those conversations are basically the history of decisions taken while implementing. They are documentation.
The real problem with docs has always been that no one likes writing them, nor was it easy to standardize them.
If you just record/log, there's no extra effort needed, and once the logs are there, tools and LLMs are pretty good at helping us extract insights.
I am also assuming there is a correlation between quality in the conversation and quality in the code. I know, I'm being hand-wavy, but overall I think critical thinking is what makes code better, and being able to see whether it has been applied can be a good proxy.
I ask forgiveness in advance: I am not going down the rabbit hole of quantifying quality, etc. It's a broad statement that should be taken with a grain of salt.
If you want to go abstract, you can think of coding as going from thoughts to zeros and ones. We have high(er)-level languages that help us organize thoughts so that we can better keep them within our cognitive flow/load.
LLMs are an upper layer that scrambles the code and makes it more difficult to grasp.
But the reasoning behind the code is now available, and quite easy to parse.
I think this is the core point to me.
Code is an intermediate artifact between thinking and bits.
Now we have a second artifact: the conversation/decision that led to that code.
Why are we not storing it?
Disclaimer:
I am, of course, mildly in love with my own project and ideas, so possibly I like this too much just because I built it. IKEA effect or whatever.
Haha, OK, we both tend to "text-wall" it seems, so neither of us should complain about LLMs. Or I guess: now we know how everyone always felt reading our stuff :P
> "no dude" -> rules
Yes, I have these. That's how, when I have it investigate, it outputs files and line numbers, for example when the investigation is in our code base. But it still makes up stuff all the time. You need spidey senses that tingle, and many people don't have them.
Just very recently, I saw a PR comment on why someone chose to do something in that particular way and what the other, bad options would've been, i.e. justifying their choice (at least they did do the "calling out" part). I had to comment about how none of that made any sense to me and ask why we didn't just do "other thing Y". Well, it turns out the AI had misled them; they believed it, and it went downhill into a rabbit hole from there. I do believe that with the right spidey senses, even in an "unknown situation", it's entirely possible to come out the other end. But many if not most people succumb to the AI's nice, "sounds true" type of language.
> As a sideline: LFS doesn't really pollute your repo
LFS doesn't. Walls of text do, whether you use LFS or not. I.e.
> no extra effort needed, and once the logs are there, tools and LLMs are pretty good at helping us extract insights
Nobody's really gonna read all that. The only way to get through it is to use LLMs, e.g. through summarization. That doesn't solve anything though. LLM summaries are very often wrong. It depends on the text/conversation and the LLM, but have you tried having Slack summarize a thread? Ouch! I've also tried Claude making tickets from Slack threads. Ouch, but less so. Still needs polishing. And more time polishing it than it would've taken me to just type up the ticket myself. What LLMs are good at is taking the actual "meat" you put down and "fluffing it up". But sorry, I'd rather just have the meat and skip the fluff entirely.
Most LLM-assisted bug reports, on the other hand, are huge walls of text with a low signal-to-noise ratio. I.e., essentially the old:
If I Had More Time, I Would Have Written a Shorter Letter
Famously, the first known instance in the English language apparently was a sentence translated from a text written by the French mathematician and philosopher Blaise Pascal. The French statement appeared in a letter in the collection "Lettres Provinciales" in 1657. It totally absolutely 150% applies to LLM use ;)
> critical thinking is what makes code better
Absolutely! And the issue with LLMs is that they tend to make it less likely for people to apply critical thinking. Even people who (I at least thought) applied it in the past. "Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT's Media Lab has returned some concerning results." https://time.com/7295195/ai-chatgpt-google-learning-school/
Btw, I write all of this as someone that has been coding exclusively w/ the use of Claude Code and Codex for more than 6 months now. On purpose.
I like this chat, if you want we can continue this privately (username Gmail).
You are bringing up valid points, and we have scars from the same battles.
Given all this, what is bad about preserving the conversation that led to the code creation?
It might be wasteful, sure, if it never gets used.
It might be bad, if it’s misused.
But in the right hands (or with the right tools) it holds value.
Presently, we might not have them (agree to disagree to some degree), but given enough time, won't we regret not having stored it, if better tools emerge?
There are many more angles.
I.e.: you mention damage to critical thinking. And I agree about it.
Yet some conversations are better than others on this aspect.
The conversation doesn’t magically make you develop spidey senses, but if I had to learn a new project/skill, wouldn’t a selection of conversations + code be better training material than code alone?
I tried to stay light so some terms are overloaded and some concepts oversimplified.
Hehe, same, this is fun and enlightening, both because of my own reflections in order to reply to you and in seeing your take on things.
I don't mix identities tho, so HN it must stay.
> what is bad about preserving the conversation that led to the code creation
The same thing that I'd find bad about mixing online identities ;) It's surveillance. The kind that I don't like and will avoid whenever I can. So I cannot in good conscience want to make everyone on the team put that in. It's like every single conversation ever being recorded forever and ever. Youthful sins "staying in Vegas" is a blessing, not a sin, so to speak. Maybe I'm just too old, who knows.
Now, "point in time" learnings from conversations: Very valuable indeed! Whenever I talk to team members when I catch something that was potentially "just believing the AI", it usually was and yes it would really be valuable to see their actual interaction with the AI. Maybe they still have it around and we dig together. What I also do is to show them how I do prompts to get the results I do get. Sharing and learning, definitely.
But nobody needs to commit my literal "WTF DUDE!" to git ;) Yes, yes I do swear at it and if they ever take over, I'm dead, they're gonna come for me. It's a fun outlet actually. I do not have to "compose myself" and write a very nice message as I would with an actual intern. I can just outright tell it what kind of BS it concocted yet again.
I absolutely understand why you and also Anthropic et al. would want my actual conversation data for learning, and I hope they honor their pledge not to train on our corporate accounts. Statistical models live off data like this. I'm not gonna give it up just like that. I'm fine fine-tuning the machine to my liking, making local or company-wide shared skills, absolutely.
Surveillance is everywhere you let it in. I'm sure you've seen the Flock posts on HN. Now think "a Gallup-type thing is set loose on your actual AI conversations to figure out if you should be fired". You swear at AI, you must be part of the next layoff. WTF? Why? Similarly, one of my besties at work and I always joked around in ways that, if someone not familiar with us overheard, they'd probably think we were fighting. We were having the time of our lives. But nobody would. It was all in an office or at lunch, and nobody would record us. But now translate that to in-writing, always-recorded "little outlets". You'd have to self-censor.
That's neither fun nor healthy. It's like the Covid/remote-work vs. in-office difference, if you ask me. For many, many years, working in offices, I'd come home, after way too much commute both ways, totally drained. Nothing left for the family. I'm an introvert, so just regular office life is draining. Covid was the best thing that ever happened to me, since we've been remote ever since. I can leave work and still have "social budget" left. It's so awesome. Why I bring this up: because working with the AI intern is similarly freeing. I literally have it work for me like it was an intern. But I do not have to be "careful", I don't have to be "nice", I don't have to be in "teaching mode and spend 3 hours on what I could've done myself in 20 minutes". I can just say "WTF dude! That's BS, adjust the skill so this never happens again" and a minute later it's done. In contrast, I recently spent 20 minutes talking to a "senior" someone just to get them to abstract to a higher level and answer the important customer-focused question on some problem, instead of doing a technical deep dive yet again.
Sorry, tangent </rant> :P
On the spidey senses: Well, guess what, this is still an economy where my and their skills matter. They swim in the shark tank or they sink. I'm not gonna do their work or their learning for them. I'll help them along to a point, but at some point they've gotta learn to outswim the shark (or, if you like the lion metaphor better, to run away from the lion faster than the next guy).
I stopped midway through reading to clarify something.
I don’t want your conversations :)
Anthropic has it and this is beyond me.
My plugin commits to your repo.
When it comes to keeping the "WTF DUDE" out of conversations, LFS gives you a neat trick.
You can edit LFS blobs independently of the git repo (different storage), so up to a point you can edit them out without touching git history (with caveats; it's a rabbit hole).
Also, I think the inflection point is making it public. Git helps: just "fork" the repo without LFS to publish code only, or with a "sanitized" LFS (it just needs a touch of tooling).
I am also shipping a hook that sanitizes secrets by default (because security) and can also be used for keeping parts of the conversations… "tidy".
I have built the "cleanup swearing" feature. Yes, sorry, it's LLM turtles all the way down if you want it automated, and extra cruft. But that's also OK?!? I have a concern, I want to address it, I need to put in some extra work…
I just want to clarify that privacy is my concern too and I have found that it’s not impossible.
I did not start coding it until I found out that there is a way to contribute to a repo without participating in the "sharing conversations" game. (Not difficult: it's your machine.)
I am not publishing the repo until I have had enough conversations like this to introduce different opinions in my line of thought, especially around non technical hurdles.
My biggest concern is “why the hell would I teach an LLM that much of me, knowing very well this is how I will automate myself away”.
But even then, it's either Anthropic doing it, or me (as coder, not plugin owner) AND Anthropic doing it.
I am not advocating to giving away my skills for free.
One feasible variation of this whole record-the-conversation idea is "commit code to the company repo, commit reasoning to MY LFS".
Why not? It’s my critical reasoning!!!
I understand you may not want my conversations, and I might believe you. You seem like a nice dude.
I don't want my conversations to be forever recorded. I need my private corner. As an analogue: I want to be able to talk to some guy at the office without there being listening devices that's recording me. I want to be able to shut a door and nobody else in the office can listen in. I don't want to be forever forced to have every single conversation ever in front of the entire office.
That's what me talking to my intern is. I'm not gonna spend time "sanitizing" a conversation. I won't trust an LLM (or your code/LLM prompts) to sanitize my conversations. Heck, me saying "WTF you stupid piece of electrons floating in the ether" is literally what made the probability machine take the turn that made it come up with a stroke of genius from its training data. Whatever is valuable (the outcomes, plans, requirements, system invariants, etc.) I'm entirely fine putting in the repo. But: I am the one putting it in the repo.
We do that at work w/ the "AI first" projects. There's a lot of documentation to help the LLMs that everyone including PMs and designers now are using be on the same page. Essentially a lot of the stuff that used to be floating around people's heads or in various other places like the ticketing system or wiki, is (supposed to be) kept inside the (or a separate "docs") repo.
Regarding automating away: Totally agreed, and models have come a long way in a short time, but they are still not there. And if "coders automate themselves away so that PMs can now code" is the thing, well, then I'll be the better PM who knows how to get the LLM to do their bidding better than the PMs who will "vibe themselves into a corner". Like, when we talk to our PMs and designers about how we make the AI know all these things so we can move as fast as we can, they generally just aren't comprehending, can't follow, can't replicate.
As for self-recording your own conversations and learning from yourself for yourself, the same way you learned more/better coding techniques for yourself: Yes absolutely and that's what I'm talking about. I do have a CLAUDE.local.md and I'm sure there's stuff in there that isn't just "personal preference" but actually helps me be better w/ Claude than others. I'm not sure I could tell you which parts those were though to be honest. Same way I try to teach some of my techniques to others. I gladly help them troubleshoot and they can learn from seeing me and how I come up w/ the stuff I come up with. Most people don't pick up on it or don't even pick it up when I explicitly tell them. Their loss. I guess some of this is https://news.ycombinator.com/item?id=48109460 ;)
What I'd love to see is videos of nontechnical folks using language models to create software.
When I use them myself, I just see them crushing it and think, this thing is now doing my job for basically $0, I am no longer economically relevant. But I've spent a lifetime learning to program, so it's possible I only get good results because of the way I think to prompt it.
I really can't get the outside view so I can't decide whether AI is going to make me homeless or not. I think we need the videos.
If you need comfort, just read the story of the week where a "technical" founder gave the LLM full access to their production environment and it wiped everything.
Oddly, devops seems to be the "last bastion" of our trade, as they seem to be the only ones pushing back against PM vibe-coded stuff. While those projects usually look aesthetically pleasing, they start to fall apart when met with devops requirements for environment variables, cybersecurity, etc.
I agree with you. So far, what I see is that AI amplifies an individual's output in many domains, but the value of that is 100% contingent on their judgment. It changes the economics of many tasks, but fundamentally it can't really help you if you don't actually know what you want, which describes a shocking number of people in the corporate world, where most are there for a paycheck and perhaps to pursue some social marker of "success".
I'm under no illusions about the goals of AI company execs to justify their valuations (and expenses!) by capturing a huge chunk of global employment value, or about the CEOs of many big companies whose financials are getting squeezed for all sorts of reasons and who are all too happy to jump on the AI efficiency narrative to justify layoffs that would have been necessary anyway. Also, AI will keep getting better, and it certainly could move up the food chain: it's already replaced a lot of what I did, and I assume capabilities will continue improving for a while even after model capabilities plateau, as we improve harnessing, tooling, and practice.
So yeah, it can replace a lot of what we do, but I'm not running scared, because every step of the way I've seen that software people are the ones who actually get the most out of LLMs. Sure, it can write all the code, so the job changes, but even as our workflows completely change, it's giving us more of an edge (if we're open to it) than it gives anyone non-technical. At this stage it still feels empowering on an individual level.
Now I do worry about the consolidation of power and wealth in a tech oligarchy, but that's an issue we need to deal with at a societal and government policy level. Essentially, I can see AI as having radically different outcome potential based on how it's governed. In one way it can be very empowering to small teams, and reduce coordination costs, and increase competition by allowing smaller groups of people to make more scalable companies. But it could also lead to unprecedented concentration of wealth and power if a small set of AI companies are allowed to capture all the economic gains. I don't think there are any easy answers, but I do feel hopeful that we can figure something out as a society—it certainly seems to be creating some unified sentiment across political lines that have been so polarized and divisive over the last decade.
That it amplifies output 1000x is the problem for our jobs. However, I do agree that developers with experience are needed to actually harness these tools. I've been able to do wonders with them, but I can't see a junior dev doing 10% of the work that I can do with them.
I have a more optimistic take. Those of us who have done it by hand for a while are armed with that experience. Yes you can just use an LLM to do everything now, but I think it's tough to supervise it on tasks that you've never actually had to do. Maybe that won't be as important as I think, but I think that I'd have learned a lot less in school if I just used an LLM to code everything.
Day to day, the resolution of our work is probably different. We're zooming out and spending more time strategizing and managing the AI tooling. This might mean fewer jobs. It might also mean we just get more done.
I don't work on AI directly either, but I'm finding a lot of value in learning the new tooling. I think being able to competently leverage these tools is going to be a key skill from now on.
I'm with you at the "bargaining" phase of AI grief (sure AI is useful but it won't replace me!).
I think my reasoning is you still need a tech person to translate from feature to architecture. AI can do both but not everyone knows they need the latter.
It seems almost certain to me that AI is going to increase the surface area of what it’s possible for programs to do and therefore massively induce demand for more programs
I think the part that remains to be seen is whether a sufficient percent of that new work will be done by humans such that overall demand for the humans doesn’t collapse
Personally I think us humans will be ok for at least a few more years
> It seems almost certain to me that AI is going to increase the surface area of what it’s possible for programs to do and therefore massively induce demand for more programs
Have we seen any of that yet? If anything, the most popular modern projects out there are all AI tooling, basically recursive software to help with using AI. Have you seen any truly novel software that solves new problems? Even before AI, I've been worried that most of the problems that were possible and viable to solve have run dry, leading to tech chasing hype and the next big thing over practical issues that have already been scooped up by someone else. What new problems have been added?
I think part of the motivation for the big spend by the big players is to choke out Anthropic and OpenAI. They're going to make sure they're the only ones scaling up the huge capacity they expect is needed. To meet demand, Anthropic is just going to have to pay the cloud bill to somebody, which will really hamstring its ability to profit.
Yes, for sure. Even if we stopped today, the amount of almost-free software that can be produced with current models will improve the world a lot as the knowledge of how to use it propagates to more people.
The problem with this argument is that it shouldn't take years for these developments to come about anymore. The world is incredibly interconnected via the web, which also explains ChatGPT's explosive growth. To claim people aren't trying would be comical: where there's an opportunity to generate economic profits, competition will be intense.
The best we have external to the model producers is Cursor and OpenClaw, lmao. The gap between hype and reality is disgustingly large.
I don't think you're correct. Just think of using any computer system in your business, like a spreadsheet to keep track of inventory. From the moment spreadsheet software became available to the moment most businesses were using it, how many years went by? I knew businesses that should have had computerized processes but didn't, in like 2010. So if you apply the knowledge that even basic good things take a long time to truly spread and permeate, then even if the tech stopped advancing today, the current benefits would take years to fully materialize.
There's many "little software tools for X" that now any business owner with a few hours can create. I know many people improving their small businesses for free like this and helped a few friends making their lives easier with "small software" assisted by AI. People that would never afford 20 SaaS products for this and that, and would never go through the hassle of hiring someone to do it custom. And they will be able to do this even if the bubble pops and all the labs go bankrupt by just setting up a little gpu with a local model.
I dunno about hype; I just know I have several friends running self-made custom software "in production" for small things, with almost no help, for their classic "offline" businesses.
Yes, but I don't think having LLMs only write functions, and doing the architecture yourself qualifies as "vibe coding": rather "AI-assisted engineering" (which is what I do).
Vibe coding, to me, means having an LLM, with or without agents, do everything after an initial vague prompt. Which is why "anyone" can vibe code (because anyone can write general hand-waving imprecise instructions). This inevitably results in pointless demos and/or unmaintainable monsters.
It's not necessarily better, but it's certainly good enough, if you're already used to distributing work to different people.
The scale of the code doesn't really matter that much, as long as a programmer can point it at the right places.
I think you actually want to be really involved in the skeleton, since from what I've seen the agent is quite bad at making skeletons that it can do a good job of extending.
If you get the base right, though, the agent can make precise changes in large codebases.
Thinking about it, I think what is interesting about the output of agentic coding is this:
I mostly agree with the general tendency that it starts to break down as the context grows. But there is also a difference in how people evaluate it. Some people say agents are good at building the skeleton, while others say they are better at extending an existing structure.
I think this depends on the setup, and it is ultimately a trade-off.
In my case, I usually work on codebases around 60,000 LoC. The programs I deliver are generally between 60,000 and 80,000 lines of code. I think I can fairly call myself a specialist at that scale, since I have personally delivered close to 40 projects of that size.
At that scale, I felt that agentic coding was actually very good at building the initial skeleton.
I do not know what kind of work you usually do, but if your work involves highly precise, low-level tasks, then I can understand why you might feel differently.
In my case, I mostly assemble high-level libraries and frameworks into working systems, so that may be why I experience it this way.
That's why we started to force our developers to take ownership of and responsibility for what their AI ships to other developers for review. It's stunning how the amount of code decreases and the quality of deliveries improves when developers put in extra effort to iterate on reducing the complexity AI introduces. In a lot of cases you can vibe code that too, if you understand the output and guide your AI along the path.
I think it's just about the context it's working in.
A million lines of HTML are infinitely more conducive to a language model working in them than 10k lines of complex multithreaded low-level code.
A lot of coding is just rehashing the same concepts in slightly novel ways, language models work great in this context as code gen machines.
The hope is that we can focus our efforts on harder problems, using language models as a tool to make us more productive and more powerful, and with the advancements open weight models have made, also less reliant on big tech companies to do so.
I find LLMs are good at skeletons, but only if you are meticulous about writing down what you want before you start. Then give that text to GPT 5.5 Pro, and be prepared for a number of iterations.
I agree. Language models are good at codegen, in some sense they are just another codegen tool, except instead of transforming a structured language (like a config file or markdown) into code, they can convert natural language into code. Genuinely useful for the repetitive boilerplate grunt work. If that's all you do, then I can see fearing getting replaced. Thankfully by handling the drudgery, it frees us up to work on more complex and cutting edge work.
Like, it's not surprising that the developers who frequently talk about 90+% of their work being delegated to LLMs are web developers. That is a field with very little innovative or complex code; it's mostly grunt work translating knowledge of style rules and markup into code, or managing CRUD. I'm really thankful I can have a language model do that drudgery for me.
But compare that to, e.g., writing a multithreaded multiplayer networking service in Rust: they fall woefully short at generating code for me. They can be used in auxiliary aspects, like search or debugging, but the code produced without substantial steering is not usable. It's often faster for me to write the code myself, because what's required is not a substantial amount of low-impact code, but a small amount of complex, high-impact code that needs to satisfy many invariants. It's fast to type; the majority of the work is elsewhere. At the end of the day, they work really well for typing the boilerplate, which is much appreciated.
Try to get an animation just right without human guidance. It's difficult to give the agent feedback on its work. With browser MCP, the agent can only take screenshots and see a single frame of the animation. Agents are also quite slow with browser handling: if the animation starts when a button is clicked, it's usually over before the agent has taken the screenshot.
All behavior of backend code can at least be described with automated tests.
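That's the asymmetry in a nutshell: a backend invariant fits in a few lines of test an agent can run in a loop, while "the animation feels right" has no executable oracle. A toy example:

    # Behavior an agent can verify mechanically, e.g. with pytest.
    def apply_discount(total_cents: int, percent: int) -> int:
        return total_cents - total_cents * percent // 100

    def test_discount():
        assert apply_discount(1000, 100) == 0  # full discount hits zero
        assert apply_discount(999, 10) == 900  # rounding stays integral
        # An agent can iterate until this passes; there is no such
        # check for "the button eases in smoothly".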
Yeah, like, I don't mean to demean front-end work, because there is a lot of stuff that isn't grunt work or boilerplate, especially in the artistic fields, or UI that is actually really complex. I actually made my initial career off of UI/UX. And a lot of the CRUD backend stuff really is literally just shuffling data in the most boring and replicated way as well.
I guess my point is more that we have a lot of code being written that probably should have been automated already in some way, but it was simply more practical to just have people write it. I don't see much harm in automating it with AI; the people doing the grunt work are largely capable of more, but at the end of the day someone has to dig the ditches. Now that we have a backhoe, they can go do more interesting stuff.
However, when I see people who were largely writing meaningless boilerplate now claiming that software development is dead because they've been automated, I think it's important that people are realistic about the different contexts in which AI is either useful or not. There is a wide range of experiences: some people believe AI completely automates their jobs, others find it mostly useless, and of course most people are somewhere in the middle. They're all correct, but the context is crucial.
As far as I'm concerned it's just another tool in the toolbox.
I've found the LLM limitation of codebase size is removed with correct design of the codebase.
If you organize your product into a collection of appropriately scoped libraries (the library is the right size for the LLM to be able to comprehend the whole thing) then the project size is not limited by the LLM comprehension.
Your task management has to match: the organization of your ticketing system has to parallel the codebase.
With this the LLM can think at different scales at different times.
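One way to make that sizing constraint concrete (a sketch; the tokens-per-line heuristic, the budget, and the libs/ layout are all assumptions):

    from pathlib import Path

    TOKENS_PER_LINE = 10   # rough heuristic
    BUDGET = 150_000       # stay well under a ~200k-token context window

    def check_library_sizes(root: str = "libs") -> None:
        """Warn when a library outgrows what an agent can hold in context."""
        for lib in sorted(p for p in Path(root).iterdir() if p.is_dir()):
            loc = sum(len(f.read_text(errors="ignore").splitlines())
                      for f in lib.rglob("*.py"))
            tokens = loc * TOKENS_PER_LINE
            status = "OK" if tokens <= BUDGET else "SPLIT ME"
            print(f"{lib.name}: ~{tokens:,} tokens [{status}]")

    check_library_sizes()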
Of course you can break things down into the right atomic units where a code gen machine becomes useful. Because you are an expert. People who aren't literally have no clue.
Any task, no matter how complex, can be broken down into units where a language model can output useful code. The more complex the task, the smaller the units. At some point it's faster to write it yourself; that's the limit of the codegen.
I still don't see how it's anything else than a tool that experienced and knowledgeable workers can use to save time and energy to focus on the hard parts.
Yes, that is all true. LLMs are excellent at producing a single function, but decision-makers extrapolated that capability into thinking LLMs can work on their own with minimal or no supervision. That's not going to be realistic for a very long time.
What do you base this on? For me it is almost impossible to guess what fits into the context of an LLM. Sometimes trivial tasks fail; sometimes quite complex things get one-shotted.
I know this is anecdotal but after almost 2 years of no activity, I have been absolutely hounded by recruiters for nearly a month. They show up in my LinkedIn feed and I get multiple emails a week asking to interview. What in the world changed? It doesn't look like the job market's improved much. In fact I see more layoffs than ever before.
I have an alternate explanation. With the rise of AI recruiters, there is no cost to them to contact you. They don't even have to do the search and compose the message. They're basically reaching out to everyone.
At first I found the AI recruiters impressive, because they tricked me. I thought the recruiter had really done their homework and read my profile deeply!
Now I know that an AI is reading it, picking random things to highlight, and then composing a message. But they're not real. They're just trying to connect to you so that they can say you are in their network when they go to sell their services to hiring managers.
Basically, I anticipate a 3x rise in software engineering salaries in the next five years if the dumb "oh, coding is a solved problem" rhetoric continues, because of the collapse on the supply side.
And the hockey-stick trend of new code being written. There is literally no better time to be studying CS than today, yet the average person believes the exact opposite.
What I see is an immense number of bugs and security issues that can be found much more easily now than before, because of AI.
I also see less trust in using AI for direct coding, because there are many examples of code additions that breach the safety of software in enterprises.
Solving this requires actual humans to do the coding. With that, it is probably true that more use of AI in coding leads to more SEs being required to oversee it and ensure security.
I personally see the big benefit of AI tooling in testing, security checking, documenting, etc., rather than in coding itself.
Using coding agents, it feels like always working under a blanket where you cannot see beyond it, and there is this thick mask blocking you from knowing what's going on.
it unfortunately projects a very bad impression that things can be built very quickly and that systems can be designed in a robust and maintainable manner. But even with the best models that I've used, that is not true. When the number of features reaches a decent figure, the hallucinations grow, and more often than not, we have no idea what the AI agent is writing. Pull requests become meaningless because there is too much code to review, and AI is handling it anyway. So it's basically taking the eyes off engineers in general. There are many bugs waiting to be uncovered. Compare this scenario to the absence of all these coding agents. All engineers would know the codebase very well, how the flows happen, and how to do a deep dive. I have a very bad feeling about this unproductive direction in general. It's good for writing small modules, but companies seem to be expecting to churn out a lot of code in a very short amount of time.
An overwhelmingly large number of engineers have close to zero satisfaction with their work. A lot of firefighting happens across the board. AI is used ubiquitously for reading and writing documents, and wherever hallucinations occur, critical information gets missed. It's not a surprise at the end of the day, but this entire situation has put us in a very messy place overall.
90% of the job ads I see have the word "AI" in them. It can be a startup hoping for a get-rich-quick opportunity from the AI hype, or an established company.
Both types expect you to spend as many tokens as possible so that the AI bubble doesn't burst (presumably because leadership has a financial interest in this).
Your actual productivity isn't important. If you point out that you're much faster writing code on your own in 90% of cases, you will be told that you're not good at AI, that you're not prompting it correctly, and that generally you're not AI-native and will be left behind. To be precise, token usage is a performance metric, so you'll be let go if Claude is not running continuously 8 hours a day.
I'd like to know how many places have mandates to write 100% of your code using AI, as well as to max out your AI agent's plan. For some reason nobody talks about it even though I know several companies around the world that are forcing this on their employees.
If you're looking for a job, you don't have a choice; it's better to have an income. But if you're looking to change jobs to get away from AI, to actually be productive and gain experience, then it's a very bad job market.
Fashion is when developers jump on the next web framework because they got bored of the old one.
But when you get fired for not enough token usage, that's something else. When bosses start demanding you write 100% of your code using AI, and then a few months later Anthropic reports a 30% increase in usage, that's not fashion. People who invested in AI are putting a lot of pressure on developers to ensure their investment pays off.
It feels like when Java and object-oriented programming were popular. "You must use object orientation, it is the future. Imagine not being able to reuse code," etc.
I foresee the need for engineers to be really "wavy".
I have personally never been busier or more productive. It's like all the "work" of my work has disappeared. There are no more blockers and I can just run free and get as much done as I want and the only thing slowing me down is Jira.
The real downturn is going to be the SaaS apocalypse. In the next year or two there will be a reckoning where all these expensive low-code/no-code middleware applications suddenly don't make sense.
So I think it will be less about the ranks of engineers being thinned out unilaterally, and more about large swathes of products being obsolete.
None of them, because those who think SaaS companies are just a bunch of bad code that is going to be quickly rewritten have no clue what they're talking about. No sane company is going to vibe-code a replacement for Salesforce, because then they'd have a half-assed, buggy, broken pile of code to maintain, instead of outsourcing that problem, along with legal, compliance, and support, to someone else.
It's honestly tiresome to keep having to debunk this with people who have no clue at all how large companies operate.
Nobody's going to vibe-code an internal Salesforce. On the other hand, the barrier for ex-Salesforce engineers to take their knowledge and build a competitor with the 20% of features that represent 80% of the usage is dramatically lower.
I think the SaaS landscape will look vastly different in five years.
Your engineering focus is exactly the problem! Salesforce as software is a piece of crap, no one is arguing that. But companies continue to buy it because (a) it's familiar to all their sales & marketing people, (b) SFDC is all set up to be able to sell into other large companies (not a trivial task), (c) it already does all the legal, regulatory and compliance stuff, worldwide, which is hugely complex and boring to replicate and needs people on the ground in multiple countries to achieve. Coding is not the problem here.
I don't know about this. The disadvantages of this strategy are non-trivial on both ends of the AI debate.
If the cost of AI doesn't decrease, then between skill atrophy and personnel shortages, this will create massive technical liabilities that companies will need to pay incredible amounts (to contractors or, more likely, to SaaS incumbents) to fix.
If the cost of AI does decrease, then every function those companies AI-code for themselves is basically horse-trading a dependence on those SaaS companies for a dependence on big AI.
(open weights models are improving, but most of the SOTA open-source models are from Chinese labs, the huge companies that will make a dent in SaaS revenue are restricted from using them, and the American labs have a profit motive to prevent their open-weight models from reaching parity with their closed-weight models.)
But those already exist! There are a lot of Salesforce competitors with much better execution. Yet salespeople absolutely demand Salesforce, and unless that changes I can say with absolute certainty that no vibe-coded clone from ex-Salesforce engineers is going to dent CRM.
Nobody wants to hire a new team member when it takes 3 months to train them, and a new Opus will have come out by then anyway.
I suspect hiring will pick up when the capability of the models stops growing so quickly, or when the gaps between releases start widening. Obviously the problem is that capabilities are not slowing down and the gaps keep getting shorter…
Model capabilities are rising more slowly than model prices.
Recent price increases have made hiring juniors cheaper in the short run, not to mention the long run.
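A rough back-of-the-envelope comparison shows why; every figure below is an assumption for illustration, not a quoted price (Python):

    # All figures are assumptions for illustration, not quoted prices.
    HOURS_PER_DAY = 8                  # the "agent running all day" mandate
    WORK_DAYS_PER_YEAR = 250
    TOKENS_PER_HOUR = 2_000_000        # assumed sustained agent throughput
    USD_PER_MILLION_TOKENS = 10.0      # assumed blended input/output price
    JUNIOR_TOTAL_COST_USD = 120_000    # assumed salary plus overhead

    tokens_per_year = HOURS_PER_DAY * WORK_DAYS_PER_YEAR * TOKENS_PER_HOUR
    agent_cost = tokens_per_year / 1_000_000 * USD_PER_MILLION_TOKENS
    print(f"Always-on agent: ${agent_cost:,.0f}/year")          # $40,000/year
    print(f"Junior engineer: ${JUNIOR_TOTAL_COST_USD:,}/year")  # $120,000/year

Under these assumptions the always-on agent costs about $40k a year, and a 3x token price increase puts it level with the junior's fully loaded cost, which is exactly the comparison that recent price hikes shift.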
Companies hiring more people to build AI-based, self-healing, self-developing systems faster?
"We don't need those old programmers, we need new people who know how to build harnesses around AI." Hiring those "old" programmers, but from other companies.
This just means big layoffs are coming in this sector, and they are astroturfing beforehand so that they can point to this stat that jobs are available. Meta and Microsoft just started the ball rolling, and it will accelerate over the next 2 years.
The title of the submission is an almost comical example of HN navel-gazing: of the many interesting things in the article, surely the job prospects of HN readers should not be near the top of the list.
Our labor market is cyclic: relatively short busts and long booms that start slowly and then accelerate. We had busts in 2000-2003, 2008-2010 (or 2011?), and 2022 to, I guess, 2026. I wasn't in the US in the 1990s, but I guess the beginning of the 1990s was also a bit tough.
Unavoidable AI-based productivity growth, in software and in all the other industries, will mean software, and specifically AI in this case, not just eating the world but devouring it. Such an AI revolution will mean even more need for software engineers, just like the personal computer revolution and the Internet revolution did in their times. Of course, software engineering will change, as it did in those previous revolutions.
There is no productivity growth attributed to AI. In fact, serious attempts to measure AI performance show that even if AI makes some code-entry tasks faster, total product delivery times actually increase.
(This should be obvious once you realize coding AIs are technical debt generation machines.)
There's no "productivity growth attributed to AI" -- yet.
I think we've gone beyond anecdotal evidence of experienced engineers finding true value in this new tech. It may not have registered yet, but skilled people are unequivocally finding value in these tools.
I agree that we have yet to settle down on the true costs involved (which will probably end up at "slightly less than a junior engineer" or something like that) - but we are months beyond the idea that it's all smoke and mirrors and no one is getting value out of it.
I get you, but as the months progress, we keep finding that more and more experienced engineers are finding a lot of time-saving value in this new tech.
I think we are past the point where we can just dismiss their input - these new tools do legitimately add value, it appears.
> experienced engineers are finding a lot of time-saving value in this new tech
Experienced engineers are always finding "time-saving value in new tech". This is a tale as old as the craft of programming itself, and all the hundreds and thousands of ways to hack the development experience that engineers obsess over have never resulted in tangible gains for delivering quality software on time.
> but this time the LLM technology is magic and it will be different!
How many more SOTA models? How many more weeks? Will you "trust the plan" forever?
METR found this result in the past, but in a recent reexamination, rather than a 20% loss there was a 20% gain (per a recent Roge Karma article in The Atlantic). I'm not aware of all of the studies or what the consensus is; this is just an example suggesting it's not necessarily true.
That is today. The first cars (steam-powered; the very first in 1769!) and even the ones from the first half of the 19th century also didn't look like an improvement. The AI of today is more like the internal combustion engine toward the end of the 19th century: on the brink of becoming the dominant tech while using a horse was still a viable option for a time.
Suggested reading: [1]
[1] https://en.wikipedia.org/wiki/How_to_Lie_with_Statistics