I don't write code for a living but I administer and maintain it.
Every time I say this people get really angry, but: so far AI has had almost no impact on my job. Neither my dev team nor my vendors are getting me software faster than they were two years ago. Docker had a bigger impact on the pipeline to me than AI has.
Maybe this will change, but until it does I'm mostly watching bemusedly.

A famous economist once said, "You can see the computer age everywhere but in the productivity statistics."

There are many reasons for the lag in productivity gain but it certainly will come.

https://en.wikipedia.org/wiki/Productivity_paradox
That's only certain if investments in tech infrastructure always led to productivity increases. But sometimes they just don't. Lots of firms spent a lot of money on blockchain five years ago, for instance, and that money is just gone now.
Yep. All AI has done for me is give me back how good search engines were 10+ years ago, when I could search for something and find actually relevant and helpful info quickly.
I've seen lots of people say AI can basically code a project for them. Maybe it can, but that seems to heavily depend on the field. Other than boilerplate code or very generic projects, it's a step above useless imo when it comes to gamedev. It's about as useful as a guy who read some documentation for an engine a couple years ago and kind of remembers it but not quite and makes lots of mistakes. The best it can do is point me in the general direction I need to go, but it'll hallucinate basic functions and mess up any sort of logic.

1) Do that inside their IDEs, which is less funny
2) Generate blog posts about it instead of memes
Same here, more or less, in the ops world. Yeah, I use AI, but I can't honestly say it's massively improved my productivity or drastically changed my job in any way, other than that the emails I get from the other managers at my work are now clearly written by AI.
I can turn out some scripts a little bit quicker, or find an answer to something a little quicker than googling, but I'm still waiting on others most of the time, the overall company processes haven't improved or gotten more efficient. The same blockers as always still exist.
Like you said, there has been other tech that has changed my job over time more than AI has. The move to the cloud, Docker, Terraform, Ansible, etc. have all had far more of an impact on my job. I see literally zero change in the output of others, both internally and externally.
So either this is a massively overblown bubble, or I'm just missing something.
> ... but I'm still waiting on others most of the time, the overall company processes haven't improved or gotten more efficient. The same blockers as always still exist.
And that's the key problem, isn't it? I maintain current organizations have the "wrong shape" to fully leverage AI. Imagine instead of the scope of your current ownership, you own everything your team or your whole department owns. Consider what that would do to the meetings and dependencies and processes and tickets and blockers and other bureaucracy, something I call "Conway Overhead."
Now imagine that playing out across multiple roles, i.e. you also take on product and design. Imagine what that would do to your company org chart.

I added a much more detailed comment here: https://news.ycombinator.com/item?id=47270142
> Now imagine

> Imagine what that would do

Imagine if your grandma had wheels! She'd be a bicycle. Now imagine she had an engine. She could be a motorcycle! Unfortunately for grandma, she lives in reality and is not actually a motorcycle, which would be cool as hell. Our imagination can only take us so far.
To more substantively reply to your longer linked comment: your hypothesis is that people spend as little as 10% of their time coding and the other 90% in meetings, but that if they could code more, they wouldn't need to meet other people because they could do all the work of an entire team themselves[1]. The problem with your hypothesis is that you take for granted that LLMs actually allow people to do the work of an entire team themselves, and that it is merely bureaucracy holding them back. There have been absolutely zero indicators that this is true. No productivity studies of individual developers tackling tasks show a 10x speedup; results tend to be anywhere from +20% to -20%. We aren't seeing amazing software being built by individual developers using LLMs. There is still only one Fabrice Bellard in the world, even though, if your premise could escape the containment zone of imagination, anyone should be able to be a Bellard on their own time with the help of LLMs.
[1] Also, this is basically already true without LLMs. It is the reason startups are able to disrupt corporate behemoths. If you have just a small handful of people who spend the majority of their work time writing code (by hand! No LLMs required!), they can build amazing new products that outcompete products funded by trillion-dollar entities. Your observation that more coding = fewer meetings has an element of truth to it, but not because LLMs are related to it in any particular way.
> Imagine if your grandma had wheels! She'd be a bicycle.
I always took this to be a sharp jab saying the entire village is riding your grandma, giving it a very aggressive undertone. It's pretty funny nonetheless.
Too early to say what AI brings to the efficiency table, I think. In some major things I do it's a 1000x speedup. In others it is more a different way of approaching a problem than a speedup. In yet others, it is a bit of an impediment. It works best when you learn to quickly recognize patterns and whether it will help. I don't know how people who are raised with AI will navigate and leverage it, which is the real long-term question (just as the difference between pre- and post-smartphone generations is a thing).
This isn't the counter you think it is. It's too much to expect existing behemoths to reshape their orgs substantially on a quick enough timeline. The gains will be first seen in new companies and new organizations, and they will be able to stay flat longer and outcompete the behemoths.
Humans are funny. But most can't seem to understand that the tool is a mirage, and they are putting false expectations on it. E.g. management of firms cutting back on hiring under the expectation that LLMs will do magic - with many cheering "this is the worst it'll be, bro!!".
I just hope more people realise before Anthropic and OAI can IPO. I would wager they are in the process of cleaning up their financials for it.
The dev team is committing more than they used to. A lot, in fact, judging from the logs. But it's not showing up as a faster cadence of getting me software to administer. Again, maybe that will change.
I’m at a big tech company. They proudly cited productivity gains measured in commits (already a nonsense metric): 47% more commits, 17% less time per commit. That works out to roughly 22% more total time spent coding. Burning us out and acting like the AI slop is “unlocking” productivity.
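For what it's worth, the combined effect of those two numbers is a one-liner to sanity-check (a quick sketch; 1.47 and 0.83 are just the figures claimed above):

```python
# Total coding time = number of commits x time per commit.
commits = 1.47           # 47% more commits
time_per_commit = 0.83   # 17% less time per commit
print(f"{commits * time_per_commit:.2f}x")  # ~1.22x, i.e. about 22% more time
```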
There’s some neat stuff, don’t get me wrong. But every additional tool so far has started strong but then always falls over. Always.
Right now there’s this “orchestrator” nonsense. Cool in principle, but as someone who was already scripting automations all the time, it’s not impressive. I spent $200 automating some bug finding and fixing. It found and fixed the easy stuff (still pretty neat), and then “partially verified” it fixed the other stuff.

The “partial verification” was it justifying why it was okay that it was broken.
The company has mandated we use this technology. I have an “AI Native” rating. We’re being told to put out at least 28 commits a month. It’s nonsense.
They’re letting me play with an expensive, super-high-level, probabilistic language. So I’m having a lot of fun. But I’m not going to lie, I’m very disappointed. Got this job a year ago. 12 years programming experience. First big tech job. Was hoping to learn a lot. Know my use of data to prioritize work could be better. Was sold on their use of data. I’m sure some teams here use data really well, but I’m just not impressed.
And I’m not even getting into the people gaming the metrics to look good while actually making more work for everyone else.
Lol it's gonna take longer than it should for this to play out.
Sunk cost fallacy is very real, for all involved. Especially the model producers and their investors.
Sunk cost fallacy is also real for devs who are now giving up how they used to work - they've made a sunk investment in learning to use LLMs etc. Hence the 'there's no going back' comments that crop up on here.
As I said in this thread - anyone who can think straight - I'm referring to those who adhere to fundamental economic principles - can see what's going on from a mile away.

Are you hiring?

IMO AI will make 70-80% of jobs obsolete for sure.
People who actually know how to think can see it a mile away.

Job done, fella.
1) Not a throwaway, can't remember what my old account is called
2) Feel free to screenshot. Stick it on your desktop, set a reminder, and check the state of the world in 12 months' time.
In 12 months I won't be surprised if there's not much change. But in 5 years? 10? Anything can happen. It is presumptuous to think you can project the future capabilities of this technology and confidently state that labour markets will never be affected.
Guys like you don't get it. You think OAI, Amazon etc. can freely put large amounts of money into this for 5-10 years? Lmao - delusional. Investors are impatient. Show huge jumps in revenue this year or you no longer have permission to put monumental amounts of money into this.
Short of that they'll just destroy the stock price by selling off, leaving employees who get paid via SBC very unhappy.
Most of these are new features, but then they have to integrate with the existing software so it's not really greenfield. (Not to mention that our clients aren't getting any faster at approving new features, either.)
Did you train a self-hosted/open source LLM on your existing software and documentation? That should make it far more useful. It's not claude code, but some of those models are 80% there. In 6 months they'll be today's claude code.
It's this kind of thinking that tells me people can't be trusted with their comments on here re. "Omg I can produce code faster and it'll do this and that".

No, simply 'producing a feature' ain't it, bud. That's one piece of the puzzle.
I've taken to calling LLMs processors. A "Hello World" in assembly is about 20 lines and on par with most unskilled prompting. It took a while to get from there to Rust, or Firefox, or 1T parameter transformers running on powerful vector processors. We're a notch past Hello World with this processor.
The specific way it applies to your specific situation, if it exists, either hasn't been found or hasn't made its way to you. It really is early days.
From my experience as a software engineer, doubling my productivity hasn’t reduced my workload. My output per hour has gone up, but expectations and requirements have gone up just as fast. Software development is effectively endless work, and AI has mostly compressed timelines rather than reduced total demand.
This seems unlikely. My company is in competition with a number of other startups. If AI removes one of my co-workers, our competitors will keep the co-worker and out-compete us.
It depends on the "shape" of the company. Larger companies have a lot more of what I call "Conway Overhead", basically a mix of legit coordination overhead and bureaucracy. Startups by necessity have a lot less of that, and so are better "shaped" to fully harness AI.
> If AI removes one of my co-workers, our competitors will keep the co-worker and out-compete us.
This assumes that the companies' business growth is a function of the amount of code written, but that would not make much sense for a software company.
Many companies (including mine) are building our product with an engineering team 1/4 the size of what would have been required a few years ago. The whole idea is that we can build the machine to scale our business with far fewer workers.
How many companies have you worked at in the past where the backlog dried up and the engineering team sat around doing nothing?
Even in companies that are no longer growing I've always seen the roadmap only ever get larger (at that point you get desperate to try to catch back up, or expand into new markets, while also laying people off to cut costs).
Will we finally out-write the backlog of ideas to try and of feature requests? Or will the market get more fragmented as more smaller competitors can carve out different niches in different markets, each with more-complex offerings than they could've offered 5 years ago?
This is already happening. Fewer people are getting hired. Companies are quietly (sometimes not, like Block) letting people go. At a personal level all the leaders in my company are sounding the “catch up or you’ll be left behind” alarm. People are going to be let go at an accelerated pace in the future (1-3 years).
I don’t think that addresses my point. I understand a lot of companies are firing under the guise of AI, but it’s unclear to me whether AI is actually driving this - especially when the article we are both responding to says:
> We find no systematic increase in unemployment for highly exposed workers since late 2022

It is absolutely likely. The hiring market for juniors is fucked atm.

(And if it is, what is the cause?)
That's not necessarily a result of AI, you also have to consider the broader economic environment. I mean, it was also difficult to get a job as a graduate in 2008, whereas it's typically been easier to get a job when credit is cheap.
Isn't it, for something like 70-80% of families? Just in slow-motion?
How long have we been hearing about crushing affordability problems for property? And how long ago did that start moving into essentials? The COVID-era bullwhip-effect inflation waves triggered a lot of price ratcheting that has slowed but never really reversed. Asset prices are doing great, as people with money continue to need somewhere to put it, and have been very effective at capturing greater and greater shares of productivity increases. But how's the average waiter, cleaning-business sole-proprietor, uber driver, schoolteacher, or pet supply shopowner doing? How's their debt load trending? How's their savings trending?
There’s a difference between a collapse and a slowdown. We don’t need a collapse for hiring to slow down [1,2]. I think we’re finally just seeing the maturation of software development. Software is increasingly a commodity, so maybe the era of crazy growth and hiring is over. I don’t think that we need AI to explain this either, although possibly AI will simply commodify more kinds of software.

[1] https://www.npr.org/2026/02/12/nx-s1-5711455/revised-labor-d...

[2] https://www.marketplace.org/story/2025/12/18/expect-more-of-...
FAANG realizing that they can't make infinite money by expanding into every possible market while paying FAANG salaries for low-scale-CRUD-prototyping roles has a lot to do with this, and that started a bit earlier than the AI wave.
Lots going on right now in the market, but IMO that retreat is the biggest one still.
Many companies were basically on a path of infinite hiring between ~2011 and ~2022 until the rapid COVID-era whiplash really drove home "maybe we've been overhiring" and caused the reaction and slowdown that many had been predicting annually since, oh, 2015.
Manager gigs at FAANG are pretty rough right now in my network: you can't be a manager when the higher-ups notice your group isn't a big revenue generator and so doesn't justify new hires and bigger org charts, and cutting the middlemen is the easiest way to juice the ROI numbers. If the ICs that now have 1/3 the managerial structure and have to wear more hats don't turn things around, oh well, it's not a critical area anyway, just nuke it.

There's a lot of perverse interests and incentives at play.
Erm, it's been fucked for many years across many professions; it was just less so for software engineering in particular. Now entry into the S-E profession is taking a hit.

Also don't forget there's only so many viable revenue-generating and cost-saving projects to take. And as said above - overhiring during COVID.
There are definitely tone-deaf statements from managers/leaders like "AI will allow us to do more with less headcount!" As if the end worker is supposed to be excited about that, knuckleheads, lol.
Yeah I’ve been scratching my head about this too. Like, if my boss said this, I would basically start looking for a new job right then and there. Seems like a good way to drive off your own talent.
In a bear market in a bloated company, maybe. We’re still actively hiring at my startup, even with going all-in on AI across the company. My PM is currently shipping major features (with my review) faster and with higher-quality code than any engineer did last year.
Then any company that was staffed at levels needed prior to the arrival of current-level LLM coding assistants is bloated.
If the company was person-hour starved before, a significant amount of that demand is being satisfied by LLMs now.
It all depends on where the company is in the arc of its technology and business development, and where it was when powerful coding agents became viable.

That's... not a good look for your engineers?
It’s hard to compare, honestly. Last year, my PM didn’t have the AI tools to do any of this, and engineers were spread thin. Now, the PM (with a specialized Claude Code environment) has the enthusiasm of a new software engineer and the product instincts of a senior PM.
When you work long enough you'll find it. In places where changing software is risky, you can end up waiting for approvals. Sometimes another company purchased yours, or you are getting shut down soon and there is no new work. Sometimes you end up on a system that they want to replace but they never get around to it.

Being overworked is sometimes better than being underworked. Sometimes the reverse is better. They both have challenges.
Outside of purchased-and-being-shutdown, these are still frequently "we want to do things but we're scared of breaking things" situations, not "we don't want to do anything." Even if the things they want to do are just "we want to move off this 90s codebase before everyone who knows how it works is dead."
In that sort of high-fear, change-averse environment "get rid of all the devs and let the AI do it" may not be the most compelling sales pitch to leadership. ("Use it to port the code faster so we can spend more time on the migration plan and manual testing" might have better luck.)

Best time to be a solo founder in underserved markets :)
That’s the economy in general. Labor saving innovations increase productivity but do not usually reduce work very much, though they can shift it around pretty dramatically. There are game theoretic reasons for this, as well as phenomena like the hedonic treadmill.
The ideal state for every company is minimum input costs and maximum output prices. Labor always gets cut out of the loop because it’s one of the most expensive input costs.
I'm working on a project right now that is heavily informed by AI. I wouldn't even try it, if I didn't have the help. It's a big job.
However, I can't imagine vibe-coders actually shipping anything.
I really have to ride herd on the output from the LLM. Sometimes, the error is PEBCAK, because I erred, when I prompted, and that can lead to very subtle issues.
I no longer review every line, but I also have not yet gotten to the point, where I can just "trust" the LLM. I assume there's going to be problems, and haven't been disappointed, yet. The good news is, the LLM is pretty good at figuring out where we messed up.
I'm afraid to turn on SwiftLint. The LLM code is ... prolix ...
All that said, it has enormously accelerated the project. I've been working on a rewrite (server and native client) that took a couple of years to write, the first time, and it's only been a month. I'm more than half done, already.
To be fair, the slow part is still ahead. I can work alone (at high speed) on the backend and communication stuff, but once the rest of the team (especially shudder the graphic designer) gets on board, things are going to slow to a crawl.
>> I no longer review every line, but I also have not yet gotten to the point, where I can just "trust" the LLM.
Same here. This is also why I haven't been able to switch to Claude Code, despite trying multiple times. I feel like its mode of operation is much more "just trust the generated code" than Cursor, which lets you review and accept/reject diffs with a very obvious and easy-to-use UX.
Most of the folks I work with who uninstalled Cursor in favor of Claude Code switched back to VSCode for reviewing stuff before pushing PRs. Which... doesn't actually feel like a big change from just using Cursor, personally. I tried Claude Code recently, but like you preferred the Cursor integration.
I don't have the bandwidth to juggle four independent things being worked on by agents in parallel so the single-IDE "bottleneck" is not slowing me down. That seems to work a lot better for heavy-boilerplate or heavy-greenfield stuff.
I am curious whether, if we refactored our codebase the right way, more small, isolatable subtasks would be parallelizable with lower cognitive load. But I haven't found that yet.
I don't think there's been much of an impact, really. Those who know how to use AI just got tangentially more productive (because why would you reveal your fake 10x productivity boost so your boss hands you 10x more tasks to finish?), and those w/o AI knowledge stayed the way they were.
The real impact is for indie-devs or freelancers but that usually doesn't account for much of the GDP.

Don't know if this is effective and I don't think management knows either, but it's what they're doing.

Doesn't mean the two are related.

Is AI just the excuse? We've got tariffs, war, uncertainty and other drama non stop.
Management often has a perverse short-term incentive to make labor feel insecure. It’s a quick way to make people work harder ... for a while.
Also, “AI makes us more productive so we can cut our labor costs” sounds so much better to investors than some variation of “layoffs because we fucked up / business is down / etc”

"We've frozen hiring because our growth potential is tapped out."

"We've frozen hiring because AI can replace employees."

Instead they are using Electron and calling it a day. Very ironic isn't it? If AI is so good then why don't we get native software from Anthropic?
You should look into the concepts of skepticism, materialism, and cynicism. Maybe don't trust the leadership of where you work, the leadership that sees you as a number and not a human.
I also thought it was hilarious that they invented a brand new metric that (surprise) makes their product’s long term projection look really good (financially).
One of the more interesting takes I heard from a colleague, who’s in the marketing department, is that he uses the corporate approved LLM (Gemini) for “pretend work” or very basic tasks. At the same time he uses Claude on his personal account to seriously augment his job.
His rationale is he won’t let the company log his prompts and responses so they can’t build an agentic replacement for him. Corporate rules about shadow IT be damned.

Only the paranoid survive I guess.

Anthropic can cause layoffs through pure marketing. People were crediting an Anthropic statement with causing a drop in IBM's stock value, which may genuinely lead to layoffs: https://finance.yahoo.com/news/ibm-stock-plunges-ai-threat-1...

We'll probably have to wait for the hype to wear off to get a better idea, but that might take a long while.
My day-to-day is even busier now with agents all over the place making code changes. The security landscape got even more complex overnight. The only negative impact I see is that there’s not much need for junior devs right now. The agent fills that role in a way. But we’ll have to backfill some way or another.
The problem with using unemployment as a metric is that hiring is driven by perception. You're making an educated guess as to how many people you need in the future.
Between 2004 and 2008 I did many things in computing as a company offering my services; one of these was information-gathering automation. It almost never immediately led to decreases in employment. The systems had to be in place for a while, people had to get used to them, people had to stop making common mistakes with them.

Then the 2008 crash happened and those people were gone in the blink of an eye and never replaced. The companies grew in staff after that, but it was in things like sales and marketing.
Shipping speed never was, and still isn't, the issue. Most companies are terrible at figuring out what exactly they should be allocating resources behind.
Speeding up does not solve the problem that most humans who are at the top of the hierarchy are poor thinkers. In fact it compounds it. More noise, nice.
Writing code is a lesser problem than figuring out what we want and when we want it, and getting stakeholders in one place.

Apple already showed this decades ago - they got the iPhone and iPod developed and out the door on relatively short time scales, given the impact of those products on the world.

Once you know what you want, exactly what you want, things move fast - really fast.
Or worse. I’ve heard stories from friends where leadership expects huge boosts in productivity due to LLMs, and perceive anything but an order of magnitude boost as incompetence or a refusal to adapt.
> There's suggestive evidence that hiring of young workers (ages 22–25) into exposed occupations has slowed — roughly a 14% drop in the job-finding rate
There goes my excuse for not finding a job in this market.
A possible outcome of AI: domestic technical employment goes up because the economics of outsourcing change. Domestic technical workers working with AI tools can replace outsourcing shops, eliminating time-shift issues, etc at similar or lower costs.
This rhymes with another recent study from the Dallas Fed: https://www.dallasfed.org/research/economics/2026/0224 - it suggests AI is displacing younger workers but boosting experienced ones. This matches what we see discussed here, as well as the couple of similar studies we've seen before.

Also, it seems to me the concept of "observed exposure" is analogous to OpenAI's concept of "capability overhang" - https://cdn.openai.com/pdf/openai-ending-the-capability-over...
I think the underlying reason is simply because companies are "shaped wrong" to absorb AI fully. I always harp on how there's a learning curve (and significant self-adaptation) to really use AI well. Companies face the same challenge.
Let's focus on software. By many estimates code-related activities are only 20 - 60%, maybe even as low as 11%, of software engineers' time (e.g. https://medium.com/@vikpoca/developers-spend-only-11-of-thei...) But consider where the rest of the time goes. Largely coordination overhead. Meetings etc. drain a lot of time (and more the more senior you get), and those are mostly getting a bunch of people across the company along the dependency web to align on technical directions and roadmaps.
I call this "Conway Overhead."
This is inevitable because the only way to scale cognitive work was to distribute it across a lot of people with narrow, specialized knowledge and domain ownership. It's effectively the overhead of distributed systems applied to organizations. Hence each team owned a couple of products / services / platforms / projects, with each member working on an even smaller part of it at a time. Coordination happened along the hierarchy of the org chart because that is most efficient.
Now imagine: a single AI-assisted person competently owns everything a team used to own.
Suddenly the team at the leaf layer is reduced to 1 from about... 5? This instantly gets rid of a lot of overhead like daily standups, regular 1:1s and intra-team blockers. And inter-team coordination is reduced to a couple of devs hashing it out over Slack instead of meetings and tickets and timelines and backlog grooming and blockers.
So not only has the speed of coding increased, the amount of time spent coding has also gone up. The acceleration is super-linear.
But, this headcount reduction ripples up the org tree. This means the middle management layers, and the total headcount, are thinned out by the same factor that the bottom-most layer is!
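A toy model makes the ripple-up claim concrete (purely illustrative; the management span of 5 and the before/after headcounts are assumptions):

```python
import math

# Toy model of total org headcount: ICs at the bottom, plus successive
# management layers, each roughly 1/span the size of the layer below.
def org_headcount(ics: int, span: int = 5) -> int:
    total, layer = ics, ics
    while layer > 1:
        layer = math.ceil(layer / span)  # managers needed for the layer below
        total += layer
    return total

before = org_headcount(320)  # 64 leaf teams of 5 engineers each
after = org_headcount(64)    # each team collapsed to 1 AI-assisted owner
print(before, after)         # 401 vs 81: every layer above thins by ~5x too
```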
And this focused only on the engineering aspect. Imagine the same dynamic playing out across departments when all kinds of adjacent roles are rolled up into the same person: product, design, reliability...
These are radical changes to workflows and organizations. However, at this stage we're simply shoe-horning AI into the old, now-obsolete ticket-driven way of doing things.
So of course AI has a "capability overhang" and is going to take time to have broad impact... but when it does, it's not going to be pretty.
The TL;DR is that there is little measurable impact (and I'd personally add "yet").
To quote:
"We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations"
My belief based on personal experience is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle.
I have doubts that this impact is measurable yet - there is a lag between hiring intention and impact on jobs, and outside Silicon Valley large scale hiring decisions are rarely made in a 3 month timeframe.
The most interesting part is the radar plot showing the lack of usage of AI in many industries where the capability is there!
> My belief based on personal experience is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle.
Gemini 3 and Opus 4.6 were the "woah, they're actually useful now!" moment for me.
I keep saying to colleagues that it's like a rising tide. Initially the AIs were lapping around our ankles, now the level of capability is at waist height.
Many people have commented that 50% of developers think AI-generated code is "Great!" and 50% think it's trash. That's a sign that AI code quality is that of the median developer. This will likely improve to 60%-40%, then 70%-30%, etc...
I don’t see definitive evidence that there is some kind of Moore’s law for model improvement though. Just because this year’s model performs better than last year’s model doesn’t mean next year’s model will be another leap. Most of the big improvements this year seem to be around tooling - I still see Opus 4.6 (which is my daily driver at work) making lots of mistakes.

They do nothing?
My alternative? Nationalize the company and implement a workplace democracy to replace the executive team + board.
I trust the workers more to dictate the direction of a company than most executives.
They can't do worse.
edit: or what another commentator said, fucking academia. Public universities have done more for humanity than nearly anything to come out of SV. Surveillance capitalism, mass misery + psychosis; it's very telling what our society values when mass amounts of the Earth are desperately trying to ban these very same services to protect children.

I hope they know what they're doing.
Just like anything else, you either think "that's definitely wrong" or "huh, I guess that's probably it." If it's really serious, you have to pause and make a judgment call, of course.
There was a recent anecdote from the head of radiology at the Mayo Clinic, I believe, that went something like this:
- AI has allowed radiologists to review x-rays at a much higher rate
- This has led to a dramatic increase in the need for faster processing, more storage for scans, etc.
- Which in turn led to needing a bigger IT department to manage all of the additional workload
There was a similar anecdote about the IRS: the claim is they went from having N accountants to having far fewer accountants, but now they need N IT people to manage the new systems.