People tend to forget how fast technological development advances. Even if you lived through it, you tend to forget how recently the world looked very different.
- before 2007 there was no modern smartphone (the iPhone launched that year)
- before 2001 there was no Wikipedia
- before 1995 less than 10 percent of home users in rich countries had internet
- before late 2022 there was no AI chatbot available to home users (ChatGPT launched in November 2022)
Hardware has been getting faster by a factor of roughly 100 every 10 years and ~10,000 every 20 years; that works out to a doubling about every 18 months, since 2^(10/1.5) ≈ 100. AI currently develops even faster because of a combination of software and hardware improvements. Even if the best current system is right only 1 time in 100 right now, it's likely to be accurate nearly always in 10 years.
I also like to remind people that the phone I am writing this on (an iPhone 12) has about the same computing power as the Earth Simulator had in 2003, and that was the fastest computer on Earth back then.
Extrapolate that development and think about what changes might come.
Every morning the turkey rejoiced and said to himself, "Oh joy, I'm such a lucky turkey. I don't have to do anything, the food is plentiful, I just eat and shoot the breeze the whole day long. What an awesome life!" Until one morning, the day before Thanksgiving, the turkey rejoiced about the awesome day he was about to have... just to be picked up 5 minutes later and dragged to the slaughterhouse.
It's a compelling story, but what you're describing is, to the turkey, a black swan event, rather than an obvious inevitability that all the other turkeys keep telling him is going to happen.
Years ago, when you went into computers, didn't you have normies warning you that one day computers would program themselves? 20 years ago nobody could tell you whether this would happen in 20 years or 200, but I do believe there has been a general sense that this sort of thing would happen eventually.
Let's suppose you are a medium-sized business. You've always wanted to provide top-quality customer service but couldn't, because you'd need to hire 5 people to do it right. Instead, you strategically decided not to provide quality customer service and to sell the product at a lower price than competitors. So you have no customer service person in the company. Service is bad. It limits growth. But not providing good service was a strategic choice, made to gain an advantage somewhere else in the business.
But now you can hire 1 customer service person, who can use AI agents to provide that top-quality customer service. Previously you needed to hire 5 people, which wasn't worth it.
So you went from no customer service employee to 1.
I suspect that this is what will happen. Many companies will hire their first customer service person, or more. Many big companies will lay off most of their customer service people. The net effect might actually increase total customer service employment.
I suspect that job openings for customer service employees will actually be higher than now, but companies won't be able to find enough AI-skilled people to fill them. We're going to read about how there are more job openings than ever while companies can't find the AI skillset they need. This is why I think people who adopt AI now, learn it, understand it, and get good at it will be in high demand.
In some ways AI sounds almost utopian. In theory it could redistribute manpower more evenly between small and large businesses, allowing them to compete more fairly and improving the efficiency of capitalism (the idealistic model, not the real-world state). However, then you remember that AI tech is currently almost fully controlled by big tech (and its next generation), and you have to ask whether they'll be able to sabotage that improvement; they will certainly try, since liberating the market is not beneficial to them. Let's hope that despite all odds and current trends we actually reach a state where AI can be run on-prem/locally and there are still SOTA models at least as open as they are today.
Don't get lost in the customer service example. Focus on the idea instead which can be applied to many other professions.
People thought AI being better than a human at reading medical images would put radiologists out of a job. But instead, radiologists were in more demand than ever, because it made getting a scan more affordable and more accurate, which led to more demand from patients.
The same can happen for customer service. AI makes customer service cheaper, better, faster. More companies offer good customer service in order to stay competitive. More customers demand customer service because it's better now, and they expect it, since all companies, big or small, can afford quality customer service.
AI needn't respond; it can instead be used to sort the meaningless noise from the actionable complaints, where previously all of them would have been ignored. Raise to the human only the issues that matter and can be addressed.
You sure could let the agent try to do the task and supervise it before sending off replies to customers, similar to how I still check the code produced by an agent.
If you remove the human from the loop in customer service, you won't gain a thing.
To be perfectly honest, the majority of work is going to see a restructuring soon anyway.
"Triaging by LLM before sending task to any human" can work for almost anything, not just support calls. On another story I saw someone mention that they'd like something like an ad-blocker, but for content - a "content-blocker". Not too hard running even a local model that, via a browser extension, scnas the current page and places it into one of several bins: Read verbatim, summarise with ChatAI, Ignore completely, Read and mark for re-reading.
Software dev? Bin a ticket into "complex", "simple", "talk to lead dev".
Software proposal? Bin the proposal into "COTS available", "FOSS available", "Quick dev", "Too costly to proceed".
Bookkeeping? Accounting? They all have tasks that can be binned.
What does this all mean, I hear you ask? Well, you no longer need as many employees if some of the bins are "ChatAI and/or agent can complete this" with human review.
So, yeah, a lot of people are going to be out of work if this works like they say it does.
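To make the binning concrete, here's a minimal sketch of the triage idea, assuming the OpenAI Python client purely for illustration (any local, OpenAI-compatible server slots in the same way); the bin names and model name are placeholders, not a recommendation:

    # Hypothetical triage: classify an item into one of a fixed set of bins,
    # and fall back to a human whenever the model doesn't answer cleanly.
    from openai import OpenAI

    BINS = ["agent can complete this", "simple", "complex", "talk to a human"]
    client = OpenAI()  # point at a local server for the local-model case

    def triage(item: str) -> str:
        prompt = (
            "Classify the following ticket into exactly one of these bins: "
            + "; ".join(BINS)
            + ". Reply with the bin name only.\n\n" + item
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; swap in whatever you run
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        ).choices[0].message.content.strip().lower()
        return reply if reply in BINS else "talk to a human"

The fallback line is the "with human review" part: anything the model can't place cleanly lands with a person by default.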
Or it might mean they produce more valuable product and more of it and therefore need more devs to do it.
If a dev produces value for the company, and then the company can automate away the least valuable part of the dev's job, the dev is now more valuable. Why would the company get rid of them just at that moment?
Well, some will, because some companies are badly-run. Others will take advantage of the opportunity.
> Or it might mean they produce more valuable product and more of it and therefore need more devs to do it.
You're assuming unbounded demand for whatever product the company is producing. If demand for their product is bounded, having 1 dev produce the output of 5 devs means that the company is going to have devs simply sitting around doing nothing for most of the day.
> If a dev produces value for the company, and then the company can automate away the least valuable part of the dev's job, the dev is now more valuable.
I don't follow this argument - there is a practical limit to how much development a company requires. In the past they may have had a team of 10 to satisfy that limit. If the limit is satisfied by a team of 2 the company... does what exactly? After all, a limit is a limit.
I implemented HN article triage years ago using nothing more than naive Bayesian classification on the text of the headlines. Worked surprisingly well; you might try that.
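For anyone who wants to reproduce that, a toy version with scikit-learn might look like this (the headlines and labels here are invented for the example; a real run wants a few hundred labelled ones):

    # Toy headline triage with naive Bayes, in the spirit of the comment above.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    headlines = [
        "New Rust release improves compile times",
        "Celebrity gossip roundup",
        "Postgres internals explained",
        "Ten weird tricks for your horoscope",
    ]
    labels = ["read", "skip", "read", "skip"]  # your own past judgments

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(headlines, labels)
    print(model.predict(["Postgres compile times explained"]))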
The current deployments of chatbots are not the bar to compare with. There’s an incoming wave of extremely capable agents and process reimagining that is going to be highly disruptive.
Been in this space over a decade and this time really is different. It's hard for humans to perceive the exponential; it will be slow, then sudden.
At a recent AI workshop management made clear that they see AI as rendering sprints and scrums obsolete, that Kanban makes a lot more sense, and that estimating effort/story-points is also becoming meaningless. Which is a strong silver lining if you ask me.
I think it's to do with the bottleneck shifting away from code generation and towards specifying, reviewing, and integrating code. The process of working with AI agents to produce specs, tech specs, code, and reviews lends itself more to a flow-based structure (like kanban).
Bear in mind this is a B2B enterprise company with a mix of legacy and greenfield. Might be different elsewhere.
What exactly will these agents be able to do with enough consistency, accuracy, and reliability that people will want to hire them over humans?
In my experience with even the most basic implementation of agents, i.e. customer service chat bots, I literally cannot stand interacting with them even once. They are extremely unhelpful and I will hang up or immediately ask to speak to a human.
Obviously your support chatbot will talk to your flavor of clawd that will call Claude Code that will code a solution that will be reviewed by Codex that will merge and release it and then will ping clawd that will send an email to the user announcing that their issue has been fixed. /s just in case
I've been involved in building a system that reads structured data from a special form of contracts in a specific industry. Prices, clauses, pick-up, delivery, etc. A couple hundred datapoints per contract. We had many discussions around how to present and sell an imperfect system. The thing is, the potential customers are today transcribing the contracts manually, and we quickly realized that people make a ton of mistakes doing that. It became obvious when we were building assertion datasets ourselves. It's not a perfect system and you have to consider how you use the data (aggregating for price indexing, for instance), but we're actually doing better than what people achieve when they have to transcribe data for hours a day.
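As a rough illustration of the shape of such a system (the field names and model are placeholders, not our actual implementation), the extraction step plus a route-to-human fallback could look like:

    # Hypothetical contract extraction: ask the model for JSON, check the
    # fields we care about, and send anything incomplete to a human.
    import json
    from openai import OpenAI

    FIELDS = ["price", "pickup_date", "delivery_date"]  # tiny subset
    client = OpenAI()

    def extract(contract_text: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{
                "role": "user",
                "content": "Extract " + ", ".join(FIELDS)
                + " from this contract as a JSON object; "
                + "use null for anything not present.\n\n" + contract_text,
            }],
            response_format={"type": "json_object"},
            temperature=0,
        )
        data = json.loads(resp.choices[0].message.content)
        # Whatever the model couldn't find goes to manual transcription.
        data["needs_human"] = any(data.get(f) is None for f in FIELDS)
        return data

The needs_human flag is the honest way to sell an imperfect system: aggregate the clean fields, queue the rest for the same manual process customers already run.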
The voice agents in development right now feel 100x better than the chatbots currently deployed by companies.
I had the same opinion until a few months ago; now I would prefer the [redacted company so as to not give free marketing] AI agent. You'll start seeing this wave in around 3-6 months, as most are still in trials.
Most support agents lack... well, agency. If you connect a chatbot to an FAQ, that's exactly what you get. Just another instance of enterprise software being badly designed, badly written etc. It doesn't mean that it's actually an impossible problem.
I think the argument here is a bit of a strawman, though there is a good point in there as well. AI will not automate all customer support, but it has the potential to automate a large fraction of it.
The anecdote in there is about complex B2B enterprise software. That's not the majority of customer support, and is very heavy on escalating to actual experts.
You don't have to remove 100% of the jobs to have huge effects. Automating large parts of a few sectors would already create significant disruptions.
I think this mentality must have its own imminent apocalypse. Gifted with an enormous increase in potential productivity, the decision is to do the same but cheaper? Who allocates capital to such spiritless commodification? It all feels like using a printing press to make one bible a month.
There must be a role that can be more productive. It might not necessarily be our skillsets that fit those roles - and the roles might be more stratified - but someone is going to be able to do more, be paid more.
The article literally addresses this point. The easily automated stuff doesn’t save that much money. The big costs of support are the hard things you can’t automate.
> Because the remaining 10% is what required most of the CS team’s time. They built an FAQ you can talk to.
These days it's hard to get people to read an email longer than 5 lines - yet people are super excited about abundant masses of text generated by LLMs. It does not compute...
Is this based on anything real or just AI-generated slop meant to trigger angry reactions? It doesn't quote any sources for any of the stories, so as far as I can see, they're probably 100% made up...
Bifurcation is the right model and it’s already happening:
For things where the end customer doesn’t care if they’re interacting with an AI, reading content by an AI, etc. – or if the company doesn’t care what the customer thinks (see: automated phone customer support lines for the last twenty years) – the work will be replaced by AI work. Examples are any kind of rote documentation, generic digital asset creation like blog images, low level customer support, and most things where the company doesn’t really care about the customer, because the company is getting paid regardless.
If it does matter what the end customer thinks, the role will become increasingly humanistic in nature. Examples are high-end enterprise sales, personality and expertise-driven media and content, and anything where being “revealed” as an AI is perceived negatively.
I mean, looking around on social media I would describe most of the LLM preachers and worshippers as either conmen or non-tech guys who still have no clue about the technology behind it... the kind that worship Elon and his kind, accepting every new absurd sales pitch that comes from that bunch as the future, without questioning how little of it is based in reality.
On the other side, the doom posters tend to be awfully mediocre professionals (or, again, conmen leveraging FOMO). Skeptics like the one in the article tend to be dismissed. I'm also a skeptic, and someone you would probably define as a 10x, I think; except a few years ago I would have just been, you know, good at my job?
Please let me know when I'm going to be automated so I can start becoming good at something else. The future may not be bright for a number of reasons, but I still have not submitted to doom.
In between the brainless masses mindlessly regurgitating press releases (most of whom are human, not bots) or reddit doomerisms from the other side, there are tons of people discussing real successes and failures they've had with the tech. If you can't see the middle, it's probably because you're in one of the extremes.
Of course it is; it's just a scapegoat to lower wages, another power-dynamic trick pulled on employees. I have noticed a lot of managers going with the co-op+AI combo or outsourcing+AI, thinking it's the ultimate goldmine to minimize expenses and maximize profits, and they soon hit a reality check. And when they do, unfortunately, instead of resolving the root-cause issue, they go and hire only one senior for the team and overload and overwork him, while praising the AI for how it increased productivity and all.
Managed decline policies of western governments are much more threatening to white-collar workers and everyone else than AI will ever be.
AI will enable significantly faster economic growth, which is something the EU has been making impossible with legislation designed to destroy Europe's economic advantage.
Right, so it's time to dismantle environment/climate protection, worker safety/rights, employee protections etc. etc. like the Trump administration is currently doing over in the US, and make Europe great again?!
(actually, MEGA would be a great acronym, but Trump's friends in the EU are more focused on dismantling it rather than making it great)
There is such a mind-bogglingly huge amount of waste in IT services worldwide, particularly in the consulting and offshoring areas, that big swings, up and down, in that area don’t actually have anything to do with what works well or doesn’t. Decisions are made to offshore work or drop offshore contracts based on the latest hype cycle, not whether it is effective or worthwhile.
So while there may be lots of consultants losing their jobs, that’s not because AI tools do the work better. It’s because management thinks investors will accept the story that AI tools will do the work better and save money. Management, and investors, don’t know, can’t judge, and honestly don’t actually care if it’s better or worse. And they run things so poorly it would be impossible to tell anyway.
My biggest worry currently isn't even job-related: it's that corporations and authorities will use AI for customer/client relations, but that this AI will not be allowed to make any significant changes and will therefore be an utter waste of time. In many places, this could turn an already dire situation into an absolute nightmare. What might make it even worse is that authorities - and probably also corporations - will likely ban or block user AI agents, so you cannot even use your own AI to negotiate with their AI.
That's something that needs to be addressed by lawmakers ASAP. There needs to be a right to speak to a human, or (the perhaps overly tech optimistic route) a prohibition of AI that doesn't have adequate decision-making power.
> the phone I am writing this on (iPhone 12) has the same computing power as the Earth Simulator in 2003
That's still about three orders of magnitude off: the iPhone 12 does roughly 0.02 Linpack TFLOPS (with 4GB RAM and 256GB storage), against the Earth Simulator's 35.86 Linpack TFLOPS.
> I suspect that job openings for customer service employees will actually be higher than now, but companies won't be able to find enough AI-skilled people to fill them.
Disclaimer: I'm an AI compute investor.
> If you remove the human from the loop in customer service, you won't gain a thing.
* Customer wants the human touch
* The company's systems were broken, and the customer wouldn't have called at all if they could quickly and easily do what they wanted online.
* Customers are routinely furious and want to complain and/or understand what happened, and the company wants to brush them off.
AI doesn't help with the first two; it only helps with deflection (which is what they call the last one).
"Triaging by LLM before sending task to any human" can work for almost anything, not just support calls. On another story I saw someone mention that they'd like something like an ad-blocker, but for content - a "content-blocker". Not too hard running even a local model that, via a browser extension, scnas the current page and places it into one of several bins: Read verbatim, summarise with ChatAI, Ignore completely, Read and mark for re-reading.
Software dev? Bin a ticket into "complex", "simple", "talk to lead dev".
Software proposal? Bin the proposal into "CotS available", "FOSS available", "Quick dev", "Too costly to proceed".
Bookkeeping? Accounting? They all have tasks that can be binned.
What does this all mean, I hear you ask? Well, you no longer need as many employees if some of the bins are "ChatAI and/or agent can complete this" with human review.
So, yeah, a lot of people are going to be out of work if this works like they say it does.
If a dev produces value for the company, and then the company can automate away the least valuable part of the dev's job, the dev is now more valuable. Why would tbe company get rid of them just at that moment?
Well, some will, because some companies are badly-run. Others will take advantage of the opportunity.
You're assuming unbounded demand for whatever product the company is producing. If demand for their product is bounded, having 1 dev produce the output of 5 devs means that the company is going to have devs simply sitting around doing nothing for most of the day.
> If a dev produces value for the company, and then the company can automate away the least valuable part of the dev's job, the dev is now more valuable.
I don't follow this argument - there is a practical limit to how much development a company requires. In the past they may have had a team of 10 to satisfy that limit. If the limit is satisfied by a team of 2 the company... does what exactly?
After all, a limit is a limit.
> It's hard for humans to perceive the exponential, it will be slow then sudden.
True, but there are also perception biases that lead us to believe progress is exponential, even though it may well be an S-curve.
I'm having a hard time finding the right term, but I'm sure there is some bias that makes us assume "the line goes up".
(Let's not talk about my blockchain startup and my VR startup and my NFT startup.) My house is nice though.
> now I would prefer the [redacted company so as to not give free marketing] AI agent
I don't want Codex dammit! I'm a Claude Code man.
Get prepared. Something is coming *soon*
And note how any even slightly skeptical comment gets downvoted to hell. One may start thinking there are bots promoting the narrative.
Or maybe you're choosing to perceive bots when actually a lot of people disagree with you?
> it's just a scapegoat to lower wages
In fact, I go and implement dumb AI models in many companies, and executives immediately show off how many people they can fire with this advancement.