> When we do land on something, if it affects existing subscribers you'll get plenty of notice before anything changes. You'll hear it from us, not a screenshot on X or Reddit.
If you don't want things like this spreading through screenshots on X and Reddit, don't run "tests" like this in the first place!
(Also "if it affects existing subscribers" is a cop-out, I need to know the pricing of Claude Code for NEW subscribers if I'm going to adopt it at a company with a growing team, or recommend it to other people, write tutorials etc.)
Pretty much none of these big providers are offering the guarantees needed to be taken seriously in workplaces right now. The technology itself isn't offering the deterministic guarantees that would warrant its use in the workplace right now. Problem is, everyone's foot is just on the gas. Even if your workplace isn't paying for it, people are just straight up rolling their own personal Claude accounts to do work at orgs.
I've been trying to make the case all year that if we're going to let employees do shit with AI, let's try Claude. In the past 2-3 weeks all that goodwill has basically evaporated.
Local inference needs to take off ASAP, because all of these entities actually suck and I wouldn't trust a single SLA with Anthropic. They are not acting like a serious company right now; this is a joke.
I just cancelled before seeing this news. I was already pissed about constantly hitting limits on the $20/month plan and looking for alternatives, and this seals the deal. Bye bye!
I just paid for Pro for the first time 24 hours ago. It's been great, but the limits are crazy. It's nice not dealing with ChatGPT's sycophantic gaslighting, and not having random bugs.
That said, I seem to be caught in that 2% test if I open in a private tab. What nonsense. I wouldn't be paying for Claude if it weren't for its quality abilities, which necessarily include Claude Code.
I can easily hit the weekly limit on Claude even on the $200 plan. I have yet to ever hit a rate limit on Codex $100. And the results are almost as good. And don't get me started on Anthropic's extra usage scam.
To play devil's advocate: without A/B testing, a lot of decisions would be made with insufficient relevant data, leading to subpar results that affect many users negatively down the road.
Counter-point: the companies most famous for routine A/B testing are also the ones with the most notoriously non-existent customer service departments globally: Facebook, Google, Amazon, eBay. Groups that hold on to dissatisfied customers by essentially being "the only show in town."
So what I'm saying is: I think a lot of companies align themselves with the cash first, and then measure whether or not the negative image/user impact is manageable.
A lot of decisions made with A/B testing are also made with insufficient relevant data, but it's less obvious since it's easy to think the A/B results cover everything.
> Depends entirely on the stakes and whether personal data is involved
Sure. Let me just A/B test whether or not you'll respond positively or negatively to having your news delivered via push notification or delayed by 10 minutes.
I'm sure you would appreciate being tested on without your consent, just so that I can make an extra quick buck at your expense. Nothing immoral or unethical about it.
What do you think about slow rollouts for new features? Like, we think this new push notification system will be loved but let’s ship to only 1% of users in case there’s a horrible unforeseen consequence like occasional 10min delays? Dashboard goes upside down -> revert then work through logs to figure out what the hell went wrong.
It’s pretty reasonable to say “demand is way up, quality is up, supply is constrained, and so price needs to rise”.
It seems weird to segment this way though. Surely it's better to just give Sonnet to your bottom tier, rather than cut out the entire Claude Code product entirely?
Give folks a taste rather than lock the whole product behind a $100/mo plan.
Maybe a silly bet where the head of sales had 1-2 glasses of wine too much... "I bet they will still pay us 20 bucks/mo without CC! Don't believe me? I'm going to prove it!"
I don’t get the surprise or discontent. People hooking themselves up to a paid SaaS that only two vendors can offer (Anthropic and OpenAI), no competition or regulation to speak of… of course they’ll do whatever they want with their plans.
Hope you can still resume working on your projects without AI.
>"his title should be changed to Head of Corporate Bullshitting"
They're hitting the physical limits of energy production and chip supply for inference capacity. There's literally nothing that can be done but reduce usage to spread it around for now.
Hopefully the negative responses in that thread + the conversation here on HN might help them realize that totally removing Code access for Pro users isn't a good look.
And with no free trial period on top of that, nobody is going to want to pay $100+ just to check it out. I can't imagine the conversion rate of that test being positive.
CC has such egregious API subsidies that it's hard not to leverage it unless the license tells an enterprise otherwise. Love the subsidized pricing while it lasts.
Yeah I flat out don't believe the 2% thing. It's possible that I was the 1 out of 50 who checked the page and saw that Claude code was removed... but it really seems like everyone I shared it with saw the same thing which is incredibly unlikely. Also I am an existing subscriber and checked the price page while logged in, so I shouldn't be counted in "2% of new subscribers" at all...
Yep, and the price point they're looking at is 95% of an engineer's salary.
Once they get people hooked, deskilled, and paying, the money ratchet only tightens.
And the companies KNOW that they're replacing engineers, or trying to. So each engineer replaced is X salary a year that's now freed up, to be made back in SaaS LLM tokens.
Is it? I’m curious because I thought they were raising prices to pay for exorbitant training costs, not because subscribers are expensive on a unit basis.
I thought inference was cheap so there was little marginal cost of a new subscriber.
Random data point: Guest passes apparently still include Claude Code in their Pro trial. If they are running a test this is a really sloppy way to do it.
It is honestly truly fucking incredible how corps still find new, innovative ways to enshittify. Regular enshittification won't cut it, they have to exercise their artistic creativity. Who the fuck comes up with the idea that what services you get with your subscription are random? It's mind-boggling that some percentage of people visiting the website will be presented with an inferior version of the same subscription for the same price. I'm not even mad (despite my colorful wording), I don't use Claude, just impressed with the bold new territory being explored here.
> It is honestly truly fucking incredible how corps still find new, innovative ways to enshittify. Regular enshittification won't cut it, they have to exercise their artistic creativity.
I had a bit of an epiphany the other day thinking about these VC companies offering products to the public at unsustainable prices. It's classic anticompetitive behavior.
You imagine anticompetitive behavior to come from a monopoly because they can afford to burn money to drive competition out before they bring prices back to profitable but the whole VC burn is the same thing. People talk about it a lot without really saying it explicitly when they talk about moats. The only moat Anthropic and OpenAI have is money and they utilize it by offering products below cost.
The two companies are just trying to outlast the other one until they are the only one left.
So it's not really enshittification as much as you were previously getting the deal of a lifetime.
In physical markets we call this kind of thing dumping, and it's often regulated. Maybe offering SaaS or compute at below profitable rates should be investigable too, to avoid killing competitors too easily?
The whole Silicon Valley VC industry and the majority of the net worth of SWEs on HN is based on dumping. "Burning VC cash" is transparently dumping, and it's squarely what the US big tech dominance is founded on. Amazon, Uber, Youtube, now LLMs. The huge majority of "success stories" of the last 15 years are based on dumping their product far below cost price, running at a loss for years until they dominate the market, and then jacking up prices/enshittifying/selling user data.
Dumping is typically used in the context of international trade.
There are some predatory pricing laws, but they're much more narrow than most people believe. There is no law requiring things to be sold for more than it costs to produce.
I think it's funny that these topics make people angry enough to demand that we make laws to force companies to raise prices. We'll stick it to these companies by forcing them to charge us more! That will show them!
Such laws would be very bad for startups and newcomers because they'd be forced to price their new product higher than established competitors who have economies of scale. It would be a nice handout to the big companies.
This happens naturally because no company can run at a loss forever.
I think it's funny that we're getting subsidized and discounted services and this makes some people so angry that the comment section is demanding laws that would force companies to charge us more.
I think of enshittification as "we're making plenty of money but let's make more." In other words greed.
Based on how much money Zitron has reported that these companies are losing on every subscription, this feels more like they're just trying to survive. In other words "ohshittification."
My take: it is not enshittification to raise the price for a product whose demand outstrips its supply. That is basic economics. There are alternatives, it’s not a monopoly. If you think it’s the best product, then pay more for it.
Personally I would be perfectly content if the price of Max went up a bit and Pro no longer worked for CC if it meant that Max was faster and more stable.
It could be an A/B test to see whether people without an existing subscription care about Claude Code (CC) at all. If they sign up then CC is disabled (or not as it is not really an issue to offer more). Capturing that info would definitely be useful to a growth team.
No, I think the test is that some new sign-ups won't get Claude Code in that tier if they pick it, and they're seeing if users will still pay for it without it.
Although the ones that never touch Claude Code are essentially a free $20 a month, the ones that do are potentially seventy to eighty dollars of compute for that same twenty dollars a month. It's not instantly obvious which customers you'd prefer (pure revenue vs. cash-negative growth; on second thought, they obviously prefer the second).
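That back-of-envelope can be sketched as a toy model; every number below is an illustrative guess, not Anthropic's actual cost data:

```python
# Toy unit-economics model of the two customer types described above.
# Every number is an illustrative guess, not Anthropic's real cost data.

def monthly_margin(price: float, inference_cost: float) -> float:
    """Subscription revenue minus estimated inference cost per user."""
    return price - inference_cost

# A chat-only Pro user pays $20 and is cheap to serve.
chat_only = monthly_margin(price=20, inference_cost=2)    # 18: profitable

# A heavy Claude Code user pays the same $20 but might burn
# $70-80 of compute (the commenter's guess).
heavy_cc = monthly_margin(price=20, inference_cost=75)    # -55: loss leader

print(chat_only, heavy_cc)
```

Under these made-up numbers, one heavy coder wipes out the margin of roughly three chat-only subscribers, which is the whole tension the comment is pointing at.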
They've preferred the second so far, but they might have a fair reason to see if they can keep growing with the first one instead or cut down on some loss leading, right?
That's how I read it too: they want to test whether people will still pay for the Pro plan if it doesn't include Claude Code. At the same time they're also saying that even if you subscribe having been told it does include Claude Code, they may still change their mind later and take it away!
Apparently it's just an A/B test. Legit LMAO moment, speedrunning reputation destruction to your entire userbase just to test a question whose answer you can probably already guess.
---
> For clarity, we're running a small test on ~2% of new prosumer signups. Existing Pro and Max subscribers aren't affected.
> When we launched Max a year ago, it didn't include Claude Code, Cowork didn't exist, and agents that run for hours weren't a thing. Max was designed for heavy chat usage, that's it.
> Since then, we bundled Claude Code into Max and it took off after Opus 4. Cowork landed. Long-running async agents are now everyday workflows. The way people actually use a Claude subscription has changed fundamentally.
> Engagement per subscriber is way up. We've made small adjustments along the way (weekly caps, tighter limits at peak), but usage has changed a lot and our current plans weren't built for this.
> So we're looking at different options to keep delivering a great experience for users. We don't know exactly what those look like yet - that's what we're testing and getting feedback on right now.
> When we do land on something, if it affects existing subscribers you'll get plenty of notice before anything changes. You'll hear it from us, not a screenshot on X or Reddit.
Personally I love how they have increased everyone's quotas to counteract the Opus 4.7 tokenizer change a few days ago, but are immediately regretting it and trying to cut off subscription users.
If the subscriptions are unprofitable, then just communicate honestly, raise the price or lower limits for new subscribers transparently, and grandfather in existing users. That's what GLM coding plan is doing and it works fine for them. Don't ruin your reputation with opaque messaging and hidden changes. Lol
MS paused Copilot subscriptions because they don't have enough capacity. Anthropic is trying to confuse new users and literally doesn't want $20/mo subscribers because it doesn't have enough capacity. Seems like there's a trend here. A lot of people in scaling threads were saying that capacity projections and DC buildouts were "fantasy" a few years ago. Not so much anymore...
Big fan of A/B tests that dehumanize the consumer into some kind of money-making lab-rat funnel whose only purpose is to be experimented on to see how much more money can be extracted from it.
I don't think I've ever been on such a rollercoaster with a company's reputation in the developer space. I started in January on the $20 plan, essentially my first agentic AI programming. I quickly started hitting limits developing several apps at the same time. I went up to the $200 plan after seeing the value.
After seeing my own issues with 4.6 and the mega-post on GitHub about declining metrics in a decent dataset of Claude chats by Stella Laurenzo at AMD (https://github.com/anthropics/claude-code/issues/42796), I downgraded to the $100 plan. Hallucinations. Laziness. Lack of thinking. The responses on those mega-threads from Anthropic rubbed me the wrong way, in a "you're holding it wrong" kinda way.
In the past week, I downgraded back to the $20 plan because the Codex $20 plan on 5.4 was working so well for me.
Then throw in other oddball events like the source code leak, and the super positive Anthropic events like their interactions with the current administration. It's a wild ride.
I can't understand removing Claude Code from $20. I'm interested to see whether this is confirmed or not.
I'm a career engineer and I went from being one of their most outspoken proponents (at least within my circle) and now.... I'm not.
Same loved them, told my team about them, got them to switch off of cursor, now I'm telling them to swap to Codex.
Anthropic really pissed me off with their harness crap. They're well within their rights but their communication over it was enough to get me to swap. I don't need extra hurdles when there's a perfectly valid alternative right there. They don't have the advantage they think they do.
I think we are inevitably heading to using the cheap Chinese models like Kimi, GLM, and Minimax for the bulk of engineering tasks. Within 3-6 months they will be at Opus 4.6 level.
This was literally my task today: to try out Qwen 9B locally on my (albeit a bit memory-constrained at 18GB) MacBook with pi or opencode, before reading this update.
MiniMax has its own issues. Server overloads, API errors, and failure to adhere to even the system prompt. It can happily work for hours and get no job done.
I ran OpenCode + GLM-5.1 for three weeks during my vacation. It’s okay. It thinks a lot more to get to a similar result as Claude. So it’s slower. It’s congested during peak hours. It has quirks as the context gets close to full.
But if you’re stuck with no better model, it’s better than local models and no models.
I have to say, OpenCode's OpenUI has taught me what modern TUIs can be like. Claude's TUI feels more like it's been grown than designed. I'm playing around with TUI widgets, trying to recreate and improve that experience.
> I have to say, OpenCode’s OpenUI has taught me what modern TUIs can be like. Claude’s TUI feels more like it’s been grown than designed.
Claude's TUI is not a TUI. It's the most WTF thing ever: the "TUI" is actually a GUI. A headless browser ships with the TUI and, in real time, renders the entire screen, scrolls to the bottom, and converts that to text mode. There are several serious issues, and I'll mention two that utterly piss me off...
1. Insane "jumping" around, where the text "scrolls back" up and then scrolls back down to your prompt: at this point, given the crazy hack that TUI is, if you told me the jumping is because they're simulating mouse clicks on a scrollbar, I wouldn't be surprised. If I'm not mistaken, we've seen people "fixing" this by patching other programs (tmux?).
2. What you see in the TUI is not the output of the model. That is, to me, the most insane part of it all. They're literally changing characters between their headlessly rendered GUI and the TUI.
> Claude’s TUI feels more like it’s been grown than designed.
"grown" or "hacked" are way too nice words for the monstrosity that Claude's TUI is.
Codex is described as a: "Lightweight coding agent that runs in your terminal". It's 95%+ Rust code. I wonder if the "lightweight" is a stab at the monstrosity that Claude's TUI is.
Anthropic will kick and scream, as those are often distilled from their latest models and that cuts into their margin. Though it's not like their own hands are clean either; it's just a different type of stealing, an approved one :-)
Getting them running is easy (check out LMstudio or ask one for some recommendations). The real question is whether you have the hardware to make them run fast enough to be useful.
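For what it's worth, LM Studio and most local runners expose an OpenAI-compatible HTTP API, so wiring a tool up to a local model looks roughly like this. A sketch, assuming LM Studio's default server port (1234) and a placeholder model name; the actual call is left commented out since it needs a running server:

```python
import json
import urllib.request

# Sketch of talking to a local model through an OpenAI-compatible
# endpoint. The base_url (LM Studio's default port is 1234) and the
# model name are assumptions; match them to whatever you actually run.

def build_chat_request(prompt: str, model: str) -> dict:
    """Build a standard OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local_model(prompt: str,
                    base_url: str = "http://localhost:1234/v1",
                    model: str = "qwen2.5-coder-7b-instruct") -> str:
    """POST one chat completion to a local server and return the text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_request(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (needs a server running locally):
# print(ask_local_model("Explain Python's GIL in one sentence."))
```

Since the API shape matches OpenAI's, most agent tools that accept a custom base URL can point at the same endpoint; the real bottleneck, as the comment says, is whether your hardware makes it fast enough to be useful.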
One thing I enjoy about the Cursor and Codex Mac apps is the embedded preview window. I know it's not as hardcore as terminal/tmux, but it's hella convenient. Cursor bugs me with the opacity around what model I'm using, though. It seems to deliberately route requests based on their perceived complexity. What draws you to Codex vs Cursor?
The LLM monsters are deeply unprofitable, going by industry hearsay (which is the only thing we have, given the ultra-secrecy of the LLM corporations). The only two LLM companies that have disclosed their finances without lies were two Chinese corporations, and they, unsurprisingly, were deeply in the red.
Remember the old saying about boiling a frog? LLM corporations need to make most of their users pay hundreds per month, ASAP. This is Anthropic turning up the temperature under the pot just a tiny little bit. Not the first time and not the last.
Matches my experience very well. All the goodwill earned from taking a stand against the DoD seemingly forgotten in a month. Coincidentally, I canceled my pro subscription and got set up with OpenCode and OpenRouter last night.
> I can't understand removing Claude Code from $20
Not according to their webpage: "Claude Code is included in your Pro plan. Perfect for short coding sprints in small codebases with access to both Sonnet 4.6 and Opus 4.7." [1]
There are clear contradictions across their marketing site. As others have pointed out, it's being removed from some help articles and the pricing chart now shows it revoked. Confusing signals, but they seem to be changing all pages in this direction and haven't updated that one yet.
I think removing Claude Code from the $20 tier is a terrible idea, I never would've gone from nothing right into the $100/200 tier. The $20 plan let me get my feet wet and see how good it could be, and in less than a week I was on the $100 plan.
I think they need to at least have a 1 month introductory rate for the max plan at $20, or devs that decide to try out agentic coding just won't go to Anthropic.
That leads to downstream impacts: when a company is deciding which AI coding tools to provide and the feedback management hears is that everyone is already used to (e.g.) Codex, then Anthropic starts losing the enterprise side of things.
FWIW their "Fin" agent on their web page tells me:
"Claude Code is not included in the standard Pro plan. It's only available with premium seats on Team and Enterprise plans.
However, Pro plans do include improved coding capabilities through our GitHub integration, which lets you connect code repositories directly to Claude. You also get access to code execution and file creation features in the Claude apps.
For the full Claude Code terminal experience, you'd need to upgrade to a Team or Enterprise plan with premium seats"
Yes, confirmed directly from Anthropic's website. Claude Code is included in the Pro plan, described as perfect for short coding sprints in small codebases with access to both Sonnet 4.6 and Opus 4.7.
You said: Is Claude Code included with the Pro subscription?

Claude responded: Yes, Claude Code is included with the Pro plan ($20/month). You can use it in the terminal, desktop app, VS Code, JetBrains, and on the web.

The main caveat is that Claude Code draws from the same usage limits as your regular Claude chat, so heavy coding sessions can eat into your quota faster. If you find yourself hitting limits often, the Max plans ($100-$200/month) offer significantly more headroom.
The "$20/month" (the plan is actually $17 now) should give you a hint: models have a knowledge cut-off and will not reflect up-to-date information unless you trigger a web search.
Do you understand how LLMs work, and that they are always behind in their knowledge? Unless Claude does a network call to check its own website, it will give you outdated information. It's a prediction model, not magic.
I had a similar ride, but disagree with your conclusion. Opus 4.7 is so incredibly powerful from my experience, that nothing else really matters and I think at Anthropic they know it. People will pay a lot for access to this model.
Yea, I've seen a lot of whining online, because its more expensive, but from the interactions I've had I'd say, that it's well worth it. To me it feels like another step change, similar to when 4.5 was introduced. Definitely a different beast.
EDIT: it is also surprising to me that everyone seems to believe the people at Anthropic are simply incompetent and recklessly risking their good reputation, while very few consider the possible good reasons they might have for taking such drastic measures. And I don't think it's because of financial pressures in their case
I can’t say I’ve used it extensively enough to draw a conclusion, but it did seem similar to GPT 5.4 in Codex.
When I threw it at a difficult issue in an iOS app, it, like GPT, came up with wrongly guessed explanations. It only found the issue after I had it instrument the app and add extensive logs. Usually GPT 5.4 is the same.
Only that with GPT 5.4 it’s at least included in my subscription, while sending 3-4 messages to Opus 4.7 for this blew through my $20 plan limits and consumed $10 of extra usage on top. At that point I can’t help but bring up how much more expensive it is.
> Only that with GPT 5.4 it’s at least included in my subscription, while sending 3-4 messages to Opus 4.7 for this blew through my $20 plan limits and consumed $10 of extra usage on top. At that point I can’t help but bring up how much more expensive it is.
Rest assured OpenAI won’t want to leave that kind of money on the table…
I've had completely the opposite experience. I've asked for it to research things and it's just told me to "paste xyz into google". Just now I revisited a chat that's 5 days old and asked it to check again (because what I was looking for might have changed), and it said "no".
It's funny how experiences can be so different. I wonder if this comes down to context. My interactions so far were fairly high-level and in some cases it having a strong opinion was actually super beneficial to the outcome. To me it seemed opinionated, but in a very good way. I can see how this could backfire though and have heard similar reports.
Opus 4.7 may be incredible, but for how long? And they may have Mythos, but I feel like they will only put it out if pressed hard by their competitors. And again, for how long will they keep the advantage?
At the speed everything is advancing, I don't think it's such an advantage. They all catch up with each other pretty fast. That's why I prefer to pay Cursor and have access to all of them instead of being locked to a single one (even if that means losing some discounted credits). If they opened Mythos today at a good price, that would be something, but that's not the case and it won't happen.
Incredible, powerful, but I couldn't believe how fast I hit the limits compared to how it was with Opus 4.6. They removed Opus 4.6 completely from CC. I would prefer it with the previous limits.
That's not how you keep your customers. None of these agents have a moat, I moved away from Cursor when they started doing what Anthropic is doing now, and never went back even when I was a paying customer since the start.
They need the devs on board for that to matter; I can get whatever I want done with lesser models already. It is quite literally about who is not going to give me the shittiest experience, and at Anthropic it sure seems they are determined to annoy everyone since they started gaining in popularity.
The age of AI seems to forget some lessons from Google (and history in general).
- Rapid change hurts the trust in your brand and product. In Google's case, using a new service became something you'd think about multiple times, as they are more likely to axe it than rivals or specialized equivalents are.
- While the model race currently has no clear winner, Anthropic's core product is coding. But just as with Skype, IE, and Netscape, there can always be another game-changer you didn't count on.
- The Pro plan is already limited for true agentic workflows. The limits now are so bad that a business relying on it would need bigger plans.
- Anthropic is already in a delicate situation where many devs are frustrated. Dropping or crippling usage even more just means this sector (which I can only assume is a big chunk) will switch to competitors' tools that are already trying to compete.
- Local models, whether at what Google calls the "edge" or even further out, will also take a bigger part in the future.
What a way to ruin goodwill with the very community they are trying to court. I am on a Pro subscription to use with Claude Code, but it sounds like the days of using it are numbered. I guess I will be trying the latest offering from OpenAI and Google tomorrow and if they are satisfactory I might just switch. Moreover, I have been recommending Anthropic's API solutions up to now to friends and clients. Based on this dumb move I will be now starting with this anecdote and then giving a very hedged recommendation.
Realistically the future of all this is that open models become good enough that LLM as a service becomes a commodity with a race to the bottom in terms of cost. Given where we are today I can easily see open weight models in 2-3 years making Anthropic and OpenAI irrelevant for everyday development work (I justify this like so: if my coding agent is 10x smarter than I am, how would I understand if it did all the right things? I want someone of roughly my intelligence for coding. I can see use cases for like independent pharma work or some such where supergenius level intelligence is justified, but for coding ability for mere mortals to reason about the code is probably more important).
The valuation is obviously based on the premise of their capturing the white-collar economy. OpenAI's charter says so openly. And Chinese robots will come for blue-collar workers next.
It would signal quite a fundamental pivot if their "Pro" plan excludes coding but supports personal productivity (Cowork). Quite surprising given most people attribute Anthropic's success to their elevation of coding above everything else. To have casual users locked out of that would be a major hit you would think.
Makes me curious about the internal thinking. One theory being they are in a capacity crisis and knocking Pro users off Claude Code is an emergency brake getting pulled. But an opposite theory is it's a revenue move and they think they have the lock in to pull it off. Especially if they are building up to IPO.
Interestingly, the Team subscription, which is still $20/month/seat, still includes Claude Code. But you need a minimum of 5 seats. So it could be a way to force people off individual plans and into enterprise plans where things possibly scale better for them, especially IPO-wise. When one user at a company wants it, they probably go buy 5 seats.
I have to assume they're compute constrained and thus need to either raise prices or cut their lowest-margin products (which amounts to more or less the same thing, but with different optics), or turn away new users.
My assumption is that people are able to very easily saturate Pro with Claude Code and therefore even though the quotas are lower (more than proportionally) the utilization of those quotas is higher enough that Pro is less profitable.
I think there is a definite possibility that they aren't compute constrained, but rather trying to improve a sorry cash flow situation before IPO.
Of course, I don't have real insight into available compute, but the vibe slope seems to have dropped a bit, at the same time as new GPUs are being shoved into datacenters as fast as possible.
I just switched from the $10 Copilot subscription to a $20 Claude subscription to get general AI and coding in one bill. I guess I'll try out GPT Codex.
OpenAI allows you to wire their models into other CLI tools; I'm advising everyone I know to lean in that direction. I'm not trying to become hostage to something like Claude's ecosystem for the rest of my development career.
Either they baited people with code and flexible usage limits until March and this was planned, or they realized they made too good a product and it costs them too much.
One thing is clear: Anthropic's communications and leadership are horrible. You don't launch or remove features like this. How this is communicated and handled is something a mom-and-pop shop would do.
Why would you even want a Claude subscription if not for Claude Code? Anthropic is obviously the best for programming, but probably nowhere else. Seems like a good way to onboard people to the Claude Code experience...everyone who's working seriously with it needs Opus, anyway. But, maybe that's the rub, if the Pro plan includes no Opus usage (which I think has always been the case), you might have a worse impression of Claude Code. Codex 5.4 is better than Sonnet, but not better than Opus.
I dunno, I'm no business genius, but I think we're starting to see these companies try to find ways to make money instead of losing it.
On LMArena, Claude Opus is ranked as the best at everything except image and video generation, which it does not support. That may be inaccurate, but it's plausible
The pro plan does include Opus usage. I've noticed the limits on the web client are a bit higher than through CC, but probably more because of the increased token usage of agentic coding in general.
Claude web is actually pretty good for dealing with random projects outside of code. I have a Home Assistant MCP server [1] behind a Cloudflare tunnel exposed to it that makes maintaining automations a lot easier.
I have been using https://claude.ai and, initially, it was good, but, unfortunately, it keeps getting worse. I had it search for contact information for a certain public entity, and in Claude's response, all emails were being replaced with [email protected] or something like that. They also added an absolutely horrendous automatic markdown in the text input, so now you can't even properly enter your prompt. It actively gets in my way and prevents me from typing what I want. Fuck you Anthropic.
I have a Claude Pro tier subscription; Claude Code, as of right now, is still functional for me. If Anthropic does boot Pro-tier users off Claude Code, I will be cancelling my subscription.
They would probably grandfather existing users in for at least a year or something, you have to imagine, even if this "test" goes very well and points to removal.
This test makes perfect sense given their actions over the last few weeks: they think they've done enough to transition to the general public and away from devs, and our goodwill is no longer something they need to be concerned with.
It's funny that OpenAI, who in my eyes went for the general public rather than devs initially, seems to be semi-pivoting and catching all the fallout from Anthropic's recent behavior.
It is a massive bummer. Up until those few weeks ago I had been hard pulling for Anthropic for quite some time; now I just don't care and hope something dope emerges quickly that signals I won't ever have to consider either of them.
They already effectively halved it with the introduction of Opus 4.7 and the new tokenizer, which basically gives you about half as much usage for the same price. Convenient to price based on tokens and leave what a token is as a moving target.
This is a risky move. I might have paid $20/month for my personal projects but the Max subscription is a bit steep.
Now I'm going to learn more about local models. They don't need to be as good as a frontier model; good enough and free from all this drama is a win for me.
Same, I'm currently unemployed and the $20 helped me initiate many small projects. The recent tightening on tokens made me start testing local models on my machine. Though Claude works better for the front-end part, imo.
"Free of this drama" and free is a great option for companies. Of course most use API billing, but let's not forget that there are places where the budget is limited and being good enough is just perfect.
I can't find an announcement yet; however, the pricing page now shows it's not included, and various support articles have removed any mention of the Pro plan including access to Claude Code.
See [1] and [2] for an example of a support article that's had claude code removed as a Pro feature.
I guess this is the beginning of the end for subsidised model access, at least from Anthropic.
Because 'claude -p' is a backdoor for any third party client to use subsidized token pricing, and they've flipped course again to decide they do want the OpenClaw type users as long as they're on the $100 plan.
It was obvious, which is why their earlier decision to attempt to restrict 3p access was hugely unpopular. They're now trying to walk that back but on the condition that those users need to be on the higher plan.
Now they're hurting their popularity with those who actually don't mind using Claude Code. I've been quietly swallowing a lot of performance degradation over the past few weeks, since I get that there's a resource crunch, but I'm definitely not going back to copy-pasting between browser and editor. And I have no intention of upgrading to Max or paying per-token usage.
The question isn't whether anyone could have missed it, but whether Claude Code has release gates that allow people to require that obvious problems should be resolved. From their release velocity it's pretty clear that they do not.
My usage of Claude Code in the pro plan is definitely metered. Every couple hours I have to wait an hour or two and the last few weeks I've hit my weekly limit on Wednesday.
With GLM and Kimi getting better and better, with both still providing low-cost coding plans with higher quotas, and with how trivial it is to switch to them even within Claude CLI, I'm not sure what makes Anthropic think their users would rather pay 5x than switch to the competition.
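For what it's worth, the "trivial to switch" part is usually just a pair of environment variable overrides that the Claude CLI recognizes. A minimal sketch, assuming Z.ai's documented Anthropic-compatible endpoint (the URL and key are placeholders from their docs and may have changed; check the provider's current instructions):

```shell
# Point the Claude CLI at a third-party Anthropic-compatible endpoint.
# Kimi and other providers expose similar endpoints; consult their docs.
export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-glm-api-key"   # placeholder key

# From here on, a `claude` session in this shell talks to the
# alternate provider instead of Anthropic's own API.
echo "Claude CLI now routed to: $ANTHROPIC_BASE_URL"
```

Unsetting the two variables (or opening a fresh shell) switches you back to your regular Anthropic subscription.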
I signed up for a 3-month plan with Z.ai so I could try out GLM-5.1. That was a few weeks ago, when it was still $27 for 3 months; now it's $48 for 3 months. I hit limits at least as much as I do with Claude. I hit a weekly limit at one point and it said I wasn't going to be able to get access again for 6 days, so I must've somehow hit their weekly limit on day one of the week. And that was after several timeouts.
Never mind GLM and Kimi, even GPT 5.4 offers a great plan for $20/mo. Even if it gets increased rate limits after May, it's still quite likely that casual users won't be hitting them on a regular basis.
Why is management at Anthropic trying so hard to ruin their reputation with developers? I missed the OpenClaw hype but it was something that kept me excited about my yearly subscription.
It makes no sense to do one of the higher tier plans unless they are directly generating you money.
They care about developers from companies that are on their team/enterprise plans or using bedrock.
Individual users barely matter. That's probably also the same group that decides to switch to Codex/Kimi/[whatever the hottest agent on any given day] on a whim, which Anthropic doesn't necessarily want to do business with.
Feels like it's beyond optimistic on their part. I'm just starting to hear their name blended into companies' desires on job listings, and they are destroying the goodwill of the devs who surely are the main reason their name landed there. They aren't dug in like a Microsoft; maybe they get some staying power from no-code people who feel trapped, but I'm done with their nonsense already and won't recommend them anywhere. Other stuff is already good enough to match.
A fraction of them do. Many just use whatever the employer provides to get their job done. HN users only represent a small sample of the overall software developer population, which is nowhere near as enthusiastic about new things.
At my company, devs were the ones pushing for the Claude subscription. Left to management, we would have only had GitHub Copilot – we already have an existing relationship with them and the tool is good enough.
If Anthropic is intent on losing the goodwill of the devs, they might not be happy with the consequences. Their product is quite commoditized at this point – the latest GPT, Gemini or GLM is just as good for most enterprise tasks.
It is over for the little guy: home enthusiasts and vibe coders. Too many of them are saturating resources for Max users.
If you cannot afford a few-hundred-dollar subscription, go out and breathe fresh air. But if you can, watch where the ball is rolling: few-thousand-dollar subscriptions and even fewer programmers.
Agree about the psychological impact outpacing the likely actual impact, but that's a relatively temporary phenomenon as we all adapt to the new way things work.
Productivity-wise, employment is far more than code-production productivity in a vacuum, and productivity gains are rarely captured by employees (see the famous chart on worker productivity, where that correlation changed around 1970). I wouldn't expect to see much in the next 1-2 years besides noticing effective teams increasing their velocity of features.
I think people in forums like complaining about things and aren’t representative of the broader set of people who are just using the tools, so no real paradox. For vast majority of tech jobs, $200/mo is still an absolute steal in terms of what these tools offer. Only the dullest of companies would not realize this.
Fwiw in the 80s-90s computers also didn’t really register in productivity metrics. Qualitative changes occur long before accurate measurement catches up.
Because most people work for someone else and don't decide their own salaries. It's not doubling productivity, but even a 10-20% boost to productivity for a team of engineers means that, as a business, even $1k per month per seat is perfectly acceptable. For consumers and hobbyists that basically kills access.
Truly makes no sense. I pay for the $200/month plan and end up using about $3k/month worth of API costs. I imagine that the only reason they haven’t cut me off is because my habits serve as good training data for them.
Guess they’ve decided to move in the direction of allocating compute primarily to power users and enterprise.
But power users are not a sticky customer base. I just bought the ChatGPT Pro plan and would immediately switch over if the model performance is better and/or I get more compute.
I would love someone to play devil's advocate against this perspective:
While these tools stand to enable the democratization of productive capability in software engineering and other tasks (creating a renaissance for solopreneurs, let's say), what seems more likely to actually happen is that entrenched capital will become the only player with real access to this "knowledge as a utility" (was it Altman who called it that?).
We already see this playing out on two fronts: 1) the gradual reduction of services and 2) the DRAM market, where local-first tools (i.e., potential disruptors of the emerging "knowledge monopoly" created by the big AI firms) are being stifled by supply shortages. How many promising small-to-medium-sized competitors are being snuffed out of existence (or never starting) due to the insanity of the DRAM/storage/CPU (soon) markets?
The currently subsidized access that we have to the big Opus-like models will, in parallel, be gradually taken away until only the big players can afford it. And in the end what we will have is hyper-productive skeleton crews at a few consolidated firms performing (or selling expensive access to) basically all of the knowledge labor for society, with very little potential for disruption due to the hardware and "knowledge" scarcity engineered (in part, maybe) by this monopoly.
Not necessarily a closely held belief – just a hunch – which is why I want to see what parts of the picture I might be missing.
Devil's advocate here: Pro and Max tier customers for all the major inference providers are loss leaders, from the data we've been able to figure out and reverse-engineer. They are effectively a marketing exercise.
The real profitability is selling tokens to enterprise, and enterprise demand is growing so fast that they are short on the total amount of tokens they can generate per minute, and are prioritising rationally - enterprise gets a better experience - instead of optimizing for their lowest paying (and most loss leading) customers.
We are in a hardware crunch right now but that won't be forever, and eventually (likely 2028) we will get experiences like we got in January from pro-sumer accounts again.
Not only because of cost. Mythos has only been released to some of the big tech players because it's "too dangerous" [0] for us little people.
It's easy to see this becoming a permanent position; the latest models and smarts are reserved for establishment members only, the riff-raff get the cast-offs. So the establishment is preserved and the status quo protected.
[0] I'm putting scare/irony quotes around this, but if the reporting is accurate, there is something to this; we built the internet on string and duct tape, it's not hard to see how a very smart AI could cut it to ribbons.
In periods of massive inflation, only the most wealthy survive.
But there's competition out there -- the open-source Chinese models. In their current form, I assume they will turn off many people, but new models based on those are likely to appear. Also, OAI and Google will release new models and pick up the lost customers.
Claude has become practically unusable for Pro users in the past few days. Opus 4.7 blew through an entire 5-hour limit on one question and didn't even finish answering it. Zero value delivered.
Opus 4.6 gives 2, maybe 3 questions before blowing through the Pro 5-hour limit as well. We are forced to use Sonnet, which makes the same mistakes over and over, and then to start trying other companies. To make matters worse, as we try to survive between credit resets it reuses old code, re-introducing issues into the codebase that we had already fixed on our own and with other models.
Anthropic in just a few days has gotten me to try GLM 5.1, the new Kimi, and go back to OpenAI. OpenAI also seems to introduce new bugs unless carefully micromanaged. The advantage Claude has is that the models are more careful and can refactor code instead of letting it bloat as they go. But the throttling happening now is breaking things and making the entire subscription unusable. I really hope they fix it soon.
I'm starting to think I've been A/B tested, because this was my experience for almost a year with Claude ever since I tried it for coding. Meanwhile, my coworkers seemed to be able to use it for long periods of time without getting rate limited.
One interesting variable is that I'm located in Vietnam while my coworkers are located in Norway and Europe.
To work around this issue I used Claude for coding with a Copilot subscription which was much cheaper and had virtually no rate limiting.
Copilot gives you some set amount of credits each month, but you can also pay as you go if you run out of credit which is much better than the 5 hour window crap claude code would give me.
The only opus model available now on copilot for some reason is 4.7 and it costs 7.5x tokens, while everything else is 1x, 0.33x or free.
But I switched to using GPT 5.4 medium for a month or so which I find very reasonable.
My personal LLM coding stack is now OpenCode, Claude Sonnet for ideation on spec with OpenWhispr for voice-to-text, GLM-5.1 for the orchestrating loop, GLM-4.7 for coding, and DeepSeek R1 for review and validation. It works much, much better than the Claude Code setup I have at work for substantially less money to boot.
At this rate I fully anticipate being able to run a comparable stack on a 128GB Mac Studio using quants of newer-generation distilled OSS models in a year or two. Being able to ramble to a computer for an hour about features and technical philosophy then have it build a nearly-working app for $50 is an exciting feeling. There's still a long tail of productionization and fixing what the model didn't adhere to but it's still incredible.
I'm locked in for a year of Claude Pro. I encountered the same issues as you a couple weeks ago: I'd get like one solid plan done and really, really hope it was a 1-shot, because that was legit all I was gonna get out of it for those 5 hours, and it would be ~10% of weekly usage, which really made me feel scared to hit send.
I got the $20 GPT tier, and now I just use Claude to craft MD plan docs instead, then hand them off to GPT 5.4, and it has been working great. It can do about 4x as much work or so, based on my feelings (not accurate). If I have just small, simple stuff to do I might still fire those off with Sonnet, and that seems plenty viable, but as soon as it's an Opus-tier task I swap to this workflow.
It's a little annoying that I'm now kinda managing both a .claude/ and an .opencode/ folder, but I just have the .opencode/ stuff reference the .claude/ stuff so it's a little less bleh.
I've been keeping within my usage because I've been in a funk a bit, but when I was slightly more worried I'd sorta juggle whether Claude or GPT would handle writing some initial tests, since it did seem kinda imbalanced otherwise. Seems like GPT just spam-resets weekly usage throughout the week anyway, so it's prolly nbd.
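The two-folder juggling above can be reduced to a single source of truth with a symlink. A sketch, assuming a hypothetical shared guidelines file (the file name here is made up; point the link at whichever instruction files your tools actually read):

```shell
# Keep one canonical instructions file under .claude/ and let
# .opencode/ borrow it via a relative symlink.
mkdir -p .claude .opencode
echo "Prefer small, reviewed diffs." > .claude/shared-guidelines.md  # hypothetical file

# Link target is relative to the link's own directory (.opencode/),
# so ../.claude/... resolves back to the canonical copy.
ln -sf ../.claude/shared-guidelines.md .opencode/shared-guidelines.md

cat .opencode/shared-guidelines.md   # prints "Prefer small, reviewed diffs."
```

Edits to the `.claude/` copy then show up in both tools automatically, with no duplicated content to drift out of sync.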
> Claude has become practically unusable for Pro users in the past few days. The Opus 4.7 blew through an entire 5 hour limit in one question and didn’t even finish answering it
Glad I’m not the only one!
I’ve been limited so often this week that I’ve set up half a dozen token-compression tools in my workflow and had to do a crash course in token optimization.
Of course, it seems to only slightly delay the inevitable and doesn’t really solve the problem.
I wouldn't be surprised if folks start complaining to California government agencies like the Department of Consumer Affairs, and they take it seriously.
There is a lot of political capital to be earned by appearing to be "tough" on AI companies.
I have to guess that they're compute limited somewhere or the new models are incredibly overusing tokens, so I guess you need to wait for new data centers to come online?
> When we launched Max a year ago, it didn't include Claude Code, Cowork didn't exist, and agents that run for hours weren't a thing. Max was designed for heavy chat usage, that's it.
Is there a wager that this is 100% foreshadowing Claude Code will be removed from the $100-200/month Max plans soon and go to something like API-only? Or only available on like a new $500-1,000/month plan? Restrict the $100-200/month ones to Claude.ai (website) or Claude desktop app only?
Either way, doesn't seem good to say it's a small test and then start justifying it in this direction.
Do they have a substantial userbase for this outside of claude code? The only two use cases for LLMs that seem to have significant traction are programming, and erotic roleplay lol. If they stop catering to devs, who is their market?
FWIW, I just heard this guy (Head of Growth) on Lenny's podcast a few days ago and one of the things he explicitly mentioned was creating intentional friction for growth. This seems to be one of those tests.
So their minimum workable offer for devs just went from $17 to $100.
Also, I don't see how the Pro subscription is relevant anymore. Nobody pays $17 a month just to chat.
I just unsubscribed. :)
Time to try ChatGPT Codex, which even works with the free subscription (don't expect a crazy token allowance, of course).
I think that Anthropic has capacity problems. They went all in on acquiring new customers, but now they don't have enough capacity to both serve users and train new models, so they are trying to limit usage.
It is pure speculation of course, but I don't have any other explanations on the stuff they are pulling in the last 2 months.
Yeah, this is pretty clearly what's going on, but I wish they'd be more transparent about it. Funneling compute to Mythos and Design, while auto-setting effort levels lower and removing user control of extended thinking. I don't think the need to shuffle compute around is unique to Anthropic, though. I suspect it's part of why Sora got killed. And everyone's having uptime issues. Are we reaching the limits of the available compute?
if this is accurate, and not some "oops we made a vibe-coding mistake updating our website" I am going to hit the "cancel subscription" button so hard that my desk will break in half.
I have an unlimited-usage API billing plan through my dayjob, but for obvious reasons they don't allow piggybacking personal usage onto that. so I paid for the $20/mo personal plan as an easy and relatively cheap method of professional development / keeping my skills current. I don't particularly mind paying $20/mo, but I'm absolutely not paying $100/mo.
also, part of the reason I didn't mind paying for the personal subscription is that I liked having consistency between the tools I use for my dayjob and the ones I use for side projects. if that goes away, then I might as well switch away from Claude usage at work as well. I very much doubt Anthropic's revenue predictions for this change are taking things like that into account.
making a change like this without an announcement, just sneaky updates to product pages, is also an absolutely unforgivable thing to do, in terms of me trusting them as a company.
If true, very strange change when Codex (at both 20 & 100) is a much, much better deal for a model much better at most coding tasks, with way more usage even with the /fast mode enabled. Is losing most non-enterprise customers the right move for them?
Equally, will offering a presumably unprofitably large quota of Codex tokens at $20 to retain non-enterprise customers turn out to be the right move for OpenAI?
Would not be surprised to see OpenAI follow suit.
Or perhaps OpenAI's LLMs are just so more compute efficient that they can actually offer that sustainably...
Feels to me like it's a battle over who has the most compute. OpenAI does not seem to be struggling with the 2x usage on the new $100 plan, which is very close to unlimited usage with the best-performing model on the highest reasoning setting. Not to mention the resets every 1 million customers, or the other generous usage multipliers in recent months. Meanwhile, Anthropic seems to be desperately trying to cut down on inference with their changes to reasoning effort and more lately, so they might be focusing on what they consider to be more valuable customers for their long-term strategy. The $20 plan with Opus had gotten so bad on CC that they might've just pulled the plug to stop people from complaining about usage limits. If OpenAI can burn money longer and capture the market from the bottom, I think they'd win in the long run.
That's exactly what I fear: that Mythos/Glasswing has made Anthropic confident they can survive by only serving that type of customer. Would be sad to see.
It cracks me up when I hear takes like - 'if you're not using more than $20, the product isn't for you because you're not a real user.' If you use CC as an assistant rather than a replacement for your own thinking, follow SDD, and use the tool thoughtfully, you deliver a lot more and you don't need the 5x or 20x limit. It's a different story if you're vibe coding, but then we're not really talking about AI-assisted work - your three prompts barely count as doing any work. I've been on Pro for 2 years, but if this is how things are going, I'll look for an alternative. Luckily, there's plenty to choose from.
All I want is a reasonably priced subscription combining both coding AI and general AI in a single bill for non professional use that allows me to opt out of my data being used for training.
Google limits history to 72 hours if you opt out of training even if you pay them $20 a month which rules them out for me. I guess I'm going to try the $20 chat gpt plan.
At this point I am wondering if I need to accept that we're moving to a token-based model and get comfortable with OpenCode and manually switching models.
Seems like a pretty bad business move if it's really what they're doing. They should want devs using the product on a cheaper subscription to see the value with profitable limits on usage.
I think the only reason to do this would be that they just can't scale up to service the volume they have and need to cut down significantly on the total number of users. Seems also like a rough business proposition. Most of the pro plan users would probably migrate to a competitor at a similar price point (I know I will).
The only other possibility would be if they are losing too much money on the compute power and just can't offer it at that price anymore. But then upgrading the plan gives you more compute per dollar, so maybe they're just banking on people not actually using all of what they pay for?
I had previously thought that the inference cost of using a trained model was relatively low and that most costs went into training new models, but maybe that is less true with the more powerful newer models.
If it costs a ton more to serve Opus vs serving something like Kimi or Qwen, then I think most people just won't use the more expensive version for most things.
Claude Design was iterating on the plans page and decided to remove clutter and their review bot LGTM’d it as “minor copy change human review not required” and auto-merged it.
Maybe they're floating a trial balloon to test sentiment. That way, when they're caught, they can just point at the other page and say it was just a mistake.
Next they’ll slowly reduce how much CC usage you can get out of the $100 Max plan, then introduce a new $300 “Max Plus” plan with “40x” usage.
“You asked, and we listened: Introducing Max Plus, our biggest plan yet, designed for those…” blah blah
Same. I hope we're grandfathered in. Otherwise current pro subscribers who signed up with the understanding that they could use it in Claude Code are going to be extremely pissed and go off and sign up for alternatives (or start running local models instead). I mean, I guess they could say too bad, they got your money, but this would destroy their brand among people who are currently their most loyal users.
Same. I'm not a dev, but I use CC a few times a week and it's been a great help.
However, my company paid for my annual subscription, so maybe I'll ask our lawyers for advice. The only reason they paid for this was my access to CC, and with my usage the next tier wouldn't make sense, AND no one expects Anthropic not to nerf that too.
Before you have lawyers look at it, wait until you’re actually impacted. Nothing has been removed from existing subscriptions yet and their employees Tweeted that existing subscriptions aren’t impacted.
You're right, I didn't scroll down. I wonder why they didn't update the top cards that everyone sees. They do it for Claude Cowork but not Claude Code? That is not very transparent. How does it make sense? It's not like Claude Code is too niche to be included; it's in the main app, and I know multiple non-techie people who use it.
I can't believe they are yanking tool access instead of just reducing the token quota or simply pulling Opus 4.7 access. To be fair, even that would be poorly received, but at least people would have the choice of working within limits. Claude Code is their real winner, and a great on-ramp for newcomers to AI-assisted development. They are playing straight into OpenAI's hands.
I would assume users who have an existing subscription will be grandfathered in.
It would seem misleading to sell monthly, or even yearly, subscriptions under the guise Claude Code comes with the subscription, for it to only be yanked out underneath you. (Although depending who you ask, Anthropic have already done actions similar to this).
Anthropic clearly doesn't understand that customers see their brand as "Claude," Google's brand as "Gemini," and OpenAI's brand as "ChatGPT." They have so many plans and exclusions that they risk customer confusion. I was surprised when I was paying $200/month for Claude Code, finding it super helpful, and then had to pay separately to get API access for an experiment. Why are so many parts of "Claude" separate from each other, especially on a $200/month subscription?
Anthropic had better get this sorted out with a proper product manager and marketing, or they risk customers jumping to easier-to-understand platforms that are good enough.
That was really opaque to me too, re: API access. I initially thought that at $200/month I could get whatever I needed. I eventually set up an OpenAI API account with a few bucks to try what I wanted to.
Hmm, we just bought my wife an annual subscription at the Pro tier, largely to use Claude Code. Wonder if she'd be grandfathered in or if we'll need to get a refund.
Hey, I'm a pro, and I feel genuinely insulted. I could consider going back to Claude Desktop + MCP, but I'm getting tired of this telenovela, and will probably cancel my sub and take my business elsewhere.
I'm curious about their expectations and how they will interpret the results.
On the one hand, the people there are supposedly among the smartest on the planet. On the other hand, they consistently forget that they're dealing with LOYAL humans, and these humans prefer respectful communication beforehand instead of being messed with every other day.
My hope for reasonable behavior is to not handle it this way. Decrease limits and increase prices if you can't handle it and be _honest_ about it.
Are they just looking for a way to rationalize another hostile act? And already have expectations like:
- "minus 10% in pro signups" -> oh, let's drop those coders who won't pay anyway
- "minus X% in pro signups and plus X% in max" -> awesome, PAY UP!
Until you work for a company or government agency that is subject to any sort of technology audit. The moment offshore processing in China comes up, you'll have a never-ending hole of questions to answer.
The last couple of weeks using Claude has been…interesting to say the least.
Additionally I run a constant hacking contest between GPT and Claude. It’s a toy project and it simulates an attack/defense of a small corporate network.
Claude used to win pretty handily. Suddenly it’s started to lose 90% of the time. I thought GPT had gotten better but no, looking at the logs it seems that Claude is slower and more prone to running in circles. This is still the case when switching to Opus 4.7.
I don’t know what that means but it’s undoubtedly worse.
Anthropic never wanted my money anyway... they don't allow work and personal accounts to have the same phone number. I had to close my personal account, otherwise I could not complete onboarding at work.
You should be blaming your employer for forcing you to use a personal device to access company resources. You should have been given a company phone or stipend.
To help you decide if you should keep your Claude subscription, you can see how much of your code is written by Claude Code with my project (open source, local): https://github.com/gelatinousdevelopment/buildermark
At least for me Claude Code is still working on my Pro plan. I don't know if that's because the change simply hasn't propagated all the way through their systems yet (the change is now up on the main Claude pricing page and on their support pages, but not on the Claude Code landing page yet), or if it's because existing plans are grandfathered in, or what.
In general Anthropic seems to be pretty bad at clearly communicating what is going on. I have both Claude Pro for Claude Code and ChatGPT Plus for Codex, and lately I've been reaching for Codex first more and more often... at least for the hobby stuff I'm using Claude/Codex on, they seem pretty much equivalent in terms of practical capability/usefulness.
Would it really be that hard for them to just make all of the changes and then do a redeploy rather than doing them incrementally? It's not like they're just editing the raw HTML sitting on the server manually, right? Actually, don't answer that, I'm not sure I even want to know the answer.
Note that some companies, like Amazon, run Claude on their own hardware. They didn't change the model parameters during the Claude Opus 4.6 drama.
If Anthropic keeps getting worse, try Amazon Kiro and other companies that run Claude on their own hardware.
It might be expensive and have a worse experience compared to Claude Code, but at least the model itself is the "original flavor."
Anthropic NEEDS to get better at communicating with their customers. The most meaningful updates we get on changes come from employees on X. It's unprofessional and unsustainable.
Max is next. He essentially admits to it in one of his tweets/posts. Explicitly citing it as an example of how they misjudged usage relative to pricing.
There are a lot of fishy smells coming from the timing of the Mythos announcement and the reports of issues for casual users. Combine that with the mass rejection of 4.7, and it kinda seems like they are burning their "non-research" users in order to keep the Mythos users warm.
I could be connecting unrelated dots here, but it sure as hell seems quite coincidental to me.
I see lots of speculation that Anthropic needs to cut usage because they are compute constrained. If that's the case, will they be focusing on reducing compute costs for their models?
From what I can tell, Opus 4.7 is more resource-intensive than Opus 4.6, which is more resource-intensive than Opus 4.5.
Oh FFS, Claude Code is the only reason I have a Pro Claude subscription. I don't even use my personal subscription all that much after spending all day with Claude/Bedrock at work. I will absolutely cancel my Pro subscription and continue to use local models / Codex if Claude Code stops working.
I realize this duplicates a lot of sentiment already in this thread but anyone here with pull at Anthropic please understand it will undo a lot of the goodwill that made Claude so successful in the first place.
Local AI is almost impossible right now with the prices of RAM and GPUs and the sizes of decent models. No way spending even an optimistic 10k, but more likely 20k, on a setup that is good for 5-6 months makes any financial sense.
I disagree wholeheartedly. Older models do not perform anywhere near as well as newer models, and certainly not once you throw in agents that can sense-check, security-check, refactor, and balance and research queries as they run behind the scenes.
Impossible to find Mac minis in some areas, and if this goes through expect it to get worse.
I settled for the AMD rough equivalent. It’s not perfect, but it can still handle most of the work. Now if only extra RAM would come down in price… I find I need about 5 GB more than I have.
The only thing they'd need to do to enjoy the positive PR from the DoD spat is shut up and improve (or at least not worsen) the product.
Even the downtime would've been fine (as GitHub shows). Instead they're pissing it all away by letting employees make random announcements on random platforms.
You could try customer support; that chatbot will happily loop you through some more non-answers, but it will try to make you feel good about those non-answers :)
But the current plans are unsustainable and prices will have to be effectively raised sooner or later:
> Engagement per subscriber is way up. We've made small adjustments along the way (weekly caps, tighter limits at peak), but usage has changed a lot and our current plans weren't built for this.
I use it on Pro and was just thinking today, there is no way $20 covers the cost of it. But I'm long term unemployed and can't afford any higher tier, so if they drop it guess I'll have to find a non-anthropic solution somehow.
Sonnet in the Claude Code harness is hard to replicate out of the box. That vertical integration is not easily replaceable but by no means impossible...
OpenCode and their Go plan will get you close if you're willing to put in the config work.
For when you do need the larger models Fireworks has a pretty generous 'Pass' that comes out to about $7 a week for some of the larger bleeding edge models.
Other than that Codex's $20 plan is still somewhat valuable though they keep reducing usage. Google's $20 plan will get you some Opus usage in Antigravity and a generous amount of Gemini. Not sure how long that will last as they've been tweaking pricing and planning language recently too.
I remember when they first added Claude Code to Pro — it was limited to Max initially — and my first thought was that it seemed kind of stupid, because at one fifth of my current limit, I would be hitting walls all the time...
I’ve found that I hit the limit just around the end of the 5-hour window, so it’s definitely been usable for me.
But I’ve mostly been using it for gitops infrastructure in my homelab. I wonder if the token usage is lighter than if I were developing an application.
It was for about the first 6 months after I subscribed, then the rate limits were tightened to the point of uselessness and pushed me to cancel and go for the Codex plan instead.
Unrelated to the Claude Code change, I'm fascinated by people on Twitter and Bluesky posting screenshots of the answers they get from AI like it's an original source of information. It's as if some users see the AI as an authority, and derive some kind of social capital from that authority. For example, in the OP's linked Bluesky thread, one person replies with "Fin says it’s included with Pro" and attached a screenshot from "Fin AI Agent" (which I haven't heard of) that claims Claude Code is still available on the Pro tier. Is that valuable? Personally I don't trust what any AI has to say, especially when the subject is currently in flux.
Another example, I recently saw two people over on Twitter posting LLM responses at each other in a bitter argument about Vercel's security breach. They made no attempt to pretend they'd formulated the ripostes themselves, it was just screenshotting one-sided conversations... What's the point? They could've saved themselves the trouble by spawning two LLMs, naming them "John Doe" and "Fred Doe", then telling them to argue and post the name of the winner.
Disclaimer: I don't use Twitter, Bluesky, Mastodon, etc., so maybe it's not that deep.
How long until the $10 Github Copilot subscription goes away? That was a great deal for my limited personal programming. The only reason I switched from it to Claude was to get coding and general ai in a single bill.
I think Github Copilot is in the process of slowly winding down right now. They've been putting very, very long (multiple day) rate limits on users for various esoteric reasons for weeks now and just yesterday or so paused signups.
This makes sense given Anthropic’s recent downtime and resource constraints.
Opus 4.7 consumes tokens at a faster rate and folks were complaining that the Pro plan included too few credits for real work.
And Anthropic now allows `claude -p` (which invokes Claude Code) for 3rd-party agents like OpenClaw, which consume far more tokens by running autonomously, 24/7.
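For context, `-p` is Claude Code's non-interactive "print" mode, which is what makes it scriptable by outside harnesses. A rough sketch of the kind of headless invocation the comment is describing (the prompt and output handling are illustrative; exact flag support varies by Claude Code version, so check `claude --help`):

```shell
# Headless "print" mode: run one prompt, emit the result, and exit.
# An external agent can loop on invocations like this instead of using
# the interactive TUI, burning tokens continuously.
claude -p "List the TODO comments in src/ and suggest which to fix first" \
  --output-format json > claude-result.json
```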
I would hope that we'd be grandfathered in since we signed up with the expectation that we could use it in Claude Code. I could see where maybe that might lead to problems down the line where they do some kind of update and "forget" that people who signed up before a certain date were supposed to be grandfathered in.
Yes, it's been a way better deal for me to go with a subscription than pay-as-you-go in the past. I had a month where I burnt through ~3.8B tokens, which was somewhere in the ballpark of $8k worth of savings.
Now, though, I don't dare spend tokens on basic note taking with Sonnet, because I'm hitting the limit after a couple million tokens on the 20x plan, so they've really tightened the purse strings since November.
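For a sense of scale, the savings claim above can be sanity-checked with a quick back-of-envelope. The blended per-token rate here is an assumption for illustration only; real Anthropic pricing varies by model, input/output mix, and prompt caching:

```python
# Back-of-envelope: subscription vs. pay-as-you-go API cost for heavy usage.
# The blended rate is an illustrative assumption, not Anthropic's actual pricing.
def api_cost(tokens: int, usd_per_million: float) -> float:
    """Cost of `tokens` at a flat blended rate in USD per million tokens."""
    return tokens / 1_000_000 * usd_per_million

monthly_tokens = 3_800_000_000   # ~3.8B tokens, as in the comment above
blended_rate = 2.20              # assumed blended $/M tokens (input + output mix)
subscription = 200.00            # Max 20x monthly price

pay_as_you_go = api_cost(monthly_tokens, blended_rate)
savings = pay_as_you_go - subscription
print(f"API equivalent: ${pay_as_you_go:,.0f}; savings vs. subscription: ${savings:,.0f}")
```

At that assumed rate, the API-equivalent spend lands right around the ~$8k figure quoted above.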
Everyone that is upset about this should take note: you are not a (coding) customer at $20/mo. Their coding customers spend thousands per month (week!) on Claude and it's growing faster than they can keep up with (source: I'm one of them, and I know many others like me. We're budgeting 10-20% of engineering salary spend on tokens). It sucks to no longer be able to code on the cheap, but don't fool yourself into thinking you have any leverage here.
My leverage is I’m canceling my plan. OpenAI gives me Codex + ChatGPT for my $20. I used Claude Code sparingly, but I enjoyed it and it worked great when I needed it. Access to it was a reason they got my money a few months ago, and it’s been a shit show of reduction of services since.
The Team plan still shows “Claude Code” in a main bullet point. Which would indicate it is part of the Team plan regardless of whether it has premium seats or not.
But it seems this is all in a state of flux.
And there’s the lovely asterisk at the bottom:
> Prices and plans are subject to change at Anthropic's discretion.
I assume this has to do with the $20 tier now running out of provisioned tokens so quickly as to be not particularly useful, giving users a bad experience.
The million token context + reduced caching period + new models using more tokens made this a probably unpopular but perhaps unavoidable development.
There's a hard problem here balancing costs and experience. I'm afraid that, despite the bad experience for people, this is necessary and $20/month was just too big a loss to sustain.
Is there any marginal cost associated with a new subscriber?
I have always heard inference is cheap and the cost was in training, so I assumed any subscriber was making them money, just not enough to cover their insane fixed costs.
I was working on my biggest project of the last 3 years, and I don't have much money, and they go and cut Claude Code from Pro. Like, WTF, so why am I even paying for Pro?
Jesus Christ, it’s literally going to become too big of a risk to depend on Anthropic.
As someone who tries to manage usage for a small team: they just added Claude Code to the Standard Team seat, and now they are removing it!?
Not to mention that they will ban your entire organization because a bot decided you violated their ToS, with no communication and no way to contact anyone to understand what happened.
If this is real, we are switching to OpenAI or Gemini; it is not worth all this nonsense.
This just lends more fuel to AI skeptics that this entire thing is a massive, unsustainable grift. The explanation only adds confusion and implicitly means that this was not a mistake. What is someone to take away from this?
That $20/month is not profitable? That Anthropic thinks that people are willing to pay a 400% markup without batting an eye? That Anthropic is desperately trying to clean up their burn rate? Why should we trust a company that can screw up basic PR this hard?
No particular opinion on this change, but generally pricing is a great way to separate dabblers from serious users. There isn’t a great deal of value in dabblers or what they produce; I imagine that training data isn’t worth much relative to the pro users’. Similar pricing story with the $100 yearly price for Apple developer accounts that people complain a lot about. The reality is, if you’re serious about making something, these costs are pretty cheap.
The folks hurt most by this are serious people in developing countries and young people starting out. Occasionally a dabbler turns into a serious user but I imagine that’s far less likely than people wish it were.
The value to companies who make these changes is they don’t have low value users or low value contributions to worry about, which has its own not insignificant overhead. In the age of AI slop everywhere we’re likely to see a lot more attempts to separate the wheat from the chaff.
The dabbler/serious user distinction isn't the only framing here.
Assuming this limitation applies to team seats in the same way, at $20/mo, businesses could afford to have everybody on the plan. Plenty of folks write only a few hours of code per day—or even per week in their job. These are still professionals, not dabblers.
This is more ethical than what they've been doing, trying to keep those subscribers but limiting them to the point it's become unusable. But it's also kneecapping themselves, because they'll miss out on any innovation and hype coming out of the hobbyist community.
Huh? I just don't understand why they're doing this. Feels like shooting themselves in the foot, given that Claude's individual subscribers are a large part of who is introducing all their enterprise customers to them. Plus removing access is never good for public perception.
And of course you never want the first tier it's included on because you need actual usage, so in reality you need Max 20x - the price of which I can't even see on the pricing page
The clusterfuck of the various pages not all being updated to reflect the same story...
I thought we now had advanced tools that we could ask to do things like: "Remove all mention of Claude Code in the Pro plan, but not in the Pro Max plan".
But apparently the CGI-days called and asked the webmaster to manually edit .html files one by one?
The "5x" and "20x" no longer make sense for Max. It's supposed to be 5 times the Pro limits. But if only Max 5x has access, then they need to be renamed to "Max 1x" and "Max 4x".
Trying to chat with "fin" is like trying to argue with a bowl of congealed oatmeal.
Me:
Why is your website saying Claude Code is not included in Claude Pro subscriptions?
"Fin", the slow, dopey AI agent:
That’s confusing, but it usually comes down to what “Claude Code” means in that context. Claude Pro does include Claude Code (the Code tab in the desktop app, and Claude Code access for individuals).
What Pro does not include is API access via the Claude Console. Those are separate products: Pro is for chat, and the Console is for building apps with the API.
So if the website is saying “not included,” it’s likely referring to Console/API access, not the Claude Code feature itself.
Did that clear up the confusion?
Me:
No. It clearly states on the pricing page that Claude Code is not included.
Fin:
That’s frustrating, <NAME>. The key detail is that Claude Code is included in Claude Pro, but API access via the Claude Console is not.
So if the page you’re looking at says “Claude Code is not included,” it’s almost certainly referring to Console/API access (the developer platform), not the Claude Code feature itself.
If you tell me the exact URL or screenshot of the page that says that, I can confirm which “Code” it’s talking about.
Fin:
On https://claude.com/pricing, Pro does include Claude Code. It’s listed right there under Pro’s “Everything in Free, plus” bullets: “Includes Claude Code.”
So if you’re seeing “not included” somewhere else on that page, it’s likely referring to something different than the Claude Code feature. The one thing Pro does not include is API access via the Claude Console (that’s a separate product).
Also, the Code tab in the desktop app requires a Pro (or Max/Team/Enterprise) subscription, and it may not show for everyone yet since it’s still rolling out.
Does that clear things up for you?
Me:
there is clearly an "x" on this page. escalate to a human immediately.
And then the cycle continues. I'm cancelling my Anthropic subscription; I'm sick of worrying about limits, getting banned, and the simply awful experience.
People really have to internalize that these things are expensive to run, and that there isn't enough compute to go around, like actually for real for real, which is likely the source of this. My guess is it's a temporary new-signup pause.
I just cancelled my plan, but still have access to Pro and Code apparently until my cycle would have renewed. Hopefully they get a clear signal from this, especially if more of us cancel with the intention to sign back up should they reverse this decision.
I’d be surprised if they’re running at less than 100% capacity after this. It’s just too useful to too many people for whom an $80/month increase is immaterial (I speculate)
And this is all exclusively about Anthropic. It's insane. On any other tech, there would be a consensus to wait until it's stable, but not AI - we go full throttle when it's AI.
Genuinely curious how people who have implemented this in serious companies are answering these questions, because my answer is to keep it the fuck out.
Saw this coming eventually. $20/month for autonomous agents running 24/7 was clearly not sustainable at API pricing. The part that's surprising is there's still no official announcement - just a quiet page edit.
The $20/mo plan never supported 24/7 autonomous agents. With Opus 4.5 and 4.6 I would hit resource limits after a reasonable amount of work, which corresponded to a variable amount of wall clock time.
This makes me think either they’re severely resource constrained and need to focus on “high value” customers, they’re bleeding money on inference, or their sales and marketing team is incompetent.
Regardless, this feels like a pretty big rug pull. Especially without a phase-out period and a real announcement. As someone using Claude Code on a personal hobby project to get a better feel for its capabilities, I’m not sure what to do now. I can’t justify the $100+/mo plans for a hobby project.
My choices are then:
- Code this project by hand, which would be fun but defeats the point of this being my agentic coding project.
- Find another model and use Codex or OpenCode or whatever.
- Put the project on a shelf till this shakes out.
This was never the case though. There's a per week and per 5 hour quota. If you exhaust either you have to wait for the reset. What they're doing makes no sense.
And yet they're very aware that Hacker News, etc exists and so the awareness and backlash would be instant. It's like they want to get a lower rating from the community. Maybe that's their solution for the resource issue: make enough people mad so they abandon their subscriptions.
“The standard answer here is no — Anthropic does not typically refund the unused portion of annual plans, and annual subscribers won’t see prorated refunds, retaining access for the full remaining period instead.
That said, your situation is a bit different — you’re not just canceling, you’re canceling because a feature you paid for was removed. That’s worth contacting Anthropic support directly about. Their support team can check your refund eligibility, and this kind of material change to the plan is exactly the case where a support escalation could go differently than a standard cancellation.
You can reach them through the in-app support messenger at support.claude.com or via the thumbs-down feedback button. I’d recommend explaining specifically that Claude Code was a factor in your annual plan purchase.”
Just do a chargeback, it's an easy W. I don't see why you'd want to continue doing business with Anthropic after a change to a 1-year contract out of cycle.
This does not explain the changes to documentation.
I can't trust Anthropic to manage their products in a way that supports my workflow.
No serious business uses Pro or Max, they are all on Anthropic API billing.
In fact with this move it is plainly obvious that Anthropic is moving compute from prosumers towards enterprise.
A/B testing people without their informed consent is immoral, unethical, and should be illegal.
So, what I'm saying is: I think a lot of companies align themselves with the cash first and then measure whether or not the negative image/user impact is manageable.
(in fact I know they operate this way.)
Sure. Let me just A/B test whether or not you'll respond positively or negatively to having your news delivered via push notification or delayed by 10 minutes.
I'm sure you would appreciate being tested on without your consent, just so that I can make an extra quick buck at your expense. Nothing immoral or unethical about it.
It seems weird to segment this way though. Surely it’s better to just give Sonnet to your bottom tier, rather than cut out the entire Claude Code product entirely?
Give folks a taste rather than lock the whole product behind a $100/mo plan.
Hope you can still resume working on your projects without AI.
his title should be changed to Head of Corporate Bullshitting
They're hitting the physical limits of energy production and chip supply for inference capacity. There's literally nothing that can be done but reduce usage to spread it around for now.
And with no free trial period on top of that, nobody is going to want to pay $100+ just to check it out. I can't imagine the conversion rate of that test being positive.
I imagine Anthropic is trying to see how many users they can push to higher tiers with these new squeezes.
I hate to say it but I imagine it will work.
It’s going to suck for me, because I had gotten used to ridiculously cheap tokens, but I guess the era of subsidized tokens is over.
Until they go public, we are all just guessing.
I, and everyone else I have asked, see this new updated sales UI; sounds like more than 2%.
This is concerning though. If I lose my current usage allotment at this price point I will likely switch to codex
Once they get people hooked, deskilled, and paying, the money ratchet only tightens.
And the companies KNOW that they're replacing engineers, or trying to. So each engineer replaced is X salary a year they now have available, so they can make it back in SaaS LLM tokens.
I thought inference was cheap so there was little marginal cost of a new subscriber.
I had a bit of an epiphany the other day thinking about these VC companies offering products to the public at unsustainable prices. It's classic anticompetitive behavior.
You imagine anticompetitive behavior to come from a monopoly because they can afford to burn money to drive competition out before they bring prices back to profitable but the whole VC burn is the same thing. People talk about it a lot without really saying it explicitly when they talk about moats. The only moat Anthropic and OpenAI have is money and they utilize it by offering products below cost.
The two companies are just trying to outlast the other one until they are the only one left.
So it's not really enshittification as much as you were previously getting the deal of a lifetime.
There are some predatory pricing laws, but they're much more narrow than most people believe. There is no law requiring things to be sold for more than it costs to produce.
I think it's funny that these topics make people angry enough to demand that we make laws to force companies to raise prices. We'll stick it to these companies by forcing them to charge us more! That will show them!
Such laws would be very bad for startups and newcomers because they'd be forced to price their new product higher than established competitors who have economies of scale. It would be a nice handout to the big companies.
This is dumping, as it's known in international trade. Maybe you don't realize it because you're American and have internalized it as business as usual.
These companies probably need to be forced to at least try to price their products at a level that would be sustainable long term.
I think it's funny that we're getting subsidized and discounted services and this makes some people so angry that the comment section is demanding laws that would force companies to charge us more.
Based on how much money Zitron has reported that these companies are losing on every subscription, this feels more like they're just trying to survive. In other words "ohshittification."
Brilliant coinage, if it’s yours, congrats!
My take: it is not enshittification to raise the price for a product whose demand outstrips its supply. That is basic economics. There are alternatives, it’s not a monopoly. If you think it’s the best product, then pay more for it.
Personally I would be perfectly content if the price of Max went up a bit and Pro no longer worked for CC if it meant that Max was faster and more stable.
Plenty of Pro subscribers never touch claude-code.
---
> For clarity, we're running a small test on ~2% of new prosumer signups. Existing Pro and Max subscribers aren't affected.
> When we launched Max a year ago, it didn't include Claude Code, Cowork didn't exist, and agents that run for hours weren't a thing. Max was designed for heavy chat usage, that's it.
> Since then, we bundled Claude Code into Max and it took off after Opus 4. Cowork landed. Long-running async agents are now everyday workflows. The way people actually use a Claude subscription has changed fundamentally.
> Engagement per subscriber is way up. We've made small adjustments along the way (weekly caps, tighter limits at peak), but usage has changed a lot and our current plans weren't built for this.
> So we're looking at different options to keep delivering a great experience for users. We don't know exactly what those look like yet - that's what we're testing and getting feedback on right now.
> When we do land on something, if it affects existing subscribers you'll get plenty of notice before anything changes. Will hear it from us, not a screenshot on X or Reddit.
https://x.com/TheAmolAvasare/status/2046724659039932830
---
Personally I love how they have increased everyone's quotas to counteract the Opus 4.7 tokenizer change a few days ago, but are immediately regretting it and trying to cut off subscription users.
If the subscriptions are unprofitable, then just communicate honestly, raise the price or lower limits for new subscribers transparently, and grandfather in existing users. That's what the GLM coding plan is doing, and it works fine for them. Don't ruin your reputation with opaque messaging and hidden changes. Lol
peak siliconbromaxxing
After seeing my own issues with 4.6 and the mega-post on Github about declining metrics in a decent dataset of claude chats by Stella Laurenzo at AMD (https://github.com/anthropics/claude-code/issues/42796), I downgraded to the $100 plan. Hallucinations. Laziness. Lack of thinking. The responses on those mega-threads from Anthropic rubbed me the wrong way in a "you're holding it wrong" kinda way.
In the past week, I downgraded back to the $20 plan because the Codex $20 plan on 5.4 was working so well for me.
Then throw in other oddball events like the source code leak, and the super positive Anthropic events like their interactions with the current administration. It's a wild ride.
I can't understand removing Claude Code from $20. I'm interested to see whether this is confirmed or not.
I'm a career engineer and I went from being one of their most outspoken proponents (at least within my circle) and now.... I'm not.
Anthropic really pissed me off with their harness crap. They're well within their rights but their communication over it was enough to get me to swap. I don't need extra hurdles when there's a perfectly valid alternative right there. They don't have the advantage they think they do.
But if you’re stuck with no better model, it’s better than local models and no models.
I have to say, OpenCode’s OpenUI has taught me what modern TUIs can be like. Claude’s TUI feels more like it’s been grown than designed. I’m playing around with TUI widgets trying to recreate and improve that experience
Claude's TUI is not a TUI. It's the most WTF thing ever: the TUI is actually a GUI. A headless browser ships with the TUI and, in real time, renders the entire screen, scrolls to the bottom, and converts that to text mode. There are several serious issues, and I'll mention two that utterly piss me off...
1. Insane "jumping" around where the text "scrolls back" up, then scrolls back down to your prompt: at this point, seeing the crazy hack that the TUI is, if you told me the text jumping around is because they're simulating mouse clicks on the scrollbar, I wouldn't be surprised. If I'm not mistaken, we've seen people "fixing" this by patching other programs (tmux?).
2. What you see in the TUI is not the output of the model. That is, to me, the most insane of it all. They're literally changing characters between their headlessly rendered GUI and the TUI.
> Claude’s TUI feels more like it’s been grown than designed.
"grown" or "hacked" are way too nice words for the monstrosity that Claude's TUI is.
Codex is described as a: "Lightweight coding agent that runs in your terminal". It's 95%+ Rust code. I wonder if the "lightweight" is a stab at the monstrosity that Claude's TUI is.
It goes into detail about llama-server args, quants to try, and layer/KV-cache splits. I plan to try the techniques there.
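As a reference point, the llama-server knobs being discussed look roughly like this. The model path and numbers are placeholders; the right `-ngl` split and KV-cache quant depend entirely on your VRAM, model size, and llama.cpp build:

```shell
# Illustrative llama.cpp server invocation; all values are placeholders.
# -m:   quantized GGUF model file (e.g. a Q4_K_M quant)
# -c:   context window in tokens
# -ngl: number of layers offloaded to the GPU (the rest stay in system RAM)
# --cache-type-k/v: quantize the KV cache to save VRAM
llama-server -m ./models/your-model-Q4_K_M.gguf \
  -c 16384 -ngl 35 \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  --port 8080
```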
Remember the old saying about boiling a frog? LLM corporations need to make most of their users pay hundreds per month, ASAP. This is Anthropic turning up the heat under the pot just a tiny little bit. Not the first time, and not the last.
Not according to their webpage: "Claude Code is included in your Pro plan. Perfect for short coding sprints in small codebases with access to both Sonnet 4.6 and Opus 4.7." [1]
[1]: https://claude.com/product/claude-code
See https://news.ycombinator.com/item?id=47854478
I think they need to at least have a 1-month introductory rate for the Max plan at $20, or devs that decide to try out agentic coding just won't go to Anthropic.
That leads to downstream impacts: when a company is deciding which AI coding tools to provide and the feedback management hears is that everyone is already used to (e.g.) Codex, then Anthropic starts losing the enterprise side of things.
Yes, confirmed directly from Anthropic's website. Claude Code is included in the Pro plan, described as perfect for short coding sprints in small codebases with access to both Sonnet 4.6 and Opus 4.7.
You said: Is Claude Code included with the Pro subscription?
Claude responded: Yes, Claude Code is included with the Pro plan ($20/month). You can use it in the terminal, desktop app, VS Code, JetBrains, and on the web.
The main caveat is that Claude Code draws from the same usage limits as your regular Claude chat — so heavy coding sessions can eat into your quota faster. If you find yourself hitting limits often, the Max plans ($100–$200/month) offer significantly more headroom.
That's a head scratcher :)
>$17 Per month with annual subscription discount ($200 billed up front). $20 if billed monthly.
I'm not challenging your opinion, but this is an outlier in the general current public opinion about it.
EDIT: it is also surprising to me that everyone seems to believe the people at Anthropic are simply incompetent and recklessly risking their good reputation, while very few consider the possible good reasons they might have for taking such drastic measures. And I don't think it's because of financial pressures in their case
When I threw it at a difficult issue in an iOS app, it, like GPT, came up with wrongly guessed explanations. It only found the issue after I had it instrument the app and add extensive logs. GPT 5.4 is usually the same.
Only that with GPT 5.4 it’s at least included in my subscription, while sending 3-4 messages to Opus 4.7 for this blew through my $20 plan limits and consumed $10 of extra usage on top. At that point I can’t help but bring up how much more expensive it is.
Rest assured OpenAI won’t want to leave that kind of money on the table…
At the speed everything is advancing, I don’t think it’s such an advantage. They all catch up to each other pretty fast. That’s why I prefer to pay Cursor and have access to all of them instead of being locked to a single one (even if that means losing some discounted credits). If they opened Mythos today at a good price, that would be something, but that’s not the case and it won’t happen.
That's not how you keep your customers. None of these agents have a moat; I moved away from Cursor when they started doing what Anthropic is doing now, and never went back, even though I had been a paying customer since the start.
- Rapid changes hurt the trust in your brand and product. In Google's case, using a new service became something you'll think about multiple times, as they are more likely to axe it than rivals or specialized equivalents.
- While the models currently have no clear winner, Anthropic’s core product is coding. But just as with Skype, IE, and Netscape, there can always be another game changer you cannot count on.
- The Pro plan is already limited for true agentic workflows. The limits now are so bad that a business that relies on it would need bigger plans.
- Anthropic is already in a delicate situation where many devs are frustrated. Dropping or crippling usage even more just means this sector (which I can only assume is a big chunk) would switch to competitors' tools that already try to compete.
- Local models, whether as Google sees it ("edge") or even further, would also take a bigger part in the future.
Realistically, the future of all this is that open models become good enough that LLM-as-a-service becomes a commodity, with a race to the bottom on cost. Given where we are today, I can easily see open-weight models in 2-3 years making Anthropic and OpenAI irrelevant for everyday development work. (My reasoning: if my coding agent is 10x smarter than I am, how would I know whether it did all the right things? I want something of roughly my intelligence for coding. I can see use cases, like independent pharma work, where supergenius-level intelligence is justified, but for coding, the ability of mere mortals to reason about the code is probably more important.)
After all, we may be just a data source and not their intended demographic all along.
If Anthropic’s move is confirmed, my guess is other coding agent providers might end up making similar moves.
Makes me curious about the internal thinking. One theory being they are in a capacity crisis and knocking Pro users off Claude Code is an emergency brake getting pulled. But an opposite theory is it's a revenue move and they think they have the lock in to pull it off. Especially if they are building up to IPO.
Interestingly, the Team subscription, still $20/month/seat, includes Claude Code. But you need a minimum of 5 seats. So it could be a way to force people off individual plans and into enterprise plans where things possibly scale better for them, especially IPO-wise. When one user wants it in a company, they probably go buy 5 seats.
My assumption is that people can very easily saturate Pro with Claude Code, so even though the quotas are lower (more than proportionally), utilization of those quotas is high enough that Pro is less profitable.
Of course, I don't have real insight into available compute, but the vibe slope seems to have dropped a bit, at the same time as new GPUs are being shoved into datacenters as fast as possible.
Loss of customers is the wrong direction
One thing is clear: Anthropic's communication and leadership are horrible. You don't launch or remove features like this. The way this is communicated and handled is something a mom-and-pop shop would do.
I dunno, I'm no business genius, but I think we're starting to see these companies try to find ways to make money instead of losing it.
Claude web is actually pretty good for dealing with random projects outside of code. I have a Home Assistant MCP server [1] behind a Cloudflare tunnel exposed to it that makes maintaining automations a lot easier.
[1] https://github.com/homeassistant-ai/ha-mcp
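For anyone curious how a remote MCP server like that gets wired up, here is a minimal sketch of a client-side config, assuming a Cloudflare-tunneled HTTPS endpoint. The URL, server name, and auth header are hypothetical, and the exact schema and file location vary by MCP client:

```json
{
  "mcpServers": {
    "home-assistant": {
      "url": "https://ha-mcp.example.com/mcp",
      "headers": {
        "Authorization": "Bearer <your-token>"
      }
    }
  }
}
```

The point is just that the client only needs a reachable HTTPS endpoint; the tunnel handles exposing the local Home Assistant instance without opening ports on your home network.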
It's funny that OpenAI, who in my eyes went for the general public rather than devs initially, seems to be semi-pivoting and catching all the fallout from Anthropic's recent behavior.
It is a massive bummer. Up until a few weeks ago I was hard pulling for Anthropic, and had been for quite some time; now I just don't care, and I hope something dope emerges quickly that signals I won't ever have to consider either of them.
“You asked, and we listened: Introducing Max Plus, our biggest plan yet, designed for those…” blah blah
Now I'm going to learn more about local models. I don't need to be as good as a frontier model. Good enough and free from all this drama is a win for me
Cursor was just acquired by SpaceX, so let's see what happens.
See [1] and [2] for an example of a support article that's had claude code removed as a Pro feature.
I guess this is the beginning of the end for subsidised model access, at least from Anthropic.
[1] https://support.claude.com/en/articles/8325606-what-is-the-p... [2] https://web.archive.org/web/20260420065828/https://support.c...
Wasn't this obvious from day 1 though? Can't see how anyone could've missed that.
They have now moved to be enterprise providers and don't need the cheap pro users as loss leaders anymore.
It makes no sense to do one of the higher tier plans unless they are directly generating you money.
Individual users barely matter. That's probably also the same group that decides to switch to Codex/Kimi/[whatever the hottest agent on any given day] on a whim, which Anthropic doesn't necessarily want to do business with.
Individuals are the ones that push for new tools at work though.
Source: what I witnessed at my company
If Anthropic is intent on losing the goodwill of the devs, they might not be happy with the consequences. Their product is quite commoditized at this point – the latest GPT, Gemini or GLM is just as good for most enterprise tasks.
It is over for the little guy - home enthusiasts and vibe coders. Too many of them saturating resources for Max users.
If you cannot afford a few-hundred-dollar subscription, go out and breathe fresh air. But if you can, watch where the ball is rolling: few-thousand-dollar subscriptions and even fewer programmers.
Something tells me that, cognitively, it's making us misjudge how productive it's making us.
It's clearly massively increasing output, but did the market already soak up all that productivity and now it's not compensated?
If your salary is $50k and Claude makes you 2x as productive, why aren't you earning $100k?
Why is it that anyone can't afford $200/mo if it's truly increasing worker productivity?
There seems to be a paradox here.
Personally I switched to Z.ai and GLM quite some time ago. I've not noticed any decrease in quality or quantity of my work.
Productivity-wise, employment is far more than code-production productivity in a vacuum, and productivity gains are rarely captured by employees (see the famous chart on worker productivity, where that correlation broke around 1970). I wouldn’t expect to see much in the next 1-2 years besides noticing effective teams increasing their velocity of features.
I think people in forums like complaining about things and aren’t representative of the broader set of people who are just using the tools, so no real paradox. For vast majority of tech jobs, $200/mo is still an absolute steal in terms of what these tools offer. Only the dullest of companies would not realize this.
Fwiw in the 80s-90s computers also didn’t really register in productivity metrics. Qualitative changes occur long before accurate measurement catches up.
Guess they’ve decided to move in the direction of allocating compute primarily to power users and enterprise.
But power users are not a sticky customer base. I just bought the ChatGPT Pro plan and would immediately switch over if the model performance is better and/or I get more compute.
Not too expensive
I heard they disabled signups for non-business accounts too.
Best forget about using Claude Opus models in Copilot.
Guess it democratizes it if you have money, huh?
While these tools stand to enable the democratization of productive capability in software engineering and other tasks (creating a renaissance for solopreneurs, let's say), what seems more likely to actually happen is that entrenched capital will become the only player with real access to this "knowledge as a utility" (was it Altman who called it that?).
We already see this playing out on two fronts: 1) the gradual reduction of services and 2) the DRAM market, where local-first tools (i.e., potential disruptors of the emerging "knowledge monopoly" created by the big AI firms) are being stifled by supply shortages. How many promising small-to-medium-sized competitors are being snuffed out of existence (or never starting) due to the insanity of the DRAM/storage/CPU (soon) markets?
The currently subsidized access that we have to the big Opus-like models will, in parallel, be gradually taken away until only the big players can afford it. And in the end what we will have is hyper-productive skeleton crews at a few consolidated firms performing (or selling expensive access to) basically all of the knowledge labor for society, with very little potential for disruption due to the hardware and "knowledge" scarcity engineered (in part, maybe) by this monopoly.
Not necessarily a closely held belief – just a hunch – which is why I want to see what parts of the picture I might be missing.
The real profitability is selling tokens to enterprise, and enterprise demand is growing so fast that they are short on the total amount of tokens they can generate per minute, and are prioritising rationally - enterprise gets a better experience - instead of optimizing for their lowest paying (and most loss leading) customers.
We are in a hardware crunch right now but that won't be forever, and eventually (likely 2028) we will get experiences like we got in January from pro-sumer accounts again.
It's easy to see this becoming a permanent position; the latest models and smarts are reserved for establishment members only, the riff-raff get the cast-offs. So the establishment is preserved and the status quo protected.
[0] I'm putting scare/irony quotes around this, but if the reporting is accurate, there is something to this; we built the internet on string and duct tape, it's not hard to see how a very smart AI could cut it to ribbons.
But there's competition out there -- the open-source chinese models. In their current form, I assume that will turn off many people but new models -- based on those -- are likely to appear. Also, OAI and Google will release new models and pick up the lost customers.
Opus 4.6 gives 2, maybe 3 questions before blowing through the Pro 5-hour limit as well. We are forced to use Sonnet, which makes the same mistakes over and over, and then to start trying other companies. To make matters worse, reusing old code to survive between credit resets re-introduced issues into the codebase that we had already fixed on our own and with other models.
Anthropic in just a few days has gotten me to try GLM 5.1, the new Kimi, and back to OpenAI. OpenAI also seems to introduce new bugs without being carefully micromanaged. The advantage Claude has is that the models are more careful and can refactor code instead of leading to bloat as they go. But the throttling happening now is breaking things and making the entire subscription unusable. I really hope they fix it soon.
One interesting variable is that I'm located in Vietnam while my coworkers are located in Norway and Europe.
To work around this issue I used Claude for coding with a Copilot subscription which was much cheaper and had virtually no rate limiting.
Copilot gives you some set amount of credits each month, but you can also pay as you go if you run out of credit which is much better than the 5 hour window crap claude code would give me.
The only opus model available now on copilot for some reason is 4.7 and it costs 7.5x tokens, while everything else is 1x, 0.33x or free.
But I switched to using GPT 5.4 medium for a month or so which I find very reasonable.
At this rate I fully anticipate being able to run a comparable stack on a 128GB Mac Studio using quants of newer-generation distilled OSS models in a year or two. Being able to ramble to a computer for an hour about features and technical philosophy then have it build a nearly-working app for $50 is an exciting feeling. There's still a long tail of productionization and fixing what the model didn't adhere to but it's still incredible.
I got the $20 GPT tier, and now I just use Claude to craft MD plan docs instead, then hand them off to GPT 5.4, and it has been working great. I can do about 4x as much work or so, based on my feelings (not accurate). If I have just small, simple stuff to do I might still fire those off with Sonnet, and that seems plenty viable, but as soon as it's an Opus-tier task I swap to this workflow.
A little annoying, as now I'm kind of managing both a .claude/ and an .opencode/ folder, but I just have the .opencode/ stuff reference the .claude/ stuff, so it's a little less bleh.
I've been keeping within my usage because I've been in a funk a bit, but when I was slightly more worried I'd sort of juggle whether Claude or GPT would handle writing some initial tests, as things did seem to be imbalanced otherwise. It seems like GPT just resets weekly usage throughout the week anyway, so it's probably no big deal.
Glad I’m not the only one!
I’ve been limited so often this week that I’ve set up half a dozen token compression tools in my workflow and had to do a crash course in token optimization.
Of course, it seems to only slightly delay the inevitable and doesn’t really solve the problem.
There is a lot of political capital to be earned by appearing to be "tough" on AI companies.
(Head of Growth @AnthropicAI)
> When we launched Max a year ago, it didn't include Claude Code, Cowork didn't exist, and agents that run for hours weren't a thing. Max was designed for heavy chat usage, that's it.
Anyone want to wager that this is 100% foreshadowing that Claude Code will be removed from the $100-200/month Max plans soon and go to something like API-only? Or only be available on a new $500-1,000/month plan? Restricting the $100-200/month ones to Claude.ai (website) or the Claude desktop app only?
Either way, doesn't seem good to say it's a small test and then start justifying it in this direction.
Existing subscriptions are not impacted according to Tweets from their team. It’s apparently an A/B test they’re rolling out.
If you actually wanted the $20/month Claude Code plan you may have just shot yourself in the foot.
It is pure speculation of course, but I don't have any other explanations on the stuff they are pulling in the last 2 months.
I have an unlimited-usage API billing plan through my dayjob, but for obvious reasons they don't allow piggybacking personal usage onto that. so I paid for the $20/mo personal plan as an easy and relatively cheap method of professional development / keeping my skills current. I don't particularly mind paying $20/mo, but I'm absolutely not paying $100/mo.
also, part of the reason I didn't mind paying for the personal subscription is that I liked having consistency between the tools I use for my dayjob and the ones I use for side projects. if that goes away, then I might as well switch away from Claude usage at work as well. I very much doubt Anthropic's revenue predictions for this change are taking things like that into account.
making a change like this without an announcement, just sneaky updates to product pages, is also an absolutely unforgivable thing to do, in terms of me trusting them as a company.
Would not be surprised to see OpenAI follow suit.
Or perhaps OpenAI's LLMs are just so more compute efficient that they can actually offer that sustainably...
"Codex will continue to be available both in the FREE and PLUS ($20) plans. We have the compute and efficient models to support it."
Both the compute, and the efficient models.
>We will do this every million users up to 10 million.
>Happy building!
https://x.com/sama/status/2041658719839383945
Last reset today, after the 4 million users milestone.
I think the only reason to do this would be that they just can't scale up to service the volume they have and need to cut down significantly on the total number of users. Seems also like a rough business proposition. Most of the pro plan users would probably migrate to a competitor at a similar price point (I know I will).
The only other possibility would be if they are losing too much money on the compute power and just can't offer it at that price anymore. But then upgrading the plan gives you more compute per dollar, so maybe they're just banking on people not actually using all of what they pay for?
I had previously thought that the inference cost of using a trained model was relatively low and that most costs went into training new models, but maybe that is less true with the more powerful newer models.
If it costs a ton more to serve Opus vs serving something like Kimi or Qwen, then I think most people just won't use the more expensive version for most things.
https://claude.com/product/claude-code
However, my company paid for my annual subscription, so maybe I'll ask our lawyers for advice - the only reason they paid for this was my access to CC, and with my usage the next tier wouldn't make sense, AND no one expects Anthropic not to nerf it too.
I would not jump to conclusions yet.
https://web.archive.org/web/20260421141017/https://claude.co...
Edit: fixed the url thanks to scq
Might have been taken down?
It would seem misleading to sell monthly, or even yearly, subscriptions under the guise Claude Code comes with the subscription, for it to only be yanked out underneath you. (Although depending who you ask, Anthropic have already done actions similar to this).
If they rugpull Claude code from my already paid for annual subscription I’ll have to issue a chargeback
Anthropic better get this sorted out with a proper product manager and marketing or they risk customers jumping to easier to understand platforms that are good enough.
https://www.anthropic.com/news/anthropic-raises-30-billion-s...
I reckon they'll be fine. Not agreeing or disagreeing with you, but they have enough customers who won't leave.
On the one hand, the people there are supposedly among the smartest on the planet. On the other hand, they consistently forget that they're dealing with LOYAL humans, and these humans prefer respectful communication beforehand instead of being messed with every other day.
My hope for reasonable behavior is to not handle it this way. Decrease limits and increase prices if you can't handle it and be _honest_ about it.
Are they just looking for a way to rationalize another hostile act? And already have expectations like:
- "minus 10% in pro signups" -> oh, let's drop those coders who won't pay anyway
- "minus X% in pro signups and plus X% in max" -> awesome, PAY UP!
Additionally I run a constant hacking contest between GPT and Claude. It’s a toy project and it simulates an attack/defense of a small corporate network.
Claude used to win pretty handily. Suddenly it’s started to lose 90% of the time. I thought GPT had gotten better but no, looking at the logs it seems that Claude is slower and more prone to running in circles. This is still the case when switching to Opus 4.7.
I don’t know what that means but it’s undoubtedly worse.
So I pay for Codex instead.
Why not with email?
In general Anthropic seems to be pretty bad at clearly communicating what is going on. I have both Claude Pro for Claude Code and ChatGPT Plus for Codex, and lately I've been reaching for Codex first more and more often... at least for the hobby stuff I'm using Claude/Codex on, they seem pretty much equivalent in terms of practical capability/usefulness.
Should we instead use a generic coding agent with a particular model and just pay per token?
Would it really be that hard for them to just make all of the changes and then do a redeploy rather than doing them incrementally? It's not like they're just editing the raw HTML sitting on the server manually, right? Actually, don't answer that, I'm not sure I even want to know the answer.
3 hours later…
If Anthropic keeps getting worse, try Amazon Kiro and other companies that run Claude on their own hardware.
It might be expensive and a worse experience compared to Claude Code, but at least the model itself is the "original flavor."
These days, it's hard to ask for much.
I could be connecting unrelated dots here, but it sure as hell seems quite coincidental to me.
From what I can tell Opus 4.7 is more resource-intensive than Opus 4.6 is more resource-intensive than Opus 4.5.
I realize this duplicates a lot of sentiment already in this thread but anyone here with pull at Anthropic please understand it will undo a lot of the goodwill that made Claude so successful in the first place.
I settled for the AMD rough equivalent. It’s not perfect but it can still handle most of the work. Now if only extra ram would come down in price… I find I need about 5 GB more than I have
Even the downtime would've been fine (as GitHub shows). Instead they're pissing it all away by letting employees make random announcements on random platforms.
Opus is fairly useless on Pro given the rate limits anyways.
> For clarity, we're running a small test on ~2% of new prosumer signups. Existing Pro and Max subscribers aren't affected.
https://x.com/TheAmolAvasare/status/2046724659039932830
April: "The fact that we're doing X isn't news because we're only starting to do X"
August: "The fact that we've fully rolled out X isn't news because we started X in April"
> Engagement per subscriber is way up. We've made small adjustments along the way (weekly caps, tighter limits at peak), but usage has changed a lot and our current plans weren't built for this.
https://xcancel.com/TheAmolAvasare/status/204672528250217304...
OpenCode and their Go plan will get you close if you're willing to put in the config work.
For when you do need the larger models Fireworks has a pretty generous 'Pass' that comes out to about $7 a week for some of the larger bleeding edge models.
Other than that Codex's $20 plan is still somewhat valuable though they keep reducing usage. Google's $20 plan will get you some Opus usage in Antigravity and a generous amount of Gemini. Not sure how long that will last as they've been tweaking pricing and planning language recently too.
I remember when they first added Claude Code to Pro — it was limited to Max initially — and my first thought was that it seemed kind of stupid, because at one fifth of my current limit, I would be hitting walls all the time...
But I’ve mostly been using it for gitops infrastructure in my homelab. I wonder if the token usage is lighter than if I were developing an application.
https://bsky.app/profile/mattgreenrocks.bsky.social/post/3mk...
Another example, I recently saw two people over on Twitter posting LLM responses at each other in a bitter argument about Vercel's security breach. They made no attempt to pretend they'd formulated the ripostes themselves, it was just screenshotting one-sided conversations... What's the point? They could've saved themselves the trouble by spawning two LLMs, naming them "John Doe" and "Fred Doe", then telling them to argue and post the name of the winner.
Disclaimer: I don't use Twitter, Bluesky, Mastodon, etc., so maybe it's not that deep.
Folks are assuming that only the $100 plan will include Claude Code access. I think a more likely scenario is that everyone will be able to use CC.
Opus 4.7 consumes tokens at a faster rate and folks were complaining that the Pro plan included too few credits for real work.
And Anthropic now allows `claude -p` (which invokes Claude Code) for 3rd-party agents like OpenClaw, which consume far more tokens by running autonomously, 24/7.
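For those who haven't seen it, this is roughly what that headless usage looks like. A sketch of a crontab entry, where the schedule, repo path, and prompt are all hypothetical (`-p` is Claude Code's non-interactive print mode, so each run consumes plan tokens just like an interactive session would):

```
# Hypothetical crontab entry: one autonomous Claude Code run per night at 02:00.
# `claude -p` executes a single prompt headlessly and prints the result;
# scheduling or looping invocations like this is how 24/7 agents rack up token usage.
0 2 * * * cd /home/me/myrepo && claude -p "Triage yesterday's failing tests and summarize findings in TRIAGE.md" >> /var/log/claude-cron.log 2>&1
```

Once a plan's quota is just an API-shaped pool for unattended jobs like this, it's easy to see how Pro utilization ends up far above what chat-style usage was priced for.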
Now, though, I don't dare spend tokens on basic note-taking with Sonnet, because I'm hitting the limit at a couple million tokens on the 20x plan, so they've really tightened the purse strings since November.
The Anthropic website has become inconsistent. Some places say Claude Code is included in the Pro plan, other pages don't.
That is the only way to avoid being held captive by Anthropic / Meta / Google.
Others in non-tech sectors are forced to use Copilot. Who knows what I would pay for a usable LLM out of my own pocket. Probably more than $200.
Business accounts are like max 6x accounts.
But it seems this is all in a state of flux.
And there’s the lovely asterisk at the bottom:
> Prices and plans are subject to change at Anthropic's discretion.
The million token context + reduced caching period + new models using more tokens made this a probably unpopular but perhaps unavoidable development.
There's a hard problem here balancing costs and experience. I'm afraid despite the bad experience for people that this is necessary and $20/month was just too big a loss to sustain.
Is there any marginal cost associated with a new subscriber?
I have always heard inference is cheap and the cost was in training, so I assumed any subscriber was making them money, just not enough to cover their insane fixed costs.
But I am just guessing.
I strongly suspect both are wrong.
I got rate limited after about 30 minutes of coding and was thinking, who the hell is going to work like this?
So they really seem to be running into extreme capacity issues now.
As someone who tries to manage usage for a small team: they just added Claude Code to the Standard Team seat, and now they are removing it!?
Not to mention that they will ban your entire organization because a bot decided you violated their TOS, with no communication and no way to contact anyone to understand what happened.
If this is real we are switching to OpenAI or Gemini; it is not worth all this nonsense.
That $20/month is not profitable? That Anthropic thinks that people are willing to pay a 400% markup without batting an eye? That Anthropic is desperately trying to clean up their burn rate? Why should we trust a company that can screw up basic PR this hard?
The folks hurt most by this are serious people in developing countries and young people starting out. Occasionally a dabbler turns into a serious user but I imagine that’s far less likely than people wish it were.
The value to companies who make these changes is they don’t have low value users or low value contributions to worry about, which has its own not insignificant overhead. In the age of AI slop everywhere we’re likely to see a lot more attempts to separate the wheat from the chaff.
Assuming this limitation applies to team seats in the same way, at $20/mo, businesses could afford to have everybody on the plan. Plenty of folks write only a few hours of code per day—or even per week in their job. These are still professionals, not dabblers.
That sucks, I guess I'll cancel my Claude account. Not paying 100 dollars. That's crazy
I thought we now had advanced tools which we could ask to do things like: "Remove all mention of Claude Code in the Pro plan, but not in the Pro Max plan".
But apparently the CGI-days called and asked the webmaster to manually edit .html files one by one?
Maybe this is coming next
"We've determined that claude code is too dangerous to your code base to release, so we are withdrawing it"
I know, crazy idea. When we told you they’re getting you hooked and would rug pull you called us permanent underclass or something.
Me: Why is your website saying Claude Code is not included in Claude Pro subscriptions?
"Fin", the slow, dopey AI agent: That’s confusing, but it usually comes down to what “Claude Code” means in that context. Claude Pro does include Claude Code (the Code tab in the desktop app, and Claude Code access for individuals).
What Pro does not include is API access via the Claude Console. Those are separate products: Pro is for chat, and the Console is for building apps with the API.
So if the website is saying “not included,” it’s likely referring to Console/API access, not the Claude Code feature itself.
Did that clear up the confusion?
Me: No. It clearly states on the pricing page that Claude Code is not included.
Fin: That’s frustrating, <NAME>. The key detail is that Claude Code is included in Claude Pro, but API access via the Claude Console is not.
So if the page you’re looking at says “Claude Code is not included,” it’s almost certainly referring to Console/API access (the developer platform), not the Claude Code feature itself.
If you tell me the exact URL or screenshot of the page that says that, I can confirm which “Code” it’s talking about.
Was that explanation helpful?
Me: https://claude.com/pricing
Fin: On https://claude.com/pricing, Pro does include Claude Code. It’s listed right there under Pro’s “Everything in Free, plus” bullets: “Includes Claude Code.” So if you’re seeing “not included” somewhere else on that page, it’s likely referring to something different than the Claude Code feature. The one thing Pro does not include is API access via the Claude Console (that’s a separate product).
Also, the Code tab in the desktop app requires a Pro (or Max/Team/Enterprise) subscription, and it may not show for everyone yet since it’s still rolling out.
Does that clear things up for you?
Me: there is clearly an "x" at this page. escalate to a human immediately.
and then the cycle continues. I'm cancelling my Anthropic subscription, sick of worrying about limits and getting banned and simply awful experience.
However, if you think you'll need the extra capacity of Max, it's $100, not $200.
Since then, I had to add:
"or won't let you log in?": https://github.com/anthropics/claude-code/issues/44257
"or makes stuff up?": https://dwyer.co.za/static/claude-mixes-up-who-said-what-and...
"or when it's down?": https://status.claude.com/incidents/6jd2m42f8mld
"or when you get banned?": https://bannedbyanthropic.com/
"or installs spyware?": https://www.thatprivacyguy.com/blog/anthropic-spyware/
And this is all exclusively about Anthropic. It's insane. On any other tech, there would be a consensus to wait until it's stable, but not AI - we go full throttle when it's AI.
Genuinely curious how people who have implemented this in serious companies are answering these questions, because my answer is to keep it the fuck out.
This makes me think either they’re severely resource constrained and need to focus on “high value” customers, they’re bleeding money on inference, or their sales and marketing team is incompetent.
Regardless, this feels like a pretty big rug pull. Especially without a phase-out period and a real announcement. As someone using Claude Code on a personal hobby project to get a better feel for its capabilities, I’m not sure what to do now. I can’t justify the $100+/mo plans for a hobby project.
My choices are then:
Fun times.

This was never the case though. There's a per-week and a per-5-hour quota. If you exhaust either, you have to wait for the reset. What they're doing makes no sense.
https://claude.ai/share/1a4293bd-b2d4-41b7-a887-eb42b3ae8b6e
“The standard answer here is no — Anthropic does not typically refund the unused portion of annual plans, and annual subscribers won’t see prorated refunds, retaining access for the full remaining period instead. That said, your situation is a bit different — you’re not just canceling, you’re canceling because a feature you paid for was removed. That’s worth contacting Anthropic support directly about. Their support team can check your refund eligibility, and this kind of material change to the plan is exactly the case where a support escalation could go differently than a standard cancellation. You can reach them through the in-app support messenger at support.claude.com or via the thumbs-down feedback button. I’d recommend explaining specifically that Claude Code was a factor in your annual plan purchase.”