It's interesting that Amazon don't appear interested in acquiring Anthropic, which would have seemed like something of a natural fit given that they are already partnered, Anthropic have apparently optimized (or at least adapted) for Trainium, and Amazon don't have their own frontier model.
It seems that Amazon are playing this much like Microsoft - seeing themselves as more of a cloud provider, happy to serve anyone's models, and perhaps only putting a moderate effort into building their own models (which they'll be happy to serve to those who want that capability/price point).
I don't see the pure "AI" plays like OpenAI and Anthropic being able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
LOL, of course they don't want to own Anthropic; otherwise they themselves would be responsible for coming up with the tens of billions in Monopoly money that Anthropic has committed to pay AMZN for compute in the next few years. Better to take an impressive-looking stake and leave some other idiot holding the buck.
Now I’m no big city spreadsheet man but I bet you “company that owes us billions went belly up” looks better on the books than “company we bought that owes us billions went belly up.”
It’s pretty crazy that Amazon’s $8B investment didn’t even get them a board seat. It’s basically a lot of cloud credits though. I bet both Google and Amazon invested in Anthropic at least partially to stress test and harden their own AI / GPU offerings. They now have a good showcase.
This is my thought too. They de-risked choosing AWS as a platform for any other AI startup. If the hype continues, AWS will get their 30% margin on something growing like a rocket; if it doesn't, at least they didn't miss the boat.
Yeah. I bet there's a win-win in the details where it sounds like a lot of investment, so both parties look good, but there wasn't actually much real risk.
Like if I offered you $8 billion in soft serve ice cream so long as you keep bringing birthday parties to my bowling alley. The moment the music stops and the parents want their children back, it’s not like I’m out $8 billion.
Amazon also uses Claude under the hood for their "Rufus" shopping search assistant which is all over amazon.com.
It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
> It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
Interesting, I tried it with the chatbot widget on my city government's page, and it worked as well.
I wonder if someone has already made an openrouter-esque service that can connect Claude Code to this network of chat widgets. There are enough of them to spread your messages across and easily cover an entire Claude Pro subscription.
A childhood internet friend of mine did something similar to that, but for sending SMSes for free using the telco websites' built-in SMS forms. He even had a website showing how much he had saved his users, at least until the telcos shut him down.
Well, that was phreaking, circa 2003-05 (no clue exactly when anymore), around the same time you could still get free phone calls on the pay phones in the library or a hotel lobby.
Not sure for Claude Code specifically, but in the general case, yes - GPT4Free and friends.
I think if you run any kind of freely-accessible LLM, it is inevitable that someone is going to try to exploit it for their own profit. It's usually pretty obvious when they find it because your bill explodes.
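If you run one of these things, the cheap countermeasure is exactly that "bill explodes" signal, just automated: alert when a widget's hourly token spend jumps far above its own trailing baseline. A rough sketch in Python; the 10x threshold, the 24-hour window, and the per-widget keying are all assumptions, not any real provider's API:

    from collections import defaultdict, deque

    # Trailing hourly token totals per widget/API key (24-hour window).
    history = defaultdict(lambda: deque(maxlen=24))

    def spike(key: str, tokens_this_hour: int, factor: float = 10.0) -> bool:
        """True when usage jumps far above this key's trailing average."""
        past = history[key]
        baseline = sum(past) / len(past) if past else None
        past.append(tokens_this_hour)
        return baseline is not None and tokens_this_hour > factor * baseline

    # Simulate a quiet widget that suddenly gets farmed by someone's agent.
    for hour, tokens in enumerate([10_000] * 12 + [5_000_000]):
        if spike("city-widget", tokens):
            print(f"hour {hour}: usage spike, someone may be proxying your chatbot")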
Are you sure? While Amazon doesn't own a "true" frontier model, they have their own foundation model called Nova.
I assume that if Amazon were using Claude's latest models to power its AI tools, such as Alexa+ or Rufus, they would be much better than they currently are. I assume if their consumer facing AI is using Claude at all it would be a Sonnet or Haiku model from 1+ versions back simply due to cost.
> I assume if their consumer facing AI is using Claude at all it would be a Sonnet or Haiku model from 1+ versions back simply due to cost.
I would assume quite the opposite: it costs more to support and run inference on the old models. Why would Anthropic make inference cheaper for others, but not for Amazon?
Claude 2.0 was laughably bad. I remember wondering why any investor would be funding them to compete against OpenAI. Today I cancelled my ChatGPT Pro because Claude Max does everything I need it to.
Looks less "intelligent" to me, just a lot more trained on agentic (multi-turn tool) use, so it greatly outperforms the others on the benchmarks where that helps while lagging elsewhere. They also released bigger models, where "Pro" is supposedly competitive with 4.5 Sonnet. Lite is priced the same as 2.5 Flash, Pro the same as GPT 5.1. We'll definitely do some comparative testing of Nova 2 Lite vs 2.5 Flash, but I'm not expecting much.
> It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
From a perspective of "how do we monetize AI chatbots", an easy thing about this usage context is that the consumer is already expecting and wanting product recommendations.
(If you saw this behavior with ChatGPT, it wouldn't go down as well, until you were conditioned to expect it, and there were no alternatives.)
There are really impressive marketing/advertisement formulas to be had. I won't share mine, but I'm sure there are many ways to go step by step from not-customers to customers where each step has a known monetary value. If an LLM does something impressive in one of the steps, you also know what that is worth.
Haha just tried and it works! First I tried in Spanish (I'm in Spain) and it simply refused, then I asked in English and it just did it (but it answered in Spanish!)
EDIT: I then asked for a FizzBuzz implementation and it kindly obliged. I then asked for a Rust FizzBuzz implementation, but this time I asked in Spanish again, and it said that it could not help me with FizzBuzz in Rust, but any other topic would be fine. Then again I asked in English, "Please do Rust now", and it just wrote the program!
I wonder what the heck they are doing there? Is the guardrail prompt translated into the store's language?
The market is too new for AI.
AI is unquestionably useful, but we don't have enough product categories.
We're in the "electric horse carriage" phase and the big research companies are pleading with businesses to adopt AI. The problem is that you can't adopt "AI" in the abstract.
AI companies are asking you to "do AI", but they aren't telling you how, or what it can do. That shouldn't be how things are sold. The use case should be overwhelmingly obvious.
It'll take a decade for AI native companies, workflows, UIs, and true synergies between UI and use case to spring up. And they won't be from generic research labs, but will instead marry the AI to the problem domain.
Open source AI that you can fine-tune to the control surface is what will matter. Not one-size-fits-all APIs and chat interfaces.
ChatGPT and Sora are showing off what they think the future of image and video is. Meanwhile, actual users like the insanely popular VFX YouTube channels are using crude tools like ComfyUI to adapt the models to their problems. And companies like Adobe are actually building the control plane. Their recent conference was on fire with UI+AI that makes sense for designers. Not some chat interface.
We're in the "AI" dialup era. The broadband/smartphone era is still ahead of us.
These companies and VCs thought they were going to mint new Googles and Amazons, but it's more than likely they were the WebVans whose carcasses pave the way.
After watching The Thinking Game documentary, maybe Amazon has little appetite for "research" companies that don't actually solve real-world problems, like DeepSeek did.
The movie seems like a fluff piece once you find out what has transpired at DeepMind since, from slowing down publishing material to "selling out to product", which the founder was hell-bent against in the documentary.
Bezos is playing it smart: sell shovels to all of the gold diggers. If he partners with one of the gold diggers he won't be able to sell shovels to the remainder.
> It seems that Amazon are playing this much like Microsoft - seeing themselves as more of a cloud provider, happy to serve anyone's models, and perhaps only putting a moderate effort into building their own models (which they'll be happy to serve to those who want that capability/price point).
Or, as a slight variation of that, they think the underlying technology will always be quickly commoditized and that no one will ever be able to maintain much of a moat.
I think anyone sane will have reached the same conclusion a long time ago.
It's a black box with text input/output; that's not a very good moat.
Especially given that DeepSeek-type events can happen, because you can just train off of your competitors' outputs.
I've tried out Gemini 2.5/3 and it generally seems to suck for some reason (problems with lying/hallucinating and following instructions), but ever since Bard first came out I've thought Google would have the best chance of winning: they have their own TPUs, YouTube (insane video/visual/audio data), Search (indexed pages), and their own cloud/DCs, and they can stick it into Android/Search/Workspace.
Meanwhile, OpenAI has no existing business, they only have API/subscriptions as revenue, and they're relying on Nvidia/AMD.
I really wonder how things will look once this gold rush stabilizes
They're likely just waiting for the eventual crash so they can buy at the resulting fire sale. Microsoft has done a very good job of investing in the space enough to see a potentially lucrative payout while managing the risk enough to not be sunk if it doesn't pan out.
> It's interesting that Amazon don't appear interested in acquiring Anthropic
1. Why buy the cow when you can get the milk for free?
2. Amazon doesn't appear interested in acquiring Anthropic _at its current valuation_. I would be surprised if it's not available for acquisition at 1/10th its current price in the next 3-5 years
AI isn't going anywhere, but "prop model + inference" is far from a proven business model.
It's safe to assume that a company like Anthropic has been getting (and rejecting) a steady stream of acquisition offers, including from the likes of Amazon, from the moment they became prominent in the AI space.
I think Claude Code is the moat (though I definitely recognize it's a pretty shallow moat). I don't want to switch to Codex or whatever the Gemini CLI is, I like Claude Code and I've gotten used to how it works.
Again, I know that's a shallow moat - agents just aren't that complex from a pure code perspective, and there are already tools that you can use to proxy Claude Code's requests out to different models. But at least in my own experience there is a definite stickiness to Claude that I probably won't bother to overcome if your model is 1.1x better. I pay for Google Business or whatever it's called primarily to maintain my vanity email and I get some level of Gemini usage for free, and I barely touch it, even though I'm hearing good things about it.
(If anything I'm convincing myself to give Gemini a closer look, but I don't think that undermines my overarching (though slightly soft) point).
My usage has gone from:
1. using Claude Code exclusively (back when it really was on another level from the competition) to
2. switching back and forth with CC using the Z.ai GLM 4.6 backend (very close to a drop-in replacement these days) due to CC massively cutting down the quota on the Claude Pro plan to
3. now primarily using OpenCode with the Claude Code backend, or Sonnet 4.5 Github Copilot backend, or Z.ai GLM 4.6 backend (in that order of priority)
OpenCode is so much faster than CC even when using Claude Sonnet as the model (at least on the cheap Claude Pro plan; can't speak for Max). But it can't be entirely due to the Claude plan rate limiting, because it's way faster than CC even when using Claude Code itself as the backend in OC.
I became so ridiculously sick of waiting around for CC just to like move a text field or something, it was like watching paint dry. OpenCode isn't perfect but very close these days and as previously stated, crazy fast in comparison to CC.
Now that I'm no longer afraid of losing the unique value proposition of CC my brand loyalty to Anthropic is incredibly tenuous, if they cut rate limits again or hurt my experience in the slightest way again it will be an insta-cancel.
So the market situation is much different than the early days of CC as a cutting edge novel tool, and relying on that first mover status forever is increasingly untenable in my opinion. The competition has had a long time to catch up and both the proprietary options like Codex and open source model-agnostic FOSS tools are in a very strong position now (except Gemini CLI is still frustrating to use as much as I wish it wasn't, hopefully Google will fix the weird looping and other bugs ... eventually, because I really do like Gemini 3 and pay for it already via AI Pro plan).
Google Code Assist is pretty good. I had it create a pretty comprehensive inventory tracking app within the quota that you get with the $25 Google plan.
Google had PageRank, which gave them much better quality results (and they got users to stick with them by offering lots of free services, like Gmail, that were better than existing paid services). The difference was night and day compared to the best other search engines at the time (WebCrawler was my go-to, then sometimes AltaVista). The quality difference between "foundation" models is nil. Even the huge models they run in datacenters are hardly better than local models you can run on a machine with 64GB+ RAM (though faster, of course). As Google grew it got better and better at giving you good results and fighting spam, while other search engines drowned in spam and were completely ruined by SEO.
Right, everything before PageRank was more like the Yellow Pages than a search engine as we know it today. Google also had a patent on it, so it's not like other people could simply copy it.
Google was also way more minimal (and therefore faster on slow connections) and it raised enough money to operate without ads for years (while its competitors were filled with them).
Not really comparable to today, when you have 3-4 products which are pretty much identical, all operating under a huge loss.
Are they profitable? (no)
Is Claude Code even running at a marginal profit? (who knows)
Is the marginal profit large enough to pay for continued R&D to stay competitive? (no)
Does Claude Code have a sustainable advantage over what Amazon, Microsoft and Google can do in this space using their incumbency advantage and actual profits and using their own infrastructure?
Assuming by "they" you mean current shareholders (who include Google, Amazon, and VCs): if they are already selling at least in part, why wouldn't at least some of them be willing to sell their entire stakes?
> They could make more money keeping control of the company and have control.
I get the feeling Amazon wants to be the shovel seller for the AI rush rather than be a frontier model lab.
There is no moat in being a frontier model developer. A week, a month, or a year later there will be an open-source alternative which is about 95% as good for most tasks people care about.
I don't know how much they are spending to be fair.
I am basing my observation on the noises they are making.
They did put out a model called Nova but they are not drumming it up at all.
The model page makes no claims of benchmarks or performance.
There are no signs of them poaching talent.
Their CEO has not been in the press singing praises about AI unlike every big tech CEO.
Maybe they have a skunk-works team on it but something tells me they are waiting for the paint to dry.
Sort of. You can do what Zuck did; give your shares more votes, so you stay in control. (He owns 13% of the shares, but more than 50% of the voting power.) That's less doable with an acquisition.
In one case your ownership is diluted by maybe 10%, and you keep full decision making power and everything else. In the other it is diluted by 100% and you are now an employee. They are very different outcomes.
Why would you take on that burn rate when you can invest, get the investment back over time in cloud spend, and maybe make off like bandits when they IPO?
why exit now and become a stuffed AI driven animal when you can keep running this ship yourself, doing your dream job and getting all the woos and panties?
It is spending a lot of money to do the same thing (selling the shovels), and gaining maybe a bit bigger cut if the bubble doesn't burst too violently.
Anthropic is a $1T company in the making (by 2030), already raised their last round at ~$200B valuation. Do you really think Amazon can acquire them? They already invested a lot of money in them and probably own at least 20% of Anthropic, which was the smartest thing Jassy did in a while. Not to mention, if Adobe wasn't allowed to buy Figma, do you think Amazon will be allowed to buy Anthropic? No way it's going to be approved.
> I don't see the pure "AI" plays like OpenAI and Anthropic being able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
One thing you're right about: Anthropic isn't surviving, it's thriving. Probably the fastest-growing revenue in history.
> margins are either good or can soon become good.
Their margins are negative and every increase in usage results in more cost. They have a whole leaderboard of people who pay $20 a month and then use $60,000 of compute:

https://www.viberank.app
That site seems to date from the days before there were real usage limits on Claude Code. Note that none of the submissions are recent. As such, I think it's basically irrelevant - the general observation is that Claude Code will rate limit you long, long before you can pull off the usage depicted so it's unlikely you can be massively net-profit-negative on Claude Code.
Do you mind giving a bit more detail in layman's terms, assuming the $60k per subscriber isn't hyperbole? Is that the total cost of the latest training run amortized per existing subscriber, plus the inference cost to serve that one subscriber?
If you tell me to click the link, I did, but backed out because I thought you'd actually be willing to break it down here instead. I could also ask Claude about it I guess.
It counted up the tokens that users on “unlimited” Max/Pro plans consumed through CC, and calculated what it would cost to buy that number of tokens through the API.
$60K in a month was unusual (and possibly exaggerated); amounts in the thousands were not. For that, people would pay $200 on their Max plan.
Since that bonanza period Anthropic seem to have reined things in, largely through (obnoxiously tight) weekly consumption limits for their subscription plans.
It’s a strange feeling to be talking about this as if it were ancient history, when it was only a few months ago… strange times.
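For anyone who wants to sanity-check the viberank-style math, it really is just token counts multiplied by published per-token API rates. A back-of-the-envelope sketch; the usage figures are invented and the prices are rough Opus-class list rates, so treat every number as an assumption:

    # Hypothetical heavy month of agentic Claude Code usage.
    input_tokens = 2_000_000_000
    output_tokens = 150_000_000

    # Assumed API list prices, $ per million tokens (Opus-class, approximate).
    PRICE_IN, PRICE_OUT = 15, 75

    cost = input_tokens / 1e6 * PRICE_IN + output_tokens / 1e6 * PRICE_OUT
    print(f"API-equivalent cost: ${cost:,.0f} vs a $200/month Max plan")
    # -> API-equivalent cost: $41,250 vs a $200/month Max plan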
So they're now putting in aggressive caps, and the other two paths they have to close the gap are driving their cost per token way down and/or having the user pay many multiples of their current subscription. It's not odd for any business to expect their costs to decrease substantially and their pricing power to increase, but even if the gap is "only" low thousands vs. $200, that's... significant. Thanks for the insight.
It was so bad a lot of folks thought it was fake when it was first released! People couldn't believe WeWork was actually that clueless about how such a thing would land.
Yes, those using the tools use the tools, but I don't really see those developers absolutely outpacing the rest of the developers who still do it the old-fashioned way.
I think you're definitely right, for the moment. I've been forcing myself to use/learn the tools almost exclusively for the past 3-4 months and I was definitely not seeing any big wins early on, but improvement (of my skills and the tools) has been steady and positive, and right now I'd say I'm ahead of where I was the old-fashioned way, but on an uneven basis. Some things I'm probably still behind on, others I'm way ahead. My workflow is also evolving and my output is of higher quality (especially tests/docs). A year from now I'll be shocked if doing nearly anything without some kind of augmented tooling doesn't feel tremendously slow and/or low-quality.
I think inertia and determinism play roles here. If you invest months in learning an established programming language, it's not likely to change much during that time, nor in the months (and years) that follow. Your hard-earned knowledge is durable and easy to keep up to date.
In the AI coding and tooling space everything seems to be constantly changing: which models, what workflows, what tools are in favor are all in flux. My hesitancy to dive in and regularly include AI tooling in my own programming workflow is largely about that. I'd rather wait until the dust has settled some.
totally fair. I do think a lot of the learnings remain relevant (stuff I learned back in April is still roughly what I do now), and I am increasingly seeing people share the same learnings; tips & tricks that work and whatnot (i.e. I think we’re getting to the dust settling about now? maybe a few more months? definitely uneven distribution)
also FWIW I think healthy skepticism is great; but developers outright denying this technology will be useful going forward are in for a rude awakening IMO
that's not what he claimed, just to be clear. I'm too lazy to look up the full quote, but not too lazy to comment that this is A) out of context and B) mis-phrased so as to entirely misconstrue the already taken-out-of-context quote
>"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code," Amodei said at a Council of Foreign Relations event on Monday.
>Amodei said software developers would still have a role to play in the near term. This is because humans will have to feed the AI models with design features and conditions, he said.
>"But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then, we will eventually reach the point where the AIs can do everything that humans can. And I think that will happen in every industry," Amodei said.
you’re once again cutting the quote short — after “all of the code” he has more to say that’s very important for understanding the context and avoiding this rage-bait BS we all love to engage in
edit: sorry you mostly included it paraphrased; it does a disservice (I understand it’s largely the media’s fault) to cut that full quote short though. I’m trying to specifically address someone claiming this person said 90% of developers would be replaced in a year over a year ago, which is beyond misleading
edit to put the full quote higher:
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
uh, it proves that the original comment I responded to is extremely misleading (which is my only point here); the CEO did not say 90% of developers would be replaced, at all
Of course they are. The two things aren’t contradictory at all, in fact one strongly implies the other. If AI is writing 90% of your code, that means the total contribution of a developer is 10× the code they would write without AI. This means you get way more value per developer, so why wouldn’t you keep hiring developers?
This idea that “AI writes 90% of our code” means you don’t need developers seems to spring from a belief that there is a fixed amount of software to produce, so if AI is doing 90% of it then you only need 10% of the developers. So far, the world’s appetite for software is insatiable and every time we get more productive, we use the same amount of effort to build more software than before.
The point at which Anthropic will stop hiring developers is when AI meets or exceeds the capabilities of the best human developers. Then they can just buy more servers instead of hiring developers. But nobody is claiming AI is capable of that so far, so of course they are going to capitalise on their productivity gains by hiring more developers.
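To spell out the arithmetic in that first paragraph (a toy model that assumes each developer's hand-written output stays constant while AI fills in the rest):

    human_loc = 1_000             # hand-written lines per month, made-up unit
    ai_share = 0.90               # "AI writes 90% of the code"
    total_loc = human_loc / (1 - ai_share)
    print(total_loc / human_loc)  # -> 10.0, i.e. 10x output per developer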
If AI is making developers (inside Anthropic or out) 10x more productive... where's all the software?
I'm not an LLM luddite, they are useful tools, but people with vested interests make a lot of claims that if they were true would result in a situation where we should already be seeing the signs of a giant software renaissance... and I just haven't seen that. Like, at all.
I see a lot more blogging and influencer peddling about how AI is going to change everything than I do actual signs of AI changing much of anything.
How much software do you think happened at Google internally during its first 10 years of existence that never saw outside light? I imagine that they have a lot of internal projects that we have no idea they even need.
I was thinking this is going to happen because last night I got an email about them fixing how they collect sales taxes. Having been part of a couple of IPO/acquisitions, I thought to myself: "Nobody cares about sales taxes until they need to IPO or sell."
The employees/VCs of companies that IPO'd in 1999 and early 2000 cashed out, leaving bag holders. The companies that IPO'd in 2000/2001 had a mixed bag. The six-month lockup had many employees salivating but unable to cash out; it all depended on the timing of the thing. This time around there are private markets that apparently are allowing employees to become liquid. Nevertheless, earlier is better for startups to IPO, particularly when the tide appears to be turning.
I love Claude, but looking at Google it seems like it will just be a matter of time before Gemini is the better product, judging by how much Google have improved their AI game over the last couple of months. I'm putting my money on Google; I assume the reason Anthropic is doing an IPO right now is to cash in on the investment before Google surpasses them.
Opus 4.5 is good. At least in Cursor it’s much better than Gemini 3 Pro for writing a lot of code autonomously: faster and calls tools better.
That said Gemini is still very, very good at reviews, SQL, design and smaller (relatively) edits; but today it is not at all obvious that Google is going to win it all. They’re positioned very well, but execution needs to be top notch.
> much better than Gemini 3 Pro for writing a lot of code
I know that people here are myopically focussed on code, but that's not what the majority of people use AI for.
If Opus 4.5 is better than Gemini 3 for code, but the same or worse for most other uses (which seems to be the case according to benchmarks), that's great for us but terrible for Anthropic.
Claude still can't even draw basic pictures, for example.
There have been multiple model generations now where Anthropic have proven that they're ahead of everyone at developing LLMs for coding; if anything the gap has widened with Opus 4.5.
Google are deep in the enshittification spiral and can't help themselves kneecapping Gemini or imposing strict limits. Anthropic seem to at least be a bit more customer-centric.
Anthropic's incessant cuts to CC rate limits/quota on the Claude Pro plan have nearly pushed me to cancel.
If anything they're far ahead of Google on the enshittification schedule (Google still gives out API keys for free Gemini usage and a free tier on Gemini CLI, although the CLI is still pretty shaky, unfortunately, but that's a different issue).
It also doesn't help that CC will stop working literally in the middle of a task with zero heads-up; at best I get the 90% warning and then, 30 seconds and about two messages later, it stops working claiming I hit 100%, during the same task. I'm truly baffled by how they've managed to make the warnings as useless and aggravating as possible in CC. It routinely shuts down while the repo is in a broken state, so I have to ask Codex to read the logs and piece things back together to continue working.
Am I? I'm just comparing the relative degree of enshittification, implicit in that is nothing will last forever, Gemini freebies included once they get their fill of training data. But I was surprised to see Anthropic used as an example of something that hasn't enshittified, considering how in less than 6 months my Claude plan went from fantastic value to constant rate limiting.
I think people here on HN have a front-seat perspective on the value of Anthropic & Claude because it's simply the best/most consistent coding-assistant AI. I don't think there's broad awareness of Anthropic's market edge at the moment; the IPO may be a good time to invest.
Is it really that much better than Claude? Claude has been my daily driver for a couple years now and I imagine if Gemini was THAT much better, I would’ve heard about it by now.
Honestly, these IPOs are likely to kill the market. Once the necessary disclosures are out, and the worst-case math people are assuming turns out to have been way more optimistic than the actual truth, the entire market will likely crash since the money is so spread out. So far there has been zero good news from an investment perspective out of LLM-centered companies, outside of what are ultimately just complex financially engineered investments.
If they get into the S&P 500 at a $300B market cap that puts them at #30, just behind Coca-Cola. They'll make up about half a percent of the index and then will have a ready supply of price-insensitive buyers in the form of everybody who puts their retirement fund into an index fund on autopilot.
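Back-of-the-envelope on that weight; the total S&P 500 capitalization here is my assumption (roughly $55T), not a quoted figure:

    anthropic_cap = 300e9
    sp500_total_cap = 55e12  # assumed total index market cap
    print(f"{anthropic_cap / sp500_total_cap:.2%} of the index")  # -> 0.55%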
Well, they'll hit the requirements for company size and country of domicile, but they don't yet meet the other requirements: profitability, and a minimum of 12 months after an IPO before they have a chance of being added.
As to the size of the bump they'll get, there isn't a single rule of thumb, but larger-cap companies tend to get a smaller bump, which you'd expect. I've seen models estimate a 2-5% bump for large companies, a 4-7% bump for mid-level, and 6-12% for "small" companies under a $20 billion market cap.
> The sum of the most recent four consecutive quarters’ Generally Accepted Accounting Principles (GAAP) earnings (net income excluding discontinued operations) should be positive as should the most recent quarter.
The S&P 500 is a capitalization*-weighted index, hence it is very price sensitive.
Everybody who puts their retirement fund into an index fund is buying the index fund without relation to its price (aka price insensitive). But the index fund itself is buying shares based on each company's relative performance, hence the index fund is price sensitive. That is evidenced by companies falling out of the S&P 500 and even failing.
>The goal of float adjustment is to adjust each company’s total shares outstanding for long-term, strategic shareholders, whose holdings are not considered to be available to the market.
The S&P 500 is inversely price sensitive, as a capitalization-weighted index. Normally you want to buy low and sell high. An S&P500 index fund buys more of high-priced stocks and sells the low-priced ones, by definition. The highest market caps are the stocks with the highest prices (adjusted for number of shares outstanding, of course).
For most ordinary investors, this doesn't really matter, because you put your money into your retirement fund every month and you only take it out at retirement. But if you're looking at the short term, it absolutely matters. I've heard S&P 500 indexing referred to as a momentum investment strategy: it buys stocks whose prices are going up, on the theory that they will go up more in the future. And there's an element of a self-fulfilling prophecy to that, since if everybody else is investing in the index fund, they also will be buying those same stocks, which will cause them to go up even more in the future.
If you want something that buys shares based on each company's relative performance, you want a fundamental-weighted index. I've looked into that and I found a few revenue-weighted index funds, but couldn't find a single earnings-weighted index fund, which is what I actually want. Recommendations wanted; IMHO the S&P 500 is way overvalued on fundamentals and heavily exposed to certain fairly bubbly stocks (the Mag-7 alone make up 35% of your index fund, and one of them is my employer, and all of them employ heavily in my geographic area and are pushing up my home value), so I've been looking for a way to diversify into companies that actually have solid earnings.
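The difference between the two weighting schemes is easy to see on toy numbers (all figures below are invented for illustration):

    # name: (market_cap_$B, trailing_earnings_$B) -- made-up companies
    companies = {
        "BigTech": (3_000, 100),
        "Staple": (300, 15),
        "Value": (100, 12),
    }
    total_cap = sum(cap for cap, _ in companies.values())
    total_earn = sum(earn for _, earn in companies.values())
    for name, (cap, earn) in companies.items():
        print(f"{name}: cap weight {cap / total_cap:.1%}, "
              f"earnings weight {earn / total_earn:.1%}")
    # BigTech: cap weight 88.2%, earnings weight 78.7%
    # Staple: cap weight 8.8%, earnings weight 11.8%
    # Value: cap weight 2.9%, earnings weight 9.4%

The richly valued company dominates either way, but the earnings-weighted version shifts meaningful weight toward the cheap earners.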
While it is true that being added to the S&P 500 can lead to an increase in demand, and hence cause the index fund to pay more for the share, there are evidently opposing forces that modulate share prices for companies in the S&P 500.
>I've been looking for a way to diversify into companies that actually have solid earnings.
No one has more solid earnings than the top tech companies. Assuming you don't work for Tesla, you are already doing about the best you can in the US. Your options to diversify are to invest in other countries, develop your political connections, and possibly get into real estate development. Maybe have a bunch of kids.
Just how much of the market do retail investors control? I thought they were a drop in the bucket.
Also, is there a way to know how much of the total volume of shares is being traded now? If I kept hyping my company (successfully) and drove the share price from $10 to $1000 thanks to retail hype, I could 100x the value of my company, let's say from $100m to $10B, while the amount of money actually changing hands would be minuscule in comparison.
When you add in money managed on behalf of retail investors it gets big fast: think index funds, pensions, etc. They are not immune, and ETFs by definition need to participate.
Retail has gotten a lot bigger lately (last 10 years, and mostly since COVID) and a lot more "organized".
Goldman puts out weekly retail reports that show retail is 20% of trading in a lot of names, and higher in a lot of the meme-stock names.
They used to be tiny due to $50/trade fees, but with all the free money in the system since COVID, Gen Z feeling like real estate won't be their path to freedom, options trading for retail, and zero-commission trading, retail has a real voice in the markets.
That is 20% of trading volume, a lot of which is day/week trading, which goes up the more they buy and sell to each other. This does not mean retail holds 20% of assets under management. The "voice" of the retail market is still tiny; it is only because institutional investors are betting with each other on what Reddit is going to do that things actually move.
Retail is a big deal these days. Used to be sub 10%, now it’s in the 30-40% of daily volume range IIUC.
You can easily look up the numbers you are asking for; the TL;DR is that the volume in most stocks is high enough that you can't manipulate it much. If it's even 2x overpriced, then there's $100m on the table for whoever spots this and shorts, i.e. enough money that plenty of smart people will spend effort on modeling and valuation studies.
Index investors aren't exposed to IPOs, since the common indexes (SPX etc) don't include IPOs (and if you invest in a YOLO index that does, that's on you).
Also:
> The US led a sharp rebound, driven by a surge in IPO filings and strong post-listing returns following the Federal Reserve’s rate cut.
This isn't really true. IPOs provide access to much more money in a very short time frame. They also allow parties involved to make huge coin before, during and immediately after the process.
I spend $0 on AI. My employer spends on it for me, but I have no idea how much nor how it compares to vast array of other SaaS my employer provides for me.
While I anecdotally know of many devs who do pay out of pocket for relatively expensive LLM services, they are a minority compared to folks like me, happy to leech off free or employer-provided services.
I’m very excited to hopefully find out from public filings just how many individuals pay for Claude vs businesses.
> reportedly in the low single-digit billions at best
They are expected to hit $9 billion by end of year, meaning the valuation multiple is only ~30x. Which is still steep, but at that growth rate not totally unreasonable.
The optimistic view is that Anthropic is one of about four labs in the world capable of generating truly state-of-the-art models. Also, Claude Code is arguably the best tool in its category at the moment. They have the developer market locked in.
The problem as I see it is that neither of those things are significant moats. Both OpenAI and Google have far better branding and a much larger user base, and Google also has far lower costs due to TPUs. Claude Code is neat but in the long run will definitely be replicated.
The missing piece here is that Anthropic is not playing the same game. Consumer branding and a larger user base are concerns for OpenAI vs Google. Personal chatbot/companion/search isn't Anthropic's focus.
Anthropic is going for the enterprise and for developers. They have scooped up more of the enterprise API market than either Google or OpenAI, and almost half the developer market. Those big, long contracts and integration into developer workflows can end up as pretty strong moats.
> Cursor had won the developer market from the previous winner copilot
It's a fair point, but the counter-point is that back then these tools were IDE plugins you could code up in a weekend, i.e. closer to a consumer app.
Now Claude Code is a somewhat mature enterprise platform with plenty of integrations that you'd need to chase too, and long-term enterprise sales contracts you'd need to sell into. I.e., much more like an enterprise SaaS play.
I don't want to push this argument too far, as I think their actual competitors (e.g. Google) could crank out the work required in 6-12 months if they decided to move in that direction, but it does protect them from some of the frothy VC-funded upstarts that simply can't structurally compete in multi-year enterprise SaaS.
Fun fact! You can use the word "just" in front of anything to make it sound trivial. Isn't planet Earth just one of eight planets in the Solar System? What's the big deal? Isn't Google just a website? Take out the word "just" and think on it a little. In this case, maybe there's something to that?
Most of the secret sauce of Claude Code is visible to the world anyway, in the form of the minified JavaScript bundle they send. If you’re ever wondering about its inner workings you can simply ask it to deminify itself
Developers will jump ship to a better tool in the blink of an eye. I wouldn't call it locked in at all. In fact, people do use Claude Code and Codex simultaneously in some cases.
Individual and startup devs yes. Enterprise devs, less so.
The latter are locked in to whatever vendor(s) their corporate entity has subscribed to. In a perverse twist, this gives the approved[tm] vendors an incentive to add backend integrations to multiple different providers so that their actual end-users can - at least in theory - choose which models to use for their work.
Almost every single AI doomer I listen to hasn't updated any of their priors in the last 2 years. These people are completely unaware of what is actually happening at the frontier or how much progress has been made.
You haven't actually looked at their fundamentals. They're profitable serving current models including training costs and are only losing money on future R&D training, but if you project future revenue growth onto future generations of models you get a clear path to profitability.
They charge higher prices than OpenAI and have faster-growing API demand. They have great margins on inference compared to the rest of the industry.
Sure the revenue growth could stop but it hasn’t and there is no reason to think it will.
> They’re profitable serving current models including training costs
I hear this a lot; do you have a good source (apart from their CEO saying it in an interview)? I might have more faith in him, but, checks notes, it's late 2025 and AI is not writing all our code yet (amongst other mental things he's said).
The best I can find is this TechCrunch article, which appears to be referencing an article from The Information that is paywalled.
> The Information reports that Anthropic expects to generate as much as $70 billion in revenue and $17 billion in cash flow in 2028. The growth projections are fueled by rapid adoption of Anthropic’s business products, a person with knowledge of the company’s financials said.
> That said, the company expects its gross profit margin — which measures a company’s profitability after accounting for direct costs associated with producing goods and services — to reach 50% this year and 77% in 2028, up from negative 94% last year, per The Information.
1. Sounds like exactly when early investors and insiders would want to cash in and when retail investors who “have heard of the company and like the product” will buy without a lot of financial analysis.
2. A $300bn IPO can mean actually raising $300bn by selling 100% of the company. But it could also mean selling 1% for $3bn, right? Which seems like a trivial amount for the market to absorb, no?
Whatever you think about AI, it is good that Anthropic is going public, and I'd argue it's consistent with their mission. It's better for the public to have a way to own a piece of the company.
In an interview Sam Altman said he preferred to stay away from an IPO, but the notion of the public having an interest in the company appealed to him. Actions speak louder than words, and so it is fitting from a mission standpoint that Anthropic may do it first.
Okay, let's see you guys get past the inference costs disclosure. According to the WSJ it is enough to kill the frontier shop business model. It's one of the biggest things blocking OpenAI.
Yes: to IPO you have to submit an S-1 form, which requires the last 3 years of your full financials and much more. You can't just IPO without disclosing how your business works and whether it makes or loses money, and how much.
Inference costs aren't a problem; selling inference is almost certainly profitable. The problem is that it's (probably) not profitable enough to cover the training and other R&D costs.
You did not parse that article properly. It regurgitates only what everyone else keeps saying: when you conflate R&D costs with operating costs, then you can say these companies are "unprofitable". I'd propose that with proper GAAP accounting they are profitable right now; by proper I mean that you amortize the costs of R&D against the useful life of the models as best you can.
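To make that concrete, here is what the amortized view looks like on invented numbers (none of these figures are Anthropic's; the point is only the accounting shape):

    train_cost = 3e9              # assumed one-time cost of a model generation
    useful_life_months = 24       # assumed life before the model is retired
    monthly_amort = train_cost / useful_life_months

    monthly_inference_revenue = 500e6  # assumed
    inference_margin = 0.60            # assumed gross margin on serving
    monthly_gross = monthly_inference_revenue * inference_margin

    print(f"amortized R&D: ${monthly_amort / 1e6:.0f}M/mo, "
          f"inference gross profit: ${monthly_gross / 1e6:.0f}M/mo")
    # -> amortized R&D: $125M/mo, inference gross profit: $300M/mo
    # Profitable under these assumptions; the cash-flow view looks terrible
    # only because next year's (much larger) training run is paid up front.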
I am not aware of any frontier inference disclosures that put margins at less than 60%. Inference is profitable across the industry, full stop.
Historically R&D has been profitable for the frontier labs -- this is obscured because the emphasis on scaling the last five years has meant they just keep 10xing their R&D compute budget. But for each cycle of R&D, the results have returned more in inference margin than they cost in training compute. This is one major reason we keep seeing more spend on R&D - so far it has paid, in the form of helping a number of companies hit > $1bn in annual revenue faster than almost any companies in history have done so.
All that said, be cautious shorting these stocks when they go public.
Incorrect. They're not a C corp, they're a public benefit corporation. They have a different legal obligation. Notably, they have a legal obligation to deliver on their mission. That's why Anthropic is the only actual mission-driven AI company. They do have to balance that legal obligation with the traditional legal obligations that a for-profit corporation has. But most importantly, it is actually against the law for them not to balance prioritizing making money and prioritizing AI safety!
Do you think they currently exist to prioritize AI safety? That shit won’t pay the bills, will it? Then they don’t exist. Goals are nice, OKRs yay, but at the end of the day, we all know the dollar drives everything.
It's simple: they will redefine the term (just like OpenAI redefined "AGI" into "just makes a lot of money") into "doesn't leak user data" and then claim success.
Does this mean that Anthropic has more than reached AGI, seeing as OpenAI has officially defined "AGI" as any AI that manages to create more than a hectocorn's worth (100 unicorns, or $100B) in economic value?
I think the idea was that it was the sum of all historical profits. Contrast that with valuation, which at best is about the expectation of future profits.
the talent will move out naturally -- amazon can just scoop up with its bucket (*not s3)
Not only the models, but also the training data, model architecture, documentation, weights, and latest R&D experiments?
Take an instance -> snapshot -> investigate.
Unless they get caught, it is not illegal.
maybe the full array of options is: pass the hot potato, hold the buck, or drop it like a bag.
I work for Amazon; everyone is using Claude. Nova is a piece of crap, nobody is using it. It's literally useless.
I haven't tried the new versions that just came out, though.
I guess they're taking to heart the old adage about selling picks and shovels while everyone else is digging for gold.
They could make more money keeping control of the company and have control.
I'd love to see evidence for such a thing, because it's not clear to me at all that this is the case.
I personally think they're the best of the model providers but not sure if any foundation model companies (pure play) have a path to profitability.
https://www.anthropic.com/news/anthropic-acquires-bun-as-cla...
Gemini could get much better tomorrow and their entire customer base could switch without issue.
Just having far more user search queries and click data gives them a huge advantage.
Model training, sure. But that will slow down at some point.
They're preparing for IPO?
> They could make more money keeping control of the company and have control.
It depends on how much they can sell for.
https://medium.com/@Arakunrin/the-post-ipo-performance-of-y-...
> One thing you're right about: Anthropic isn't surviving, it's thriving. Probably the fastest-growing revenue in history.
Growing revenue and losing money is not “thriving”
Same w/ Perplexity.
Is the claim that coding agents can't be profitable?
Their margins are negative and every increase in usage results in more cost. They have a whole leaderboard of people who pay $20 a month and then use $60,000 of compute.
https://www.viberank.app
If you're telling me to click the link, I did, but I backed out because I thought you'd actually be willing to break it down here instead. I could also ask Claude about it, I guess.
$60K in a month was unusual (and possibly exaggerated); amounts in the thousands were not, and for those people would pay $200 on their Max plan.
Since that bonanza period Anthropic seem to have reined things in, largely through (obnoxiously tight) weekly consumption limits for their subscription plans.
It’s a strange feeling to be talking about this as if it were ancient history, when it was only a few months ago… strange times.
This is always the pitch for money-losing IPOs. Occasionally, it is true.
It's not, even going by his own source: https://www.youtube.com/watch?v=iWs71LtxpTE
He said that this applies to "many teams" rather than "uniformly across the whole company".
I kind of get it, especially if you are stuck on some shitty enterprise AI offering from 2024.
But overall it’s rather silly and immature.
In the AI coding and tooling space everything seems to be constantly changing: which models, what workflows, what tools are in favor are all in flux. My hesitancy to dive in and regularly include AI tooling in my own programming workflow is largely about that. I'd rather wait until the dust has settled some.
Also, FWIW, I think healthy skepticism is great; but developers outright denying that this technology will be useful going forward are in for a rude awakening, IMO.
I think it was also back in March, not a year ago
>"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code," Amodei said at a Council of Foreign Relations event on Monday.
>Amodei said software developers would still have a role to play in the near term. This is because humans will have to feed the AI models with design features and conditions, he said.
>"But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then, we will eventually reach the point where the AIs can do everything that humans can. And I think that will happen in every industry," Amodei said.
I think it's a silly and poorly defined claim.
edit: sorry, you mostly included it paraphrased; it still does a disservice (I understand it's largely the media's fault) to cut that full quote short. I'm trying to specifically address someone claiming this person said, over a year ago, that 90% of developers would be replaced within a year, which is beyond misleading.
edit to put the full quote higher:
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
from https://www.youtube.com/live/esCSpbDPJik?si=kYt9oSD5bZxNE-Mn
(sorry have been responding quickly on my phone between things; misquotes like this annoy the fuck out of me)
This idea that “AI writes 90% of our code” means you don’t need developers seems to spring from a belief that there is a fixed amount of software to produce, so if AI is doing 90% of it then you only need 10% of the developers. So far, the world’s appetite for software is insatiable and every time we get more productive, we use the same amount of effort to build more software than before.
The point at which Anthropic will stop hiring developers is when AI meets or exceeds the capabilities of the best human developers. Then they can just buy more servers instead of hiring developers. But nobody is claiming AI is capable of that so far, so of course they are going to capitalise on their productivity gains by hiring more developers.
I'm not an LLM luddite, they are useful tools, but people with vested interests make a lot of claims that if they were true would result in a situation where we should already be seeing the signs of a giant software renaissance... and I just haven't seen that. Like, at all.
I see a lot more blogging and influencer peddling about how AI is going to change everything than I do actual signs of AI changing much of anything.
The employees/VCs of companies that IPO'd in 1999 and early 2000 cashed out, leaving bag holders. The companies that IPO'd in 2000/2001 had a mixed bag; the six-month lockup had many employees salivating but unable to cash out. It all depended on the timing of the thing. This time around there are private markets that apparently allow employees to become liquid. Nevertheless, earlier is better for startups to IPO, particularly when the tide appears to be turning.
It's a hot take, I know :D
That said, Gemini is still very, very good at reviews, SQL, design, and (relatively) smaller edits; but today it is not at all obvious that Google is going to win it all. They're positioned very well, but execution needs to be top-notch.
I know that people here are myopically focussed on code, but that's not what the majority of people use AI for.
If Opus 4.5 is better than Gemini 3 for code, but the same or worse for most other uses (which seems to be the case according to benchmarks), that's great for us but terrible for Anthropic.
Claude still can't even draw basic pictures, for example.
It's an absolute workhorse.
It is so proactive in fixing blockers - 90% of the time for me, choosing the right path forward.
If anything they're far ahead of Google on the enshittification schedule (Google still gives out API keys for free Gemini usage and a free tier on Gemini CLI, although the CLI is still pretty shaky, unfortunately, but that's a different issue).
It also doesn't help that CC will stop working literally in the middle of a task with zero heads-up; at best I get the 90% warning and then, 30 seconds later, it stops working, claiming I hit 100% after about two additional messages during the same task. I'm truly baffled by how they've managed to make the warnings as useless and aggravating as possible in CC. It routinely shuts down while the repo is in a broken state, so I have to ask Codex to read the logs and piece things back together to continue working.
Have you used Gemini 3?
As to the size of the bump they'll get, there isn't a single rule of thumb, but larger-cap companies tend to get a smaller bump, which you'd expect. I've seen models estimate a 2-5% bump for large companies, 4-7% for mid-caps, and 6-12% for "small" companies under a $20 billion market cap.
> The sum of the most recent four consecutive quarters’ Generally Accepted Accounting Principles (GAAP) earnings (net income excluding discontinued operations) should be positive as should the most recent quarter.
https://www.spglobal.com/spdji/en/documents/methodologies/me...
Everybody who puts their retirement fund into an index fund is buying it without relation to price (aka price-insensitive). But the index fund itself is buying shares based on each company's relative performance, hence the index fund is price-sensitive. That is evidenced by companies falling out of the S&P 500 and even failing.
*specifically float-adjusted market capitalization
https://www.spglobal.com/spdji/en/documents/index-policies/m...
>The goal of float adjustment is to adjust each company’s total shares outstanding for long-term, strategic shareholders, whose holdings are not considered to be available to the market.
see also:
https://www.spglobal.com/spdji/en/methodology/article/sp-us-...
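To make the float adjustment concrete, here's a minimal sketch of float-adjusted market-cap weighting; the tickers, share counts, and prices are all made up for illustration:

    # Float-adjusted market-cap weighting: exclude strategic holdings,
    # then weight each company by its share of total float-adjusted cap.
    companies = {
        # ticker: (shares_outstanding, strategic_holdings, price)
        "AAA": (1_000_000_000, 100_000_000, 50.0),
        "BBB": (500_000_000, 250_000_000, 200.0),
        "CCC": (2_000_000_000, 0, 10.0),
    }

    # Float-adjusted cap = (shares outstanding - strategic holdings) * price
    float_caps = {
        t: (out - strategic) * price
        for t, (out, strategic, price) in companies.items()
    }
    total = sum(float_caps.values())

    for ticker, cap in float_caps.items():
        print(f"{ticker}: float cap ${cap:,.0f}, weight {cap / total:.1%}")

The point of the adjustment: a company where insiders hold half the shares gets half the index weight its raw market cap would suggest, because only the freely tradable shares count.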
For most ordinary investors, this doesn't really matter, because you put your money into your retirement fund every month and you only take it out at retirement. But if you're looking at the short term, it absolutely matters. I've heard S&P 500 indexing referred to as a momentum investment strategy: it buys stocks whose prices are going up, on the theory that they will go up more in the future. And there's an element of a self-fulfilling prophecy to that, since if everybody else is investing in the index fund, they also will be buying those same stocks, which will cause them to go up even more in the future.
If you want something that buys shares based on each company's relative performance, you want a fundamental-weighted index. I've looked into that and I found a few revenue-weighted index funds, but couldn't find a single earnings-weighted index fund, which is what I actually want. Recommendations wanted; IMHO the S&P 500 is way overvalued on fundamentals and heavily exposed to certain fairly bubbly stocks (the Mag-7 alone make up 35% of your index fund, and one of them is my employer, and all of them employ heavily in my geographic area and are pushing up my home value), so I've been looking for a way to diversify into companies that actually have solid earnings.
This isn't a term used in economics. The typical terms used are positive price sensitivity and negative price sensitivity.
https://www.investopedia.com/terms/p/price-sensitivity.asp
While it is true that being added to the SP500 can lead to an increase in demand, and hence cause the index fund to pay more for the share, there are evidently opposing forces that modulate share prices for companies in the SP500.
>I've been looking for a way to diversify into companies that actually have solid earnings.
No one has more solid earnings than the top tech companies. Assuming you don't work for Tesla, you're already doing about the best you can in the US. Your options to diversify are to invest in other countries, develop your political connections, and possibly get into real estate development. Maybe have a bunch of kids.
https://companiesmarketcap.com/most-profitable-companies/
If OpenAI IPOs first, it'd be huge; then Anthropic does, but the AI IPO hype has sailed.
If Anthropic IPOs first, they get the AI IPO hype, and OpenAI's IPO is probably huge either way.
Also, is there a way to know how much of the total volume of shares is being traded now? If I kept (successfully) hyping my company and drove the share price from $10 to $1000 thanks to retail hype, I could 100x the value of my company, let's say from $100M to $10B, while the amount of money actually changing hands would be minuscule in comparison.
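To illustrate that arithmetic (all numbers below are invented for the sake of the example):

    # Hypothetical numbers: how little traded volume it can take
    # to move a company's paper valuation by a lot.
    shares_outstanding = 10_000_000   # assumed total share count
    start_price = 10.0                # $10/share -> $100M market cap
    end_price = 1_000.0               # $1000/share -> $10B market cap

    # Suppose only 1% of the shares actually trade during the run-up,
    # at an average price of ~$500.
    traded_shares = shares_outstanding * 0.01
    avg_trade_price = 500.0

    cap_change = shares_outstanding * (end_price - start_price)  # ~$9.9B
    dollars_traded = traded_shares * avg_trade_price             # ~$50M

    print(f"Market cap change:   ${cap_change:,.0f}")
    print(f"Dollars that traded: ${dollars_traded:,.0f}")

The paper valuation moves with the marginal price, so the cash that actually changed hands can be orders of magnitude smaller than the change in market cap.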
> ETFs by definition need to participate
You meant to say "index funds". There are many different kinds of ETFs.
Genuinely asking.
Goldman puts out retail reports weekly that show retail is 20% of trading in a lot of names, and higher in a lot of the meme stocks.
Retail used to be tiny due to $50/trade fees, but with all the free money in the system since COVID, Gen Z feeling like real estate won't be their path to freedom, options trading for retail, and zero-commission trading, retail has a real voice in the markets.
You can easily look up the numbers you're asking for; the TL;DR is that the volume in most stocks is high enough that you can't manipulate it much. If it's even 2x overpriced, then there's $100M on the table for whoever spots this and shorts it, i.e. enough money that plenty of smart people will spend effort on modeling and valuation studies.
But that isn't relevant? If they trade a lot but own less than 10% of the shares they're still a small piece.
The institutional investors are likely not trading much; things like 401(k)s are all long-term investments.
This isn't going to end well, is it?
Modern IPOs are mainly dumping on retail and index investors.
Also:
> The US led a sharp rebound, driven by a surge in IPO filings and strong post-listing returns following the Federal Reserve’s rate cut.
https://www.ey.com/en_us/insights/ipo/trends
And as for the rest (S&P 500 etc.), these companies are going to fake profits using some sort of financial engineering to get included.
See page ~9 of https://www.spglobal.com/spdji/en/documents/methodologies/me...
Anyone would be bearish on Nvidia today if the share price implied a $10T valuation.
I spend $0 on AI. My employer spends on it for me, but I have no idea how much nor how it compares to vast array of other SaaS my employer provides for me.
While I anecdotally know of many devs who do pay out of pocket for relatively expensive LLM services, they are a minority compared to folks like me who are happy to leech off free or employer-provided services.
I’m very excited to hopefully find out from public filings just how many individuals pay for Claude vs businesses.
They are going public.
If they get to be a memestock, they might even keep the grift going for a good while. See Tesla as a good example of this.
They are expected to hit $9 billion by end of year, meaning the valuation multiple is only ~30x. That's still steep, but at that growth rate not totally unreasonable.
https://techcrunch.com/2025/11/04/anthropic-expects-b2b-dema...
The problem as I see it is that neither of those things are significant moats. Both OpenAI and Google have far better branding and a much larger user base, and Google also has far lower costs due to TPUs. Claude Code is neat but in the long run will definitely be replicated.
Anthropic is going for the enterprise and for developers. They have scooped up more of the enterprise API market than either Google or OpenAI, and almost half the developer market. Those big, long contracts and integration into developer workflows can end up as pretty strong moats.
I am old enough (> 1 year old) to remember when Cursor had won the developer market from the previous winner, Copilot.
Google or Apple should have locked down Anthropic.
It's a fair point, but the counter-point is that back then these tools were IDE plugins you could code up in a weekend, i.e. closer to a consumer app.
Now Claude Code is a somewhat mature enterprise platform with plenty of integrations that you'd need to chase too, plus long-term enterprise sales contracts you'd need to sell into, i.e. much more like an enterprise SaaS play.
I don't want to push this argument too far, as I think their actual competitors (e.g. Google) could crank out the required work in 6-12 months if they decided to move in that direction, but it does protect them from some of the frothy VC-funded upstarts that simply can't structurally compete in multi-year enterprise SaaS.
Is there some sort of unlimited plan that people take advantage of?
It's a step up from copy-pasting from an LLM.
But Claude Code is on another level.
Google should be stomping everyone else, but its ad addiction in search will hold it back. Innovator's dilemma...
Developers will jump ship to a better tool in the blink of an eye. I wouldn't call it locked in at all. In fact, people use Claude Code and Codex simultaneously in some cases.
The latter are locked in to whatever vendor(s) their corporate entity has subscribed to. In a perverse twist, this gives the approved[tm] vendors an incentive to add backend integrations to multiple different providers so that their actual end-users can - at least in theory - choose which models to use for their work.
what about Chinese models?..
when has anything been 'locked in', someone comes with a better tool people will switch.
Are you ... aware that OpenAI and Google have launched more recent models?
They charge higher prices than OpenAI and have faster-growing API demand. They have great margins on inference compared to the rest of the industry.
Sure, the revenue growth could stop, but it hasn't, and there is no reason to think it will.
I hear this a lot; do you have a good source (apart from their CEO saying it in an interview)? I might have more faith in him, but, checks notes, it's late 2025 and AI is not writing all our code yet (amongst other mental things he's said).
> The Information reports that Anthropic expects to generate as much as $70 billion in revenue and $17 billion in cash flow in 2028. The growth projections are fueled by rapid adoption of Anthropic’s business products, a person with knowledge of the company’s financials said.
> That said, the company expects its gross profit margin — which measures a company’s profitability after accounting for direct costs associated with producing goods and services — to reach 50% this year and 77% in 2028, up from negative 94% last year, per The Information.
https://techcrunch.com/2025/11/04/anthropic-expects-b2b-dema...
2. A $300bn IPO can mean actually raising $300bn by selling 100% of the company. But it could also mean selling 1% for $3bn, right? Which seems like a trivial amount for the market to absorb, no?
It would be so massively oversubscribed that it would become a $600bn company by the end of the day (which is a good tactic for future fundraising too).
I suspect if/when Anthropic does its next raise VCs will be buyers still not sellers.
In an interview Sam Altman said he preferred to stay away from an IPO, but the notion of the public having an interest in the company appealed to him. Actions speak louder than words, and so it is fitting from a mission standpoint that Anthropic may do it first.
https://www.wsj.com/tech/ai/big-techs-soaring-profits-have-a...
I am not aware of any frontier inference disclosures that put margins at less than 60%. Inference is profitable across the industry, full stop.
Historically R&D has been profitable for the frontier labs -- this is obscured because the emphasis on scaling the last five years has meant they just keep 10xing their R&D compute budget. But for each cycle of R&D, the results have returned more in inference margin than they cost in training compute. This is one major reason we keep seeing more spend on R&D - so far it has paid, in the form of helping a number of companies hit > $1bn in annual revenue faster than almost any companies in history have done so.
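As a toy illustration of that claim (every number here is invented; real training budgets and margins aren't public):

    # Toy unit economics for one R&D cycle: train a model at some cost,
    # then earn inference revenue on it at some gross margin.
    train_cost = 1_000_000_000         # assumed $1B training-compute cost
    inference_revenue = 3_000_000_000  # assumed $3B lifetime inference revenue
    gross_margin = 0.60                # assumed 60% inference margin

    inference_profit = inference_revenue * gross_margin  # $1.8B
    net_return = inference_profit - train_cost           # +$0.8B this cycle

    print(f"Net return on this training run: ${net_return:,.0f}")
    # The catch: if the next run costs 10x more, total spend keeps
    # growing even while each individual cycle pays for itself.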
All that said, be cautious shorting these stocks when they go public.
It is against the law to prioritize AI safety if you run a public company. You must prioritize profits for your shareholders.
- Google cofounders Larry Page and Sergey Brin
Then came the dot-com bubble.
This is nonsense. Public companies are just as free as private companies to maximise whatever their shareholders want them to.
Google Gemini