> OpenAI is projecting that its total revenue for 2030 will be more than $280 billion
For context, that is more than the annual revenue of all but 3 tech companies in the world (Nvidia, Apple, Google), and about the same as Microsoft.
OpenAI meanwhile is projected to make $20 billion in 2026. So a casual 1300% revenue growth in under 4 years for a company that is already valued in the hundreds of billions.
Must be nice to pull numbers out of one's ass with zero consequence.
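The arithmetic above checks out, for what it's worth. A quick sanity check using only the figures from the comment (the $280B and $20B projections themselves are OpenAI's, not verified):

```python
# Sanity-check the growth claim: $20B (2026) -> $280B (2030).
projected_2026 = 20e9
projected_2030 = 280e9
years = 4

multiple = projected_2030 / projected_2026   # 14x
growth_pct = (multiple - 1) * 100            # 1300% growth
cagr = multiple ** (1 / years) - 1           # ~93% compounded per year

print(f"{multiple:.0f}x, {growth_pct:.0f}% growth, {cagr:.0%} CAGR")
# -> 14x, 1300% growth, 93% CAGR
```

So "1300% in under 4 years" is the same thing as roughly doubling revenue every year for four straight years.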
The metaphor in the original post was more like: "You're already wearing a raincoat and carrying an umbrella, and you're forecasting a flood warning?" So the flood warning (projected revenue) may be completely incorrect, but it's not incongruous with the fact that I'm wearing a raincoat and umbrella (current investor valuation). :-)
Investors are valuing it at ~$500B, which already projects massive revenue growth. OpenAI is saying "actually we are going to grow 10x faster than that". And all of this is without bringing up the “profit” word.
How much money was WeWork supposed to bring in when they were valued at $50 billion? It dropped to $10B when they put out their S-1 and faced public scrutiny for the first time. This happened before COVID and the switch to WFH. Were their investors unaware of their actual finances?
I like the little blurb at the end which said that Codex had 1.5 million users. So, if you can get each of them to pony up a mere $186k apiece, they can hit those revenue numbers.
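The $186k figure in the comment above is easy to verify from the two numbers in the thread:

```python
# If the $280B revenue target had to come from Codex's 1.5M users alone:
revenue_target = 280e9
codex_users = 1.5e6

per_user = revenue_target / codex_users
print(f"${per_user:,.0f} per user")   # -> $186,667 per user
```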
I have used AI a bit, like it for a bunch of use cases. But god damn, these numbers are so big. Gotta wonder, are the returns even worth it? RAM prices up, electricity prices up, hard disk prices up… Maybe this is the price to pay for “progress”, but it sure is wild
> Must be nice to pull numbers out of one's ass with zero consequence.
Seems accurate?
What they are saying is that if Microsoft ends up buying the rest of their shares, then Microsoft's total revenue by 2030 will be more than $280 billion.
Nvidia gives money to OpenAI so they can buy GPUs that don't exist yet with memory that doesn't exist yet so they can plug them into their datacenters that don't exist yet powered by infrastructure that doesn't exist yet so they can all make profit that is mathematically impossible at this point - Stolen from someone else.
I was a paying customer ($20 a month) until AI prompted a layoff in my dying field, which is web design and front-end coding. Now every time ChatGPT yells at me about memory, I tell it fine, I'm just gonna use Gemini! I bet a lot of people are doing the same thing, as both sit at the top of the iPhone charts.
Today I got a feature request from another team in a call. I typed it into our Slack channel as a note. Someone typed @cursor, and moments later the feature was implemented (correctly) and ready to merge.
The tools are good! The main bottleneck right now is better scaffolding so that they can be thoroughly adopted and so that the agents can QA their own work.
I see no particular reason not to think that software engineering as we know it will be massively disrupted in the next few years, and probably other industries close behind.
It really doesn't matter how "good" these tools feel, or whatever vague metric you want - they hemorrhage cash at a rate perhaps not seen in human history. In other words, that usage you like is costing them tons of money. The bet is either that energy/compute will become vastly cheaper in a matter of a couple of years (extremely unlikely), or that they find other ways to monetize that don't absolutely destroy the utility of their product (ads, an area we have seen Google flop in spectacularly).
And even if the latter strategy works - ads are driven by consumption. If you believe 100% of OpenAI's vision of these tools replacing huge swaths of the workforce reasonably quickly, who will be left to consume? It's all nonsense, and the numbers are nonsense if you spend any real time considering it. The fact that SoftBank is a major investor should be a dead giveaway.
The interesting thing from a practitioner perspective is how much the actual cost of using AI has dropped even while these capex numbers are astronomical. Generating images is down to $0.003 each, TTS is 8x cheaper than it was a year ago, and you can produce a full AI-generated video for under $2 in API costs.
The disconnect between infrastructure spending and unit economics reminds me of the fiber optic overbuild in the early 2000s. Enormous capital was wasted on capacity that took a decade to fill, but the people who built on top of cheap bandwidth did extremely well. The value capture for AI is likely going to be similar: not in the model providers but in the applications that use cheap inference as a building block.
It's going to get cheaper, but the new hardware will still be out of reach for consumers, and they need to provide returns on the shares - aka a wealth transfer from taxpayers + subscribers to shareholders.
But even if by some miracle they get to 60% margins, are there even enough subscribers to make OpenAI as profitable as Microsoft?
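Rough back-of-envelope on that question. The ~$88B net income figure for Microsoft (roughly its FY2024 number) is an outside assumption, not from the thread:

```python
# Revenue OpenAI would need at a (hypothetical) 60% net margin
# to match Microsoft's annual profit.
msft_net_income = 88e9      # assumed: Microsoft FY2024 net income, approx.
assumed_margin = 0.60

required_revenue = msft_net_income / assumed_margin
print(f"~${required_revenue / 1e9:.0f}B in revenue needed")   # -> ~$147B
```

So at a fantasy 60% margin they'd need roughly $147B in revenue - about half of what they're projecting for 2030, but still more than 7x their projected 2026 revenue.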
> After previously boasting $1.4 trillion in infrastructure commitments, OpenAI is now telling investors that it plans to spend $600 billion by 2030.
does the word "commitment" have a different meaning in this context? How do you cut a commitment >50%? OpenAI's partners are making decisions based on the previous commitment because.. OpenAI committed to it. I must be completely wrong because how does this not set off a severe chain reaction?
edit: as others have pointed out, the article is misleading. $1.4T was over 8 years or by 2034. 2030 is halfway to 2034 and $600B is not too far from half of $1.4T.
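The edit's point can be checked with the thread's own numbers (treating 2030 as roughly the halfway mark of the 8-year window):

```python
# Pro-rate the original $1.4T commitment (8 years, through ~2034)
# to the 2030 horizon used by the new $600B figure.
total_commitment = 1.4e12
total_years = 8
years_to_2030 = 4   # roughly half the window

pro_rata = total_commitment * years_to_2030 / total_years    # $700B
new_figure = 600e9
shortfall_pct = (1 - new_figure / pro_rata) * 100            # ~14%

print(f"pro-rata ${pro_rata / 1e9:.0f}B vs ${new_figure / 1e9:.0f}B "
      f"({shortfall_pct:.0f}% below)")
# -> pro-rata $700B vs $600B (14% below)
```

So the apples-to-apples cut is closer to ~14% than >50%, consistent with the edit.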
> how does this not set off a severe chain reaction?
Just like you and me, Sam Altman can say anything he likes to say. To pump the investors' confidence, to make the US administration believe he's serious about AGI, or just to make himself feel good. It's not legally binding in any way.
You should never read it as "OpenAI committed to..." but as "Altman said these words..." and words mean very little today.
I think TSMC laughed them out of the room when they announced the original numbers. So maybe there’s no reaction now because everyone already knew not to trust OpenAI’s promises.
It’s interesting that they felt the need to leak this to the press.[0] Some investors or partners (or LPs, board members, etc. of those) are getting spooked by the spending plans and rightfully questioning whether the return is there. Putting it in public may feel like a stronger commitment (though I doubt it).
Even with the revised numbers, I cannot believe that they’ll have $280bn in revenue by 2030.
[0]: You can tell by the reason the sources are granted anonymity: because the information is private, not because they aren’t authorized to speak on the matter
A trillion here, a trillion there and all the AI companies are also telling us they're planning on wiping out 2/3 of jobs in the next 10 years? Nothing about the economics of the AI boom makes any sense.
I'm not saying it's not possible, but if we wipe out 2/3 of jobs with AI, who is going to be buying *all the stuff*?
Unemployed people aren't much of a demographic, and you can't just say UBI because that doesn't make sense either. You think the billionaires are going to allow themselves to be taxed heavily enough to support UBI just so that there's a market for people to buy stuff from them? That's nonsense.
Not trying to creep anybody out, but I just don't see a stable outcome for a society that doesn't need 2/3 of the population.
>I'm not saying it's not possible, but if we wipe out 2/3 of jobs with AI, who is going to be buying all the stuff?
Money is just a proxy for access to resources. If a machine that is capable of replacing almost all jobs is really created, then money will matter much less than access to said machine.
Taken to the extreme to make the point: if you had a genie that could grant your every wish, what would you need money for?
OpenAI is not going to pay off my mortgage, it’s not going to replace my roof, it’s not going to fix my car, and so on. Money is still going to be very necessary for goods and services.
> If you had a genie that could grant your every wish, what would you need money for?
The things that a magic AI Genie will never be able to give you no matter how far into the AGI/Singularity things get. Such as Land, Energy, Precious Metals, Political and Social Capital, etc.
We're already there. Most of us have jobs that are just made up to fill the gaps after steam power and automation. In the future, we'll have jobs that fill up the AI gap. It's UBI, but more arbitrary so we can tell ourselves we're useful while group X is not.
Anthropic is running a marketing campaign similar to the one AWS/DevOps tools ran when trying to replace in-house IT: pitch to the few that they can be 10/100x as productive and valuable, in the hope that they will push their organizations in this direction.
Depends. The basics are testable. An explanation of scarcity is available in Basic Economics (Sowell), which should be required reading… but whatever this VC nightmare thing is, I agree.
The market is spooked by capex projections generally. Interesting that Microsoft, despite some apparent hesitation in 2025, seems to be still going all in on AI spend over the next several years according to the most recent earnings call.
This is the underreported second-order risk. Micron, Samsung, and SK Hynix all allocated HBM capacity based on hyperscaler capex projections. NAND fabs are similarly committed. A 57% reduction in projected OpenAI spend ($1.4T -> $600B) doesn't just affect NVIDIA orders -- it ripples into the memory suppliers who shifted capacity to HBM and away from commodity DRAM/NAND. If multiple hyperscalers revise down simultaneously, you get a situation similar to the 2019 crypto ASIC overhang: companies tooled up for demand that evaporated. Not predicting that, but the purchasing commitments question is real.
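For reference, the 57% figure in the comment above comes from comparing the two headline numbers directly (without the time-horizon adjustment other commenters raised):

```python
# Headline-to-headline cut: $1.4T commitment -> $600B projected spend.
reduction = 1 - 600e9 / 1.4e12
print(f"{reduction:.0%} reduction")   # -> 57% reduction
```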
These numbers were always out of line with basic infrastructure constraints. People were talking like the US would build 50 new nuclear power plants in 10 years. And I believe we will not see $600B either, there are basic infrastructure, permitting, and power delivery limits.
It's not unforeseeable that the US demarcates Special Economic Zones without environmental oversight or labor regulations to speed up the construction.
However, we are all going to be paying higher energy costs for these ridiculous infrastructure claims. Utilities typically price out energy three years in advance. If they were projecting for twice as many energy sinks, that represents an enormous amount of generation capacity which needs to be accounted for in projections.
I saw a report that previous capacity pricing was $28/MW-day. The latest numbers have shot up to $300.
Absolutely, and that's why we should be applying higher infrastructure fees to the permitting of data centers. The problem is that local governments want the tax revenue and are willing to screw over their constituents. This also goes in line with the decline of local newspapers, there is an epidemic of fraud and abuse of power happening in local governments across the country.
This article is bad. It is mixing up capex and opex. OpenAI is projecting more spending on compute through their income statement now than they were 6 months ago.
This is more complicated than just hand-wavy spending expectation resets. Other companies were taking these “commitments” and gearing up for capital investments to meet all that demand, which is now vaporizing. That creates a big mess as the AI hype machine starts to unravel.
This looks very much like a careful move to deflate the bubble without popping it, but we’ve likely passed that point.
From another comment I wrote here, but I am gonna paste a quote I found in The Intelligent Investor (page 13) about Isaac Newton and the hottest stock of his time in his country, the South Sea Company.
The great physicist muttered that he "could calculate the motions of the heavenly bodies, but not the madness of the people"
There seems to be a lot of madness happening in the world again as well. A lot of OpenAI claims make no sense except if we consider the world to have gone mad.
The bubbly nature of OpenAI, just doing whatever they feel like with zero regard for anything, including financials, is a form of madness.
I was reading another comment and actually opened up the Intelligent Investor book to read the quote from there. I highly recommend that book, although truth be told I haven't read more than the first 50-100 pages, as I quickly felt that passive investing is the right vehicle for me personally.
Will it continue to transform the economy radically? Yes.
Will that translate to the model-makers somehow capturing the entire value of the transformed economy? No.
There were a few key moments that revealed this. When OpenAI initially declared "there is no moat," I wasn't sure whether to believe them. GPT 3.5 and 4 were so much better than the competition, it felt like them saying that they had no moat was some sort of attempt to avoid regulation or scrutiny. But then, lo and behold, Claude and Gemini caught up; there really was no moat.
But up until then, while it was clear that there was no moat around OpenAI, it was unclear if there was a moat around big tech. Mistral was meh. Even Meta's were meh. We also had no idea how much these models actually cost to run. It wasn't until the "DeepSeek moment," and especially once these open source models actually started being hosted on third party services, that it became clear that this was actually a competitive landscape.
And as has already been demonstrated, because the interface for all of these models is just plain language, the cost of switching models is basically non-existent.
"there is no moat" usually mean "we have no moat" or "we want you to believe we have no moat". There are always moats, like being directly in front of eyes and thumbs (Apple) or having extensive data (Google) along hardware production capabilities, datacenters, and tons of money.
> So a casual 1300% revenue growth in under 4 years for a company that is already valued in the hundreds of billions.
Such a weird sentence. The correct causality is the reverse: it's valued in the hundreds of billions because investors expect 1300% revenue growth.
Another example is how Isaac Newton lost money in another bubble as well: https://www.smithsonianmag.com/smart-news/market-crash-cost-... [The market crash which cost Newton a fortune]
So even NEWTON, the legendary ISAAC NEWTON, could lose money in a bubble and was left holding umbrellas when there was no rain.
From the book The Intelligent Investor, I want to share a quote, so here it goes (opened the book from my shelf, the page number is 13):
The great physicist muttered that he "could calculate the motions of the heavenly bodies, but not the madness of the people"
This quote seems so applicable in today's world; I am gonna create a parent comment about it as well.
Also, for the rest of Newton's life, he forbade anyone to speak the words "South Sea" in his presence.
Newton lost more than $3 million in today's money because of the South Sea Company bubble.
I'm three of them, and I never spent a cent on any LLMs. I doubt I'm the only one.
He is counting on hundreds of husbands: https://xkcd.com/605/
they'll probably fix it just like they did fix strawberry
their estimates will drop by ~20x which will be their max
as underdog in the race they'll grab a fraction of even that
Garbage in, garbage out, same as before.
https://x.com/sama/status/1986514377470845007
> Not trying to creep anybody out, but I just don't see a stable outcome for a society that doesn't need 2/3 of the population.
I don’t imagine they’re pretty.
Then when the labor market is nice and hollowed out, the tokens will go up in price several-fold.
Everyone else has been less explicit, likely because it's just not politically a good idea to keep pronouncing it.
It's part of Anthropic's marketing though. Maybe to push the idea that you can't beat us, so join us?
what if… MBAs turned from economics to a religion and no one noticed?
They will have no choice. Proletariat must not be hungry and agitated. Free legal MJ for everyone!
If you want some light reading to get caught up on current events, 1984, Brave New World, and Atlas Shrugged will mostly do it.
OpenAI... not so sure; they need an IPO soon while the public is still high off the double bull run post-2020.
90% chance in 6-12 months spending expectations drop to $0.
But this time draw it for spending expectations.
> I saw a report that previous capacity pricing was $28/MW-day. Latest numbers have shot up to $300.
If they didn't appropriately account for risk that the expectation would not pan out, well, that's on them.
Seeing the same setup in 2008 and now. Enjoy your subsidized $200/month Codex, because it's going to go up in the future.
https://news.ycombinator.com/item?id=46439545
Both numbers are fictional. No one really expects any of this to be true.
The people who claim to believe this are simply lying.
AI made “basically zero” difference in U.S. economic growth last year. https://www.youtube.com/watch?v=zZHN0-ZNe_4&t=399s