GDP adjustments are warranted, but the picture is starker than both estimates suggest.
The megaprojects of previous generations all had decades-long depreciation schedules. Many 50-100+ year old railways, bridges, tunnels, dams, and other utilities are still in active use with only minimal maintenance.
Amortized year over year, the current spend would dwarf everything, given the reported depreciation schedule of 6(!) years for the GPUs - the largest line item.
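To make the amortization point concrete, here's a minimal sketch (the capex figure is a made-up round number, not a sourced one):

```python
# Straight-line amortization: the same capex expensed over a 6-year GPU
# schedule vs. a 60-year schedule typical of rail/bridge-class assets.
def annual_depreciation(capex: float, useful_life_years: float) -> float:
    """Equal expense recognized in each year of the asset's useful life."""
    return capex / useful_life_years

capex = 300e9  # hypothetical $300B buildout, purely illustrative
print(f"GPUs (6y):  ${annual_depreciation(capex, 6):,.0f}/year")   # $50B/year
print(f"Rail (60y): ${annual_depreciation(capex, 60):,.0f}/year")  # $5B/year
```

Same outlay, 10x the yearly hit to the books.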
The side effects of spending funds on these megaprojects are also something to consider. NASA spending has created a huge pile of technologies that we use day to day: https://en.wikipedia.org/wiki/NASA_spin-off_technologies.
> NASA spending has created a huge pile of technologies that we use day to day
We're a little too early to know if that's the case here too. I do foresee a chance at a reality where AI is a dead end, but after it we have a ton of cheap GPU compute lying about, which we all rush to somehow convert into useful compute (by emulating CPUs or translating traditional algorithms into GPU-oriented ones or whatever).
If all AI progress somehow immediately halted, the models that have currently been built will still have more economic impact than the Internet.
Not least because the slower the frontier advances, the cheaper ASICs get on a relative basis, and therefore the cheaper tokens at the frontier get.
We have a massive scaffolding capability overhang, give it ten years to diffuse and most industries will be radically different.
Again, all of this is obvious if you spend 1k hours with the current crop, this isn’t making any capability gain forecasts.
Just for a dumb example: there is a great ChatGPT agent for Instacart; you can share a photo of your handwritten shopping list and it will add everything to your cart. Following through the obvious product conclusions of this capability for every grocery vendor's app (integrating with your fridge, learning your personal preferences for brands, recipe recommendation systems, logistics integrations with your forecasted/scheduled demand, etc.) is, I contend, going to be equivalent in engineering effort and impact to the move from brick and mortar to online stores.
I feel a lot of people in tech have this incuriously deterministic attitude about LLMs right now: previous <expensive capital project> revolutionized the world, therefore LLMs will! Despite there being really nothing to show for it so far, other than writing rote code being a bit easier, and still requiring active babysitting by someone who knows what they are doing.
They’re already far more useful than that, and I suspect harness engineering alone could add another OOM of productivity, without any underlying change in the models available today.
You have to agree that it's totally possible that none of those things you are envisioning getting built out actually end up working as products, right?
AI (LLM) progress would stop, and then everything people try to do with those last and most capable models would end up uninteresting or at least temporary. That's the world I'm calling a "dead end".
No matter how unlikely you think that is, you have to agree that it's at least possible, right?
> then everything people try to do with those last and most capable models would end up uninteresting
I believe that some of my made up examples won’t end up getting built, but my point is that there is _so much_ low hanging fruit like this.
Of course, anything is _possible_, but let’s talk likelihood.
In my forecast the possible worlds where progress stops and then the existing models don’t end up making anything interesting are almost exclusively scenarios like “Taiwan was invaded, TSMC fabs were destroyed, and somehow we deleted existing datacenters’ installed capacity too” or “neo-Luddites take over globally and ban GPUs”, all of this gives sub-1% likelihood.
You can imagine 5-10% likelihood worlds where the growth rate of new chips dramatically decreases for a decade due to a single black-swan event like Taiwan getting glassed, but that’s a temporary setback not a permanent blocker.
Again, I’m just looking at all the things that can obviously be built now, and just haven’t made it to the top of the list yet. I’m extremely confident that this todo list is already long enough that “this all fizzles to nothing” is basically excluded.
I think if model progress stops then everyone investing in ASI takes a big haircut, but the long-term stock market progression will look a lot like the internet after the dot-com boom, i.e. the bloodbath ends up looking like a small blip in the rear-view mirror.
I guess, a question for you - how do you think about coding agents? Don’t they already show AI is going to do more than “end up uninteresting”?
> Of course, anything is _possible_, but let’s talk likelihood.
The problem with talking likelihood is that it's an interpretation game. I understand you think it's wholly unlikely that it all fizzles out, I could read that from your first post. I hope it's also clear that I do think it's likely.
That's the point where we have to just agree to disagree. We have no rapport. I have no reason to trust your judgment, and neither do you mine.
However I do feel a lot of this comes down to facts about the world now, e.g. whether Claude Opus is doing anything interesting, which are in principle places where you could provide some evidence or ideas, along the lines of the detail that I gave you.
My read so far is you are just saying “maybe it fizzles out” which is not going to persuade anyone who disagrees. Sure, “maybe”, especially if you don’t put probabilities on anything; that statement is not falsifiable.
> The problem with talking likelihood is that it's an interpretation game
I am open to updating my model in response to a causal argument, if you care to give more detail. I view likelihoods as the only way to make these sorts of conversations concrete enough that anyone could hope to update each other’s model.
Even if chatbot LLMs stop at their current capability, there's a whole ecosystem of scientific language models (in drug discovery, chemistry, materials design, etc.) and engineering language models (software, chip design, etc.) that are very valuable in their fields.
And even if chatbot LLMs turn out to be a dead end, they and other machine learning algorithms will be happy to use the data centers to create/discover a lot of stuff.
AI progress may fizzle out, but everything it has produced so far would still be there. Models are just big bags of floats - once trained, they're around forever (well, at least until someone deletes them), and the same is true of the harnesses they run in (they're just programs).
But AI proliferation is not stopping soon, because we've not picked even the low-hanging fruit just yet. Again, even if no new SOTA models were to be trained after today, there's years if not decades of R&D work into how to best use the ones we have - how to harness the big ones, where to embed the small ones, and of course, more fundamental exploration of the latent spaces and how they formed, to inform information sciences, cognitive sciences, and perhaps even philosophy.
And if that runs out or there is an Anti AI Revolution, we can still run those weather models and route planners on the chips once occupied by LLMs - just don't tell the proles that those too are AI, or it's guillotine o'clock again.
> there's years if not decades of R&D work into how to best use the ones we have - how to harness the big ones, where to embed the small ones, and of course, more fundamental exploration of the latent spaces and how they formed, to inform information sciences, cognitive sciences, and perhaps even philosophy.
I think my sense of "dead end" would entail none of those directions panning out into anything interesting. You would "explore the latent spaces" only to find nothing of value. Embedding the LLM models wouldn't end up doing anything useful for whatever reason, and philosophy would continue on without any change.
I think there is little chance it is a "dead end"; it's here to stay. But LLMs seem to have hit the diminishing-returns curve already, despite what investors might think, and so far none of the big providers actually makes money on all that investment.
I think for many, if LLMs and AI only improve marginally in the next 5-10 years, it is effectively a dead end. The capital expenditure necessitates that AI do something exponentially more valuable than what it does now.
I think we are saying the same thing. I just think the pullback on AI will be dramatic unless something amazing happens very soon.
I just don’t see it. Both professionally and personally I’m producing so much more now. Back burner projects that weren’t worth months of my time are easily worth a few hours and $20 or whatever.
You're probably already experienced at your job and using AI to enhance that, or at least using that experience to keep the AI's output clean. That's something you or a company would want to pay for, but prices have to be a lot higher than today's for the providers to make a profit. Companies want to get more out of you, or get a better price/performance ratio (an AI that delivers cheaper than the equivalent human).
But current-gen AIs are like eternal juniors: never quite ready to operate independently, never learning to become the expert that you are, practically frozen in time at the capabilities gained during training. Yet these LLMs have replaced the first few rungs of the ladder, so human juniors have a canyon to jump if they want the same progression you had. I'm seeing inexperienced people just using AI like a magic 8-ball: "The AI said whatever." [0] LLMs are smart and cheap enough to undercut human juniors, especially in the hands of a senior. But they're too dumb to ever become a senior. Where's the big money in that? What company wants to pay for an "eternal juniors" workforce when whatever they save on payroll goes to procuring external seniors, whom they're no longer producing internally?
So I’m not too sure a generation of people who have to compete against the LLMs from day 1 will really be producing “so much more” of value later on. Maybe a select few will. Without a big jump in model quality we might see “always junior” LLMs without seniors to enhance. This is not sustainable.
And you enhancing your carpentry skills in your free time isn't what pays for the datacenters and some CEO's fat paycheck.
[0] I hire trainees/interns every year, and pore through hundreds of CVs and interviews for this. The quality of a significant portion of them has gone way down in the past years, coinciding with LLMs gaining popularity.
So what. Fluctuations over a year or two are meaningless. Do you really believe that the constant-dollar price of an LLM token will be higher in 20 years?
I can see a world where energy costs rise at a rate faster than overall inflation, or are a leading indicator. In that scenario then yes I could see LLM token costs going up.
This is thoroughly debunked at this point. The frontier labs are profitable on the tokens they serve. They are negative when you bake in the training costs for the next generation.
Lol are people like you going to be enough to support the large revenues? Nope.
A firm that sees rising operating expenses but not enough increase in revenue will start to cut back on LLM spending and become very frugal (e.g. rationing).
The shovels and labour used to make those things were not depreciated.
The GPUs are the shovels, not the project. AI at any capability will retain that capability forever. It only gets reduced in value by superior developments, which are built upon technologies that the previous generation developed.
Calling the GPUs the shovels is bonkers because a) shovels are cheap, GPUs are not. And b) when you build a bridge the bridge doesn’t need shovels to be passable. Without GPUs, the datacenter is useless, the model is useless, etc.
If anything, the GPUs are the steel that the bridge is made of. Each beam can be replaced, but if too many fail the bridge is impassable. A bridge with a 6-year lifespan for each beam is insane.
You’re taking the metaphor way too literally. The people who made the most profit weren’t literally selling shovels, they were the ones providing logistics and support services to the gold miners, like hauling tons of equipment over tens of miles of mountain or providing the sales channel for the gold. They siphoned off most of the profit from the ventures that depended on them (like LLMs depend on GPUs) because the miners had no other choice, to the point where even the most productive mines often weren’t profitable at all.
A less literal example is the conquistadors: their shovels were ships, horses, gunpowder, and steel. You can look at Spanish records from the Council of the Indies archive and any time treasures were discovered, the price of each skyrocketed to the point where only the wealthiest hidalgos and their patrons could afford to go on such adventures. For example, the cost of a ship capable of a cross-Atlantic voyage went from 100k pieces of eight to over a million in the span of only a few years (predating the treasure fleet inflation!).
Gold rushes create demand shocks, and anyone who is a supplier to that demand makes bank, regardless of whether it's GPUs or "shovels".
> You can look at Spanish records from the Council of the Indies archive and any time treasures were discovered, the price of each skyrocketed to the point where only the wealthiest hidalgos and their patrons could afford to go on such adventures.
Today this is real estate. And it's something people keep forgetting when arguing that ${whatever breakthrough or just more competition} will make ${some good or service} cheaper for consumers: prices of other things elsewhere will rise to compensate and consume any average surplus. Money left on the table doesn't stay there for long.
GPUs don't really have six year lifespans, though. The hardware itself lasts far longer than that, even hardware that's been used for cryptomining in terrible makeshift setups is absolutely fine for reuse.
Each of these GPUs pulls up to a kilowatt of power. The average commercial power cost is 13.4 ¢/kWh. That means running a single H100 full tilt 24/7 is an operating power cost of roughly $1,100 per card per year.
In three years the current generation of GPUs will be 50% or more faster. In six years you're talking more than 100% faster, for the same energy costs.
If you're running a GPU data center on six year old GPUs, your cost to operate per sellable unit of work is double the cost of a competitor.
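Sanity-checking the arithmetic above (taking the 1 kW draw and 13.4 ¢/kWh at face value; these are the comment's figures, not measured ones):

```python
POWER_KW = 1.0            # claimed per-card draw, running full tilt
PRICE_PER_KWH = 0.134     # average commercial rate, $/kWh
HOURS_PER_YEAR = 24 * 365

annual_power_cost = POWER_KW * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"${annual_power_cost:,.0f}/card/year")  # ~$1,174

# If a six-years-newer card does ~2x the work at the same power draw,
# the old card's energy cost per unit of work is ~2x the competitor's.
speedup_new = 2.0
print(annual_power_cost / (annual_power_cost / speedup_new))  # 2.0x
```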
One thing I am not entirely sure about is whether there will be huge efficiency gains. Just looking at TDP, that is the power consumption, of say the 3090 and 5090, the increase is substantial; then compare it to performance and the performance lift stops looking that great...
A 3x increase in compute for a 1.5x increase in TDP is pretty good, considering the underlying process has barely changed. In any case, consumer GPUs aren't a good metric, as they operate under different economic constraints.
H100 to GB200 saw a 50x increase in efficiency, for example.
Fair, I was hand-waving to make a point. "If it generates more than $1,100 + (resale price * WACC) + opportunity cost from physical space/etc." would have been more accurate.
But the point is: you don't decommission profit generators just because a competitor has a lower cost structure. You run things until it is more profitable for you to decommission them.
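As a minimal sketch of that decision rule (every input below is a hypothetical placeholder, not a sourced figure):

```python
def keep_running(annual_revenue: float,
                 power_cost: float = 1_100.0,    # $/card/year, from upthread
                 resale_price: float = 8_000.0,  # hypothetical used-GPU price
                 wacc: float = 0.10,             # hypothetical cost of capital
                 opportunity_cost: float = 500.0 # value of the space/power slot
                 ) -> bool:
    """Keep the card in service while it out-earns its true carrying cost:
    power, plus the return forgone by not selling it (resale * WACC),
    plus the opportunity cost of the rack space it occupies."""
    carrying_cost = power_cost + resale_price * wacc + opportunity_cost
    return annual_revenue > carrying_cost

print(keep_running(3_000.0))  # True: $3,000/yr clears a ~$2,400/yr bar
```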
If my data center sells a pflop at $5 because of our electricity use and the data center a state over with newer GPUs sells it at $2.50/pflop, it doesn't matter how much economic benefit it generates, my customers are all going to the data center a state over.
In the context of datacenters running AI workloads, it's cheaper to replace GPUs after a few years with faster, more energy-efficient ones, because the power cost is a major factor.
"Inference consumes 60-90% of total AI lifecycle costs." So the shovel is not the right analogy; it's more like GPU = coal-burning engine. And yes, coal was a big railroad expense, more so than financing construction debt.
Not really. The base training data cutoff will quickly render models useless as they fail to keep up with developments.
Translating some Farsi news articles about the war was hilarious, Gemini Pro got into a panic. ChatGPT either accused me of spreading fake news, or assumed this was some sort of fantasy scenario.
Karpathy - and others - consider the pre-training knowledge as much a liability as an asset. If we could just retain the emergent reasoning and language capability without the hazy recollections the models would likely be stronger.
That's definitely true for some of them, but for others it's not so clear; take the Apollo or Manhattan projects. Those of course also have lasting impact, but it's more in terms of knowledge, which at least arguably we are also accruing with these data centers.
RS-25 - designed as the HG-3 during the '60s for the Saturn V, manufactured for the Space Shuttle, refurbished for SLS, and just launched last month.
Vehicle Assembly Building - built for Saturn V launches, and its active use continues today.
Crawler-transporters - Hans and Franz were built in 1966 for Apollo and are still used for launches.
There are plenty of other examples from Apollo program of actual hardware being repurposed and used for later missions.
In other mega space projects, Hubble is still doing active research 35 years after launch, and Voyager is sending data close to 50 years later.
It is a whole other topic whether they should still be used, how NASA is funded, why programs like SLS or the Shuttle are so expensive, and so forth.
The point is these megaprojects had a long lifetime of value, albeit with higher maintenance costs for the tech-heavy ones like Apollo than, say, a bridge or a dam.
I think there's more nuance to it. The real asset is the models that are being created.
Imagine this world: the bubble "pops" in a couple years. The GPUs stick around for a few more years after that. At the end, we pretty much don't train new foundation models anymore - no one wants to spend the money on the hardware needed to make a real advance.
People continue to refine, distill, and optimize the existing foundation models for the next century or two, just like people keep laying new track over old railway rights-of-way.
I'm not sure tax depreciation rates are the best measure here. Those GPUs will be used for much longer than 6 years, and the returns from the businesses will play out over an order of magnitude longer.
The jury is still out on this. Those tax-based depreciation schedules are largely a relic of traditional data centers, where workloads are fairly moderate compared to AI use cases. Additionally, power and rack space constraints can complicate things quite a bit. If next-gen chips are significantly more efficient and you are currently constrained by power availability, you might pull your old servers and replace them with the newer ones regardless of how much useful life you have left.
Azure ran K80/P100 fleets a bit longer, for 8-9 years. Google does 9 years for TPUs.
In the current generation there are plenty of questions around:
- viability of training-to-inference cascades (the key to extended life), given custom ASICs hitting production, as Cerebras did early this year.
- energy efficiency of older chips in tight energy environments; grid capacity constraints favor running newer, more efficient chips, ignoring perhaps a short-term (<1 year) price shock due to war.
- MTBF: compared to older GPUs, modern nodes are 8-GPU clusters built on 2/3 nm processes with HBM memory, and the tolerances are much tighter, especially for training.
- new DCs being spun up under less-than-ideal conditions due to permitting, part supply, and other constraints, which will impact the operating environment.
Notwithstanding all these issues, and even taking a generous 10-year useful life, the expenses dwarf every megaproject before them.
It will become more expensive to fix than to replace. It is also more energy-intensive to operate than newer generations. MTBF is significant: the older the fleet gets, the higher the failure rates.
A typical node today has 8 GPUs; you have to keep replacing failed GPUs at ever higher frequency, cannibalizing parts from other nodes, as nobody is selling new GPUs of that model anymore.
In addition to outright failure, there are higher error rates in computation; in graphics these tend to show up as flickers, screen artifacts, and so on.
Azure operated K80s and P100s for 9 and 7 years respectively, but those were 2-GPU nodes and of course much simpler compared to today's HBM behemoths on 2/5 nm process nodes. Google operates their custom ASIC TPUs for about 8-9 years.
With custom inference ASICs like Cerebras's hitting production, whether cascading NVIDIA chips from training to inference gets you the 5-6 year useful life is also not clear.
GPUs do have a use in warfare though. I mean, LLMs are basically offensive weapons disguised as software engineers.
Sure, LLMs can kind of put together a prototype of some CRUD app, so long as it doesn’t need to be maintainable, understandable, innovative or secure. But they excel at persisting until some arbitrary well defined condition is met, and it appears to be the case that “you gain entry to system X” works well as one of those conditions.
Given the amount of industrial infrastructure connected to the internet, and the ways in which it can break, LLMs are at some point going to be used as weapons. And it seems likely that they’ll be rather effective.
FWIW, people first saw TNT as a way to dye things yellow, and then as a mining tool. So LLMs starting out as chatbots and then being seen as (bad) software engineers does put them in good company.
Imagine comparing something that has a useful life of 100+ years vs a thing that wears out, is much less durable, needs replacing much more often, and can become obsolete from innovation within its own product category.
Comical. China can continue innovating on GPUs, and all this existing spend to stock up on compute is a waste. Again, comical. Moreover, China has energy capacity that the US does not. Meaning all those GPUs that deliver less performance per watt? Yep, going in the bin.
So yeah.. carry on telling me how this is going to yield some supreme advantage lmao.
They’re unclassified public cloud GPUs today, much the same as the massive industrial base of the United States was churning out harmless consumer widgets in 1939. Those widget makers happened to be reconfigurable into weapon makers, and so wartime production exploded from 2% to 40% of GDP in 5 years [1]. But the total industrial output of course didn’t expand by nearly that much.
I think it’s maybe plausible that private compute feels similar in the next do-or-die global war.
The United States has almost no domestic capability to produce advanced semiconductors. There is no abundance of industrial capacity cranking out GPUs that can be quickly diverted from AI companies into weapon systems.
Even if private compute was at a level of maturity where you could use it for classified workloads, knowing that the infrastructure is being managed by someone in India or China, securely getting data into and out of that infrastructure is still a mostly unsolvable problem.
My point is the existing private DCs can be reconfigured for a different use. Building new gpus is not required to on-shore compute. We already have it. Obviously if the military started contracting out compute onto the hyperscalar clusters it would involve a host of changes. I wasn’t aware that they were letting India and China manage their infrastructure… That seems exceedingly unlikely? That relationship would obviously be severed if the compute was reconfigured for the military.
Yields are constantly improving on a monthly basis (around 7% per month, according to executives), so the capability is definitely there, but yields still need some time.
On the topic of warfare, wars are fought differently now. Compute will be mentioned in the same breath as total manufacturing output if a global war between superpowers erupts. In highly competitive industries this is already the case. Compute will be part of industrial mobilization in the same way that physical manufacturing or transportation capacity were mobilized in WWII. I’m not an expert on military computing but my intuition is that FLOPS are probably even more easily fungible into wartime compute than widget makers, and the US was able to go widgets->weapons on an unbelievable scale last time.
There are plenty of military uses for computing, but I also find it hard to believe anything but a handful of datacenters are or could be a major factor in anything but a completely one-sided war. They are very vulnerable targets that are easy to locate and require large amounts of power and cooling. I also just don't see the application: encryption capabilities far exceed the compute available for decryption, and the precision and speed of even 20-year-old tech far exceed the precision of anything you would want to control. Even with tangible benefits, say 10% more or fewer casualties than there would be otherwise, in an exchange with anything resembling a peer military force I'm not sure it matters, because everybody already loses.
Only half of the rail capacity that existed during the railroad boom times was still in use by the 1970s. Lots of it was never really used at all after various railroads went bankrupt. But your point still stands.
That said, I'm pretty sure in a compute-hungry AI world you aren't going to retire GPUs every 6 years anymore. Even if compute capacity jumps such that current H100s only represent 10% of total compute available in 6 years, you're still running those H100s until they turn to dust.
I just think it's hard to compare localized railroad infrastructure to globalized AI capacity and say one was more rational than the other on a % of GDP basis until the history actually plays out.
If you compare global investment in nuclear weapons, it would dwarf the Manhattan Project and AI thus far, and yet 99.99999% of nuclear weapons investment is just "wasted" capacity in that it has never been "used." But the value it has created in other ways (MAD-enabled peace) has surely been profitable on net. Nobody would have predicted this at the time.
Playing armchair internet pessimist about the "new thing" always makes you feel smart but is usually not a good idea since you always mis-price what you don't know about the future (which is almost everything).
This seems to show the railroads peaking around 9% of GDP. While that's lower than some of the other unsourced numbers I've seen, it's much higher than the numbers I was able to find support for myself at
The modern concept of GDP didn't exist back then, so all these numbers are calculated in retrospect with a lot of wiggle room. It feels like there's incentive now to report the highest possible number for the railroads, since that's the only thing that makes the datacenter investment look precedented by comparison.
But doesn't that overstate it in the other direction? Talking about investments in proportion to GDP back when any estimate of GDP probably wasn't a good measure of total economic output?
We're talking about the period before modern finance, before income taxes, back when most labor was agricultural... Did the average person shoulder the cost of railroads more than the average taxpayer today is shouldering the cost of F-35? (That's another line in Paul's post.)
The F-35 case is interesting. Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours, as they fill orders for US allies arming themselves with F-35s. US pilot training facilities are brimming with foreign pilots. It's the most successful export fighter since the F-16 and F-4, and presently the only means US allies have to obtain operational stealth combat technology.
What that means for the US is this: if the US had to fight a conventional war with a near-peer military today, the US actually has the ability to replace stealth fighter losses. The program isn't some near-dormant, low-rate production deal that would take a year or more to ramp up: it's an operating line at full-rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete training and global logistics system, all on the front burner.
If there is any truth to Gen Bradley's "Amateurs talk strategy, professionals talk logistics" line, the F-35 is a major win for the US.
> Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours ... it's an operating line at full-rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete logistics and training system, all on the front burner.
That's amazing. I had no idea the US was still capable of things like that.
I wonder if there's a way to get close to that for things that aren't new and don't have a lot of active orders. Like having all the equipment set up but idle at some facility, keeping assembly teams ready and trained, then cycling through each weapon and activating a couple of these dormant manufacturing programs (at random!) every year, almost as a drill. So there's the capability to spin up, say, F-22 production quickly when needed.
Obviously it'd cost money. But it also costs a lot of money to have fighter jets when you're not actively fighting a war. Seems like manufacturing readiness would be something an effective military would be smart to pay for.
"I had no idea the US was still capable of things like that."
It's more than just the US though. It's the demand from foreign customers that makes it possible. It's the careful balance between cost and capability that was achieved by the US and allies when it was designed.
Without those things, the program would peter out after the US filled its own demand, and allies went looking for cheaper solutions. The F-35 isn't exactly cheap, but allies can see the capability justifies the cost. Now, there are so many of them in operation that, even after the bulk of orders are filled in the years to come, attrition and upgrades will keep the line operating and healthy at some level, which fulfills the goal you have in mind.
Meanwhile, the F-35 equipped militaries of the Western world are trained to similar standards, operating similar and compatible equipment, and sharing the logistics burden. In actual conflict, those features are invaluable.
There are few peacetime US developed weapons programs with such a record. It seems the interval between them is 20-30 years.
It took a while to reach full production rate for the F-35. Partly because the supply chain (mostly US based because of the Buy American Act) had to come up to speed[0]. But also because there were running-changes being made to the plane, necessitating changes to the production line to accommodate them.
The F-22 production tooling is supposedly in storage at Sierra Army Depot. Why there and not at the boneyard at Davis-Monthan is an interesting question[1]. Spooling production of the F-22 back up will take less time than originally, but still won't be quick (a secure factory floor large enough has to be found, workforce knowledge has been lost, adding upgrades, etc.)
[0] Scattered across as many congressional districts as possible.
[1] I was at Sierra in the 80's on TDY and it was all Army and Army civilians. A USAF guy like me really stood out.
That's the problem with going too far using "money" or "GDP": you can roughly compare the WWII 45% of GDP spent with today (https://www.davemanuel.com/us-defense-spending-history-milit...) because even by WWII much was "financialized" in such a way that it appears in GDP (though things like victory gardens, barter, etc. would explicitly NOT be included without effort - maybe they do account for this?).
As you get further and further into the past you have to start trying to measure it using human labor equivalents or similar. For example, what was the cost of a Great Pyramid? How does the cost change if you consider the theory that it was somewhat of a "make work" project to keep a mainly agricultural society employed during the "down months" and prevent starvation via centrally managed granaries?
You don't even need to go that far back to run into issues. When I read Pride and Prejudice, I think Mr. Darcy was one of the richest people in England at around £10,000/year, but if you try to calculate his wealth in today's terms it wasn't some outrageous sum (Wikipedia is telling me ~£800,000/year). The thing is that the economy was totally different back then: labor cost practically nothing, but goods like furniture were really expensive and would be handed down for generations.
With £800K today, you may not even be able to afford the annual maintenance for his mansion and grounds. I knew somebody with a biggish yard in a small town and the garden was ~$40K/yr to maintain. Definitely not a Darcy estate either.
Thinking about it, an income of £800K is something like the interest on £10m.
£10,000 per year for Mr Darcy is 10,000 gold sovereigns per year. A gold sovereign at spot price today is about $1,100. So that’s over 10 million dollars per year in gold-equivalent wealth. Plenty to maintain his estate with.
Alternatively, £10,000 is 200,000 sterling silver shillings per year (20 shillings per pound) for him. A sterling shilling today is about $13.50 at spot price. So that’s $2.7million per year in silver-equivalent wealth. Still plenty!
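For anyone checking the arithmetic (the spot valuations are the parent's figures, not quotes from a price feed):

```python
INCOME_POUNDS = 10_000  # Mr Darcy's stated annual income

# Gold: one pound = one gold sovereign, taken as ~$1,100 at spot above
print(f"${INCOME_POUNDS * 1_100:,}/year gold-equivalent")    # $11,000,000

# Silver: 20 shillings to the pound, taken as ~$13.50 per shilling at spot
shillings = INCOME_POUNDS * 20
print(f"${shillings * 13.50:,.0f}/year silver-equivalent")   # $2,700,000
```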
Newsflash, old antique furniture from around that time is still really expensive even today. It was a hand-crafted specialty product, not run-of-the-mill IKEA stuff. If you compare the prices of single consumer goods while adjusting for inflation, they generally check out at least wrt. the overall ballpark. The difference is that living standards (and real incomes) back then for the average person were a lot lower.
Inflation is by definition the change in prices of a general basket of goods. Some things will outrun the basket and some things will underrun it. In general consumer durables have underrun, things like TVs and yes, sofas, are way way cheaper now than ever before. I'm not really sure why you would exclude IKEA type furniture, in most cases it's probably as good or better than a really old hand crafted one. If back then you needed to get an ultra luxury sofa but now you can get an IKEA one for the same general quality then that's a massive win for affordability even if the ultra luxury category still exists.
~£800,000/year compared to the median income in the current UK? Outrageous is relative, sure, but for most people out there it should be no surprise that they would see that as an outrageously odd distribution of wealth.
The point is that ~£800,000/year is high, even possibly "very high", but it is not "wealthiest man in Britain" high, and certainly nowhere near "hire as many people as worked for Darcy" high.
The big change is the end of any sort of backing in money. The Minneapolis Fed calculated consumer price index levels since 1800 here. [1] Of course that comes with all the asterisks we're speaking of here for data going back that far, but their numbers are probably at least quite reasonable. They found that from 1800 to 1950 the CPI never shifted more than 25 points from the starting base of 51, so it always stayed within +/- ~50% of that baseline. That's through the Civil War, both World Wars, Spanish Flu, and much more.
Then from 1971 (when the USD became completely unbacked) to present, it increased by more than 800 points, 1600% more than our baseline. And it's only increasing faster now. So the state of modern economics makes it completely incomparable to the past, because there's no precedent for what we're doing. But if you go back to just a bit before 1970, the economy would have of course grown much larger than it was in the past but still have been vaguely comparable to the past centuries.
And I always find it paradoxical. In basic economic terms we should all have much more, but when you look at the things people could afford on a basic salary, that does not seem to be the case. Somebody in the '50s going to college, picking up a used car, and then having enough money squirreled away to afford the down payment on their first home, all on the back of a part-time job, was a thing. It sounds like make-believe but it's real, and certainly a big part of the reason boomers were so out of touch with economic realities. Nowadays a part-time job wouldn't even cover tuition, which makes one wonder how it could be that labor cost practically nothing in the past, as you said. Which I'm not disputing, just pointing out the paradox.
It is notable that the median monthly rent was $35/month on a median income of $3,000, so ~14% of income was spent on rental housing. But it's interesting reading that report, because a significant focus was on the overcrowding "problem". Housing was categorized by number of rooms, not number of bedrooms. The median number of rooms was 4, and the median number of occupants >4 per unit (or more than 1 person per room). I don't think it's a stretch to say that the amount of space and facilities you get for your money today is roughly equivalent. Yes, a greater percentage of your income goes to housing, and yet we have far more creature comforts today than back in 1950: multiple TVs, cellphones, appliances, and endless amounts of other junk. We can buy many more goods (durable and non-durable) for a much lower percentage of our income.
What an interesting paper you found! Home ownership stats in contemporary times are quite misleading because of debt. Most home owners now are still paying rent in the form of a mortgage to a bank. In the '50s most home owners genuinely owned their homes "free and clear". The exact rate was 56% in 1951 per your paper (which was a local low), and now it's at 40%, which is a local high. And the contemporary demographics are all messed up: it's largely driven by older to elderly individuals in non-urban, low-income states.
As for number of occupants, the 50s had a sustainable fertility rate. That means, on average, every woman was having at least 2 kiddos. So a median 4 occupant house would be husband, wife, and 2 children living in a place with a master bedroom, kids room, a combined kitchen/dining room, and a living room. Bathrooms, oddly enough, did not count as rooms. So in modern parlance it'd mostly be a 2/2 for up to 14% of one person's median income, and 0% in most cases as most people 'really' owned their homes.
We definitely have lots more gizmos, but I feel like that's an exchange that relatively few people would make in hindsight.
I sometimes feel that the facts are all out there, but half the people pick one half the facts as causal and the other half pick the other half. Are home prices rising because people have fewer kids (and therefore more to spend on housing) or are people having fewer kids because house prices are rising (and therefore less to spend on kids)?
I suspect that it's a complex mixture of all possibilities, and you can only really look at trends and your own life - the one thing you can have something resembling understanding and control.
> Are home prices rising because people have fewer kids (and therefore more to spend on housing) or are people having fewer kids because house prices are rising (and therefore less to spend on kids)?
Maybe a false dichotomy? My suspicion is that home prices rise because more credit becomes available (and not only homes prices but the price of other assets). If you think about it in broader terms this explains what happens to the fruits of our increased productivity - lenders extend more credit as productivity rises thereby claiming the benefit for themselves. The working person is still stuck with a 40 hour week because despite being more productive they have more debt to service.
There's something there, definitely - reading "ordinary man's guide to the financial life" from different eras is informative; many of the older ones work really hard to convince you that a home loan is something worth getting and "you'll pay it off faster than you think" - now we have guides talking about "good debt" and "never pay it off".
I posted just that on the Twitter feed, but then I realized that the railroads started at the beginning of an industrial revolution, when labor was a far larger portion of GDP compared to industrial production. So it kind of makes sense that the first enabling technology consumed far more GDP than current investments do, even on a marginal basis.
Wild graphic. US spending on one flying killing machine (the F-35) is comparable to total spending on the Marshall plan to reconstruct Europe after WWII, or the interstate highway system, or all datacenters combined. Priorities!
And this is why I hate log scale graphs. Even in the cases where it does have a useful effect, 90%+ of people are still going to interpret it in a linear way and therefore make it massively misleading.
> Makes it a little less dramatic. But also shows what a big *'n deal the railroads were!
It also makes it more dramatic, consider the programs on the list and what they have in common.
* The Apollo program. A government-funded science project. No return on investment required.
* The Manhattan Project. A government-funded military project. No return on investment required.
* The F-35 program. A government-funded military project. No return on investment required.
* The ISS. A government-funded science project. No return on investment required.
* The Interstate Highway System. A government-funded infrastructure project. No return on investment required.
* The Marshall Plan. A government-funded foreign policy project. No return on investment required.
The actual return on investment for these projects is in the very long term of decades; Economic development, national security, scientific progress that benefits the entire country if not the entire world.
Consider the Marshall Plan in particular. It's a massive money sink, but its nature as a government project meant it could run at losses without significant economic risk and could aim for extremely long-term benefits. It had been paying dividends until January last year; 77 years.
And that dividend wasn't always obvious. Goodwill from Europe towards the US is what has prevented Europe from taking actions similar to China's around the US' Big Tech companies, many of which relied extensively on "dumping" to push European competitors out of business. A more hostile Europe would've taken much more protectionist measures and ended up much like China, with its own crop of tech giants.
And then there's the two programs left out: the railroads and the AI datacenters. Private enterprise that simply does not have the luxury of sitting on its ass waiting for benefits to materialize 50 years later.
As many other comments in this thread have already pointed out: When the US & European railroad bubbles failed, massive economic trouble followed.
OpenAI needs a (partial) return on investment as soon as this year, or their IPO risks failure. And if they don't get it, similar massive economic trouble is assured.
The search term is the "Railway Mania", which predominantly describes the UK's railroad bubble, with smaller similar booms on mainland Europe. (You will have to look up French and German sources for the best info on those.)
The bubble failed in the sense that massive commitments for new railways were made, and then the 1847 economic crisis caused investment to dry up, which collapsed the bubble and put a halt to the railroad construction boom. Those railway commitments never materialized, and stock market crashes followed.
I'm also being a little cheeky with what "massive economic trouble" entails; While the stock market was heavy on railroads and crashed right into a recession, the world in the mid-1800s was much less financialized so the consequences in absolute terms were less pronounced than a similar bubble-collapse would be today. As such, the main historical comparison is structural.
(Similarly, the AI bubble is likely to burst "by itself" unless OpenAI's IPO is truly catastrophically bad. What's more likely is that a recession happens and then the recession triggers a stock market collapse, and the two intensify each other. And so these historical examples of similar situations may prove illustrative.)
> and then the 1847 economic crisis ... the world in the mid-1800s was much less financialized so the consequences in absolute terms were less pronounced than a similar bubble-collapse would be today.
And yet 1848 was a very interesting year! Revolutionary even.
You're actually arguing those highly technical engineering projects provided nothing to humanity investing labor in them because they were not a financial success?
Just confirms my suspicion HN is not a forum for intellectual curiosity. It's been entirely subsumed by MBAs and wannabe billionaires.
> You're actually arguing those highly technical engineering projects provided nothing to humanity investing labor in them because they were not a financial success?
No. Re-read the comment.
I specifically say "No return on investment required" not "Has no return on investment". It didn't matter whether these projects earned back their money in the short term, or whether it takes the longer term of many decades.
The ISS hasn't earned back its $150 billion, and it won't for a pretty long time yet. That doesn't mean it's not a good thing for humanity. It just means that it'd be a bad idea to have the project run & funded by e.g. SpaceX. The project would've failed; you just can't get ROI on $150 billion within the timeframe required. SpaceX barely survived the cost of developing its rockets. (And observe how AI spending is currently crushing the profitability of the newly merged SpaceX-xAI.)
I'm not even saying "AI doesn't provide anything to humanity", I was saying that AI needs trillions of dollars in returns that do not appear to exist, and so it's likely to collapse.
The railroads and the interstate are arguably the biggest and broadest impact, especially in 2nd order effects (everything West of the Mississippi would be vastly different economically without them).
I am not an ai-booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.
I agree that AI will probably have bigger effects that we could possibly predict right now. But unlike past booms/bubbles, I suspect the infrastructure being built now won't be useful after it resolves. The railroads, interstate system, and dotcom fiber buildout are all still useful. AI will need to get more efficient to be useful as established technology, so the huge datacenters will be overbuilt. And almost none of the Nvidia chips installed in datacenters this year will still be in use in 5 years, if they're even still functional.
The era of the AI data center will be brief, because the models will get better and the computers will get more powerful, particularly on the desktop, laptop, and phone/tablet. The transition will be like going from mainframe computers to personal computers.
> I would not be surprised at AI having a similar enabling effect over the long term.
The big difference is that the current AI bubble isn't building durable infrastructure.
Building the railroads or the interstate was obscenely expensive, but 100+ years down the line we are still profiting from the investments made back then. Massive startup costs, relatively low costs to maintain and expand.
AI is a different story. I would be very surprised if any of the current GPUs are still in use only 20 years from now, and newer models aren't a trivial expansion of older ones either. Keeping AI going means continuously making massive investments, so it had better find a way to make a profit fast.
GPUs are consumables, not infrastructure. Model weights are the lasting thing.
It's always like that with software. You can still run an OS or a program made 20 years ago, in some cases that program may in fact have no modern replacements available (think niche domains) - meanwhile, in those 20 years, you've probably churned through 5-10 generations of computing hardware.
And I'm not an AI doomer, but hell no, give me another space program/station over this every single time and pretty please. We are not pioneering new engineering science or creating a pipeline of hard research and innovation that will spread in and better our everyday lives for the decades to come. We are overbuilding boring data centers packed with single-purpose chips that WILL BE obsolete within a couple years, for what? For the unhinged hope that LLM chatbots will somehow develop intelligence, and/or that people by the billions will want to pay a hefty price for dressed-up plagiarism machines. There is no indication that LLMs are a pathway to meaningful and transformative AI. Without that, there is no technical merit for the data centers being built currently to constitute future-proof infrastructure like highways and railroad networks did. There is no economical framework in which this somehow trickles down to or directly empowers the individual. This is a sham of ludicrous proportions, a sickening waste.
>There is no indication that LLMs are a pathway to meaningful and transformative AI.
Reality check, they are already astoundingly meaningful and transformative AI. They can converse in natural language, recall any common fact off the top of their heads, do research online and synthesize new information, translate between different human languages (and explain the nuances involved), translate a vague hand wavey description into working source code (and explain how it works), find security vulnerabilities, and draw SVGs of pelicans on bicycles. All in one singularly mind-blowing piece of tech.
The age of computers that just do what you tell them to, in plain language, is upon us! My God, just look at the front page! Are we on the same HN?
> Reality check, they are already astoundingly meaningful and transformative AI
The onus of proof regarding their meaningful and transformative nature is on you.
The largest niche LLMs have so far managed to carve for themselves is software code, with the jury still on the fence as to whether the productivity needle actually moved in one direction or the other, and the other, literal jury enshrining the fact that vibe-coded software is not copyrightable and becomes a public good; that should give pause to any company living off selling software or software-related services, as to whether they want to poison their well.
Web search hasn't been disrupted very much either, with users being quick to realise how hallucination-prone LLM summaries are (the fact that this is baked into the tech and practically unsolvable is one of the reasons I don't consider LLMs a significant stepping stone towards actual AI).
The age of computers that respond to voice commands was 10 years ago, with Siri, Alexa, and Google Assistant; nobody could have cared less then, and the fact that the same systems became less capable after re-inventing themselves on top of LLMs probably won't make people care more now.
We are in such different universes that I fear that this will not be a productive discussion; to my eyes LLMs are the most obviously socially transformative technology in my lifetime, up there with "internet" and "smartphones".
You say the largest niche is software production. Okay, let's talk about that. If the jury is still out then the jury is asleep. When ChatGPT first came out - the GPT3 days, years ago, before "vibe code" was even a term - an artist friend of mine who never wrote a line of code in his life straight-up vibe coded 3d visuals to accompany a performance of the band he was in. In Processing, which he'd never heard of until ChatGPT suggested it to him. Do you realize what this means? Normies can use computers now. Actually use, not just consume. You can describe what you want and the computer will do it - will even ask you for clarification if your specification is too ambiguous. Hell, it will even educate you about the subject matter, meeting you at exactly your level, in your favorite writing style.
If you are still thinking in terms of whether vibe coded software is "copyrightable" or whether LLMs are useful for "selling software", you are a blacksmith scoffing that cars are pointless because they don't need horseshoes. Your entire framework is obsolete.
You are so focused on productivity that you missed the boat on the shape of the problem.
Vibe-coded apps are just throwaway code that you don't understand and can't maintain. Most of our technology isn't about creating new things but about incremental improvement.
You are so focused on productivity, when programming's bottleneck has never been how many features you implement but how much you can understand your codebase.
Nobody cares about your internet slop, but they do care about verification of facts, which unfortunately requires human judgement.
LLMs are just a different version of the library code we already have, except without quality control by default.
>I am not an ai-booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.
Maybe? It seems as if the tech is starting to taper off already and AI companies are panicking and gaslighting us about what their newest models can actually do. If that's the case the industry is probably in trouble, or the world economy.
What rail, road or bridge in the US lasts 50 years? The maintenance of rail over 6 years costs more than replacing all the GPUs in a data center, even at their current markup.
Look up the deterioration curve and maintenance curve (J-shaped) for hard infra. TL;DR: the asset stays in good condition for 75% of its lifetime, i.e. decades, with light maintenance (the flat part of the J). By roads I mean highways, where most of the expense/work is in building out the base/sub-base (i.e. ballast for rail), which lasts decades. The US is uniquely bad at maintenance/prevention, but even then major assets do not deteriorate on a GPU timeline.
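A toy version of that J-shaped condition curve, just to illustrate the shape being described (the coefficients are made up, not from any engineering reference):

```python
def condition(age_years: float, lifetime_years: float = 60.0) -> float:
    """Asset condition from 1.0 (new) to 0.0 (end of life): slow decay on
    the flat part of the J (~75% of lifetime), then a steep drop-off."""
    t = age_years / lifetime_years
    if t <= 0.75:
        return 1.0 - 0.2 * (t / 0.75)        # light maintenance keeps it here
    if t >= 1.0:
        return 0.0
    return 0.8 * (1.0 - (t - 0.75) / 0.25)   # the hockey-stick decline

for age in (10, 30, 45, 55, 60):
    print(age, round(condition(age), 2))     # 0.96, 0.87, 0.8, 0.27, 0.0
```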
Railroad looks huge on the GDP (estimate) chart because the US transcontinental railroad was built in the mid 1800’s when the US economy was relatively tiny.
The railroad buildout was a lot more, idk, tangible. Most of that money was spent employing millions of people to smelt iron, lay track, build bridges, blow up mountains, etc. It’s a lot more exciting than a few freight loads of overpriced GPUs.
It seems a little silly to put 71 years of private-and-public-sector infrastructure development alongside something highly targeted like the Manhattan Project. It might make more sense to compare the Manhattan Project to the first transcontinental railroad, as a similar targeted but enormously ambitious project amounting to a major technical milestone.
Likewise I don't think it makes sense to compare post-ChatGPT hyperscaler data center construction with all 19th-century US railroad construction. Why not include the already considerable infrastructure of pre-AI AWS/Azure? The relevant economic change isn't "AI," it's having oodles of fast compute available online and a market demanding more of it. OTOH comparing these data centers to the Manhattan Project is wrong in the opposite direction: we should really be comparing a specific headline-grabber like Stargate.
This categorization is just a confusing mishmash. The real conclusion to draw here is that we tend to spend more on long-term and broadly-defined things than we do on specific projects with specific deadlines. Indeed.
This seems like a total category error. The Railroads are the only example that actually seems comparable, in being an infrastructure build out that's mostly done by a variety of private companies. Examples of things that would be worth comparing to the datacenter boom are factory construction and utilities (electrification in the first half of the 20th century, running water, gas pipes.)
For some reason this reminds me of people at work who walk up and say we did x bazillion things in n time, and then pause and expect us to express shock at how amazing that is and how much more productive they are than other teams. So what. Without a proper comparison to something equivalent I can't evaluate whether it's exceptional. I could treat each molecule as a thing and tell people how incredibly many things I eat on average per minute, but if I explain no one would find this to be exceptional.
Fwiw, Railroads were the reason for some of the biggest bank collapses in history. Panic of 1873 was literally called "The Great Depression" (until a greater depression hit). 20 years later was the Panic of 1893. Both were due to over-investment and a bubble bursting, and they took out tons of banks and businesses.
We're seeing exactly the same thing with AI, as there is massive investment creating a bubble without a payoff. We know that the value will lower over time due to how software and hardware both gets more efficient and cheaper. And so far there's no evidence that all this investment has generated more profit for the users of AI. It's just a matter of time until people realize and the bubble bursts.
And when the bubble does burst, what's going to happen? Most of the investment is from private capital, not banks. We don't know where all that private capital is coming from, so we don't know what the externalities will be when it bursts. (As just one possibility: if it takes out the balance sheets of hyperscalers and tech unicorns, and they collapse, who's standing on top of them that collapses next? About half the S&P 500 - so 30% of US households' wealth - but also every business built on top of those mega-corps, and all the people they employ) Since it's not banks failing, they probably won't be bailed out, so the fallout will be immediate and uncushioned.
Have you seen video of a slime mold searching for food? It grows like crazy in a bunch of simultaneous search paths, expending tons of energy following a rough directional gradient looking for food. Once one of the branches finds the food all of the other search paths shrivel up and die off. I think slime molds are much better analogies for these situations than bubbles.
Lol, a bit dramatic at the end. There will be a correction in stocks that were priced for AI-related growth.
But what I see is two big costs for America:
1) Less money being invested into risky AI projects in general, in both public (via cash flows from operations) and private markets
2) The large tech firms who participated in large capex spend related to AI projects won't be trusted with their cash balances - aka having to return more cash and therefore less money for reinvestment
All the hype and fanfare that draws in investment comes with a cost: you gotta deliver. People have an asymmetric relationship between gains and losses.
> We're seeing exactly the same thing with AI, as there is massive investment creating a bubble without a payoff.
> ...
> And so far there's no evidence that all this investment has generated more profit for the users of AI.
If you look around a bit, you will find evidence for both. Recent data finds pretty high success in GenAI adoption even as "formal ROI measurement" -- i.e. not based on "vibes" -- becomes common: https://knowledge.wharton.upenn.edu/special-report/2025-ai-a... (tl;dr: about 75% report positive RoI.)
The trustworthiness, salience and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chambers.
Preliminary evidence, but given this weird, entirely unprecedented technology is about 3+ years old and people are still figuring it out (something that report calls out) this is significant.
75% report positive ROI (and the VPs are much more "optimistic" than the middle managers who are closer to the work) - but how much ROI? 1%? The fact that they don't quote a figure at all is pretty telling. And that's the ROI of the people buying the AI services, which are often heavily subsidized. If it costs a billion dollars to give a mid-sized company a 1% ROI, that doesn't sound sustainable.
I would love to see another report that isn't a year old with actual ROI figures...
Can't say why they don't report exact numbers, but it may be because of a) confidentiality, b) RoI being very context dependent, and c) a wide spectrum of RoI across different dimensions, with some 9% even reporting negative RoI. This may make it hard to cite a single number, but the majority report "moderate" to "significant" RoI, whatever that means to them.
I'll add that I've seen mentions of similar reports from other sources like McKinsey and co. e.g. this one that claims actual revenue increase: https://www.mckinsey.com/featured-insights/week-in-charts/ge... -- I tend not to take these reports at face value, but I'm seeing multiple of them from various sources that tend to align.
As an aside, I just wanted to say, these are the kinds of discussions I was hoping to see here!
It’s not easy to quantify because you’re basically substituting or augmenting labor. How do you quantify an ROI on employees? You can look at profit of a project they’re hired to execute. But with AI, it’s mixed with the employees, so how do you distinguish the ROI of the two? With time, we might be able to make comparisons, but outside of very specific scenarios it’s difficult to quantify.
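One way teams try to make that attribution concrete is a controlled comparison, e.g. a difference-in-differences estimate; a minimal sketch, with all numbers made up for illustration:

    # Difference-in-differences sketch for attributing output to AI
    # when it is mixed with employee labor. All numbers are made up.
    before = {"ai_team": 100, "control_team": 100}  # output per quarter
    after  = {"ai_team": 130, "control_team": 110}

    uplift_ai      = after["ai_team"] - before["ai_team"]            # 30
    uplift_control = after["control_team"] - before["control_team"]  # 10
    print(f"uplift attributable to AI: {uplift_ai - uplift_control} units")

It's still confounded (team selection, Goodhart effects on the metric), which is part of why clean economy-wide AI ROI numbers are so hard to get.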
> The trustworthiness, salience and nuances of this report is worth discussing, but unfortunately reports like this gets no airtime in the HN and the media echo chamber.
It honestly just isn't that interesting. (Being most notable for people misunderstanding and misrepresenting the chart on page 46 of the report as being "ROI" rather than "ROI measurement")
In terms of ROI figures, it's really just a survey with the question "Based on internal conversations with colleagues and senior leadership, what has been the return on investment (ROI) from your organization's Gen AI initiatives to date?".
This doesn't mean much. It's not even dubiously-measured ROI data, it's not ROI data at all, it's just what the leadership thinks is true.
And that's a worrying thing to rely on, as it's well documented (and measured by the report's next question) that there's a significant discrepancy in how high level leadership and low-level leadership/ICs rate AI "ROI".
One of the main explanations of that discrepancy being Goodhart's law. A large number of companies are simply demanding AI productivity as a "target" now, with accusations of "worker sabotage" being thrown around readily. That makes good economy-wide data on AI ROI very hard to get.
That's fair, it is survey based, but it is apparently based on formal internal measurements. The full report (https://ai.wharton.upenn.edu/wp-content/uploads/2025/10/2025... -- slides 43 onwards) mentions that 75% of them have "integrated formal ROI measurement."
There is little discussion of what that means, however, and we really can't expect concrete numbers for what is going to be sensitive business data. Given that the report tracks it across multiple industries and functions ranging from IT to operations to legal to sales, it may be hard to put into sensible numbers, or to say how the measurements may be flawed or biased.
The other categorical error is that the American people paid the railroads a monumental subsidy to get the job done. We gave them almost 10% of the territory.
Given the size of some of these data centers, the incentives packages that local governments often give their developers, and the impact on the electric grid that can, in some cases, raise costs for other ratepayers, I'd say the comparison could be similar.
The one Google's putting in KC North is 500 acres [0] and there were $10 billion in taxable revenue bonds put up by the Port Authority to help with the cost.
This for a company that could pay for that in cash right now.
[0] https://fox4kc.com/news/google-confirms-its-behind-new-data-...
The problem is that once built, railroads provided economic value right off the bat.
I would love to hear about the economic value being generated by these LLMs. I think a couple years is enough time for us to start putting some actual numbers to the value provided.
Equating this buildout with LLMs is also a category error. Waymo (self-driving cars) depends on the same infrastructure, and there are a variety of other robotics programs which are actually functioning, you can see them in operation. They all require a lot of GPUs to train and run the models which operate the robotics.
The answers to both of those questions are pretty guarded trade secrets. Amazon and Google just to name a couple examples are very profitable companies and I would not bet on them investing all this money without real use cases where profit is likely. Amazon is adding thousands of new robots to their factories every year.
So your argument basically boils down to, the datacenter build out is not a waste of resources because if it was, these companies wouldn’t be building them.
I mean, your argument is that Google has had increasing revenue and profit for a decade, to the point that they have $400B in revenue + profit this year, and that they are going to lose money because they plan to spend $180B on capital projects for new data centers next year, because you know their business better than they do.
It's not clear that Waymo is an improvement over existing infrastructure so much as ensuring that fewer humans benefit from each car ride (which was already pathetically low).
Is Waymo a good example when Google has people in the third world sitting at a screen operating the vehicle from the other side of the world? How can its performance be trusted?
And it’s probably useless at the end of the day, because everything will reduce down from a centralized location to your desktop/laptop/tablet/phone. OpenAI, Microsoft, Meta, Google, and Oracle's dreams of a centralized computing location will not hold up.
There's a pretty big missing case in this comparison: nuclear weapons.
The US spent ~$12 trillion in ~2024 dollars on nuclear weapons between 1940 and 1996, and the vast majority of that spending was in the 1950s and early 1960s.
https://en.wikipedia.org/wiki/Nuclear_weapons_of_the_United_...
This is called paltering, which is lying by telling the truth.
The delivery systems are included when coming up with that number. So all those submarines, bombers, and ICBMs are also counted. All 3 systems of course are still valuable and useful without nuclear weapons.
I think all misgivings about AI would go away fast, if it solved one important problem for humanity. Carbon nanotubes for space elevators, sustainable nuclear fusion, or something in that ilk.
There's a video by Siliconversations [0] about it. Medicine is first and foremost limited by high-quality data, not intelligence. If OpenAI built a superhuman AGI tomorrow, it would not change a thing about the state of cancer treatment, at least not for a while.
Trying to design a cancer cure by setting a trillion alight on AI is like trying to achieve UBI by funneling citizens' taxes into Polymarket, so they may operate their free supermarket.
[0] https://www.youtube.com/watch?v=ijTxAfFUHkY
I don't think the above poster is talking about finding novel treatments, but rather that they're talking about aiding in diagnosis and navigating existing treatment options.
We always wish that our doctors would stay up to date on all of the current medical literature as they practice, and some of them do. In theory, AI systems could greatly accelerate a person's ability to retrieve and extract insights from the current body of knowledge.
Of course, that is highly fraught, but, in theory, I think I see what they're going for.
How can we be sure of that when we don't even know what improved "intelligence" might look like in this context? Especially given the increased importance of "big data" (genomics, proteomics, metabolomics etc.) to the field and the sheer amount of obscure data that's currently buried in all sorts of archival sources and might be resurfaced with some "intelligence".
Yes. But unfortunately that domain suffers from ambiguity which LLMs are bad at.
Medical treatment has never been about asking questions and getting perfect answers. Excellent doctors and nurse practitioners have a great intuition for which questions to ask based on cues during patient assessment.
What exactly does “personalized medical treatment” entail?
Writing prescriptions?
Ok, I can see how AI could theoretically do that (assuming it doesn’t hallucinate and kill a bunch of people). Oh and don’t think it’ll be so easy to give AI the legal authority to prescribe controlled substances. And insurance companies may take issue with expensive prescriptions written by a chat bot.
Perform surgeries? Stitch wounds?
That’s decades away. And that also opens a legal can of worms. Maybe the AI lawyers can figure something out.
None of those seem like they had a capital investment equivalent to 1% of GDP. Apparently the railroads were the only technological investment higher than AI when measured as a percentage of GDP.
Is this an appropriate spend and risk? I'm starting to feel as if we have been collectively glamoured by AI and are not making sound decisions on this.
I’ve had similar thoughts, but I’ve come around to this buildout being rational. All of the big ai labs are still jockeying for compute and are having trouble keeping up with inference demand.
Does anyone know what's included in "datacenter capex"? In particular, does that include spending for associated power generation? Because whether or not the AI craze pans out, if we've built a whole bunch of power plants (and especially solar, wind, hydro, etc) that would be a big win.
You can't run a data center on solar or wind (even w/ batteries included). Everything they're building runs on gas & coal like what Musk got running for xAI.
You can and _must_ if you want competitive costs. Musk famously overpaid in order to get speed of deployment.
I was reading geohot's musings about building a data center cost-effectively, and solar is _the_ way to get low energy costs. The problem is off-peak energy, but even with that... you might come out ahead.
And that dude is anything but a green fanatic. But he's a pragmatist.
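For a rough sense of that tradeoff, a back-of-envelope sketch; every number here is an illustrative assumption, not a quote from geohot or anyone else:

    # Back-of-envelope: $/MWh for a datacenter, solar+storage vs gas.
    solar_lcoe = 35.0      # $/MWh, utility-scale solar (assumed)
    storage_adder = 40.0   # $/MWh premium to firm up off-peak hours (assumed)
    direct_share = 0.6     # fraction of load served straight from panels (assumed)
    gas_lcoe = 90.0        # $/MWh, new gas turbine fleet (assumed)

    blended = (direct_share * solar_lcoe
               + (1 - direct_share) * (solar_lcoe + storage_adder))
    print(f"solar+storage: ${blended:.0f}/MWh vs gas: ${gas_lcoe:.0f}/MWh")
    # With these assumptions solar comes out ahead even after paying
    # the off-peak storage penalty.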
That’s because Rs let NIMBYs and the fossil fuel lobby call the shots, and Ds let NIMBYs and degrowthers call the shots. I bet China isn’t powering their datacenters with gas turbines
It's not totally clear that the gigantic push to run rail lines through undeveloped parts of North America "ahead of demand" for reasons of genocide (aka "white settlement"), especially the transcontinental routes, was the smartest investment, even leaving aside the horrific crime it represents. We probably would have gotten greater ROI connecting more developed places on a piecemeal basis and extending the rail network more slowly in the West (and probably even more rapidly in the developed East) instead of founding new towns along brand-new rail lines. There is a reason the federal government was so involved in the finance of these things: left alone, private Eastern capital would not have done things the way they were done, which was chiefly to "open the frontier" aka accelerate the genocide.
I really dislike the term hyperscaler. Comes off very insincere. They came up with it themselves, didn't they? What's the official definition supposed to be now? Companies that are setting up as many GPU/TPU server clusters as possible for a demand that's yet to exist?
Hyperscale exists as a term pre-LLM-hype. It mainly exists to describe the kind of datacenters that companies like Google and Amazon have been building for at least a decade now: very large, very highly integrated and customised hardware, with a focus on cloud deployment and management strategies. This is to distinguish from just a large datacenter built with commodity server parts from a set of vendors, i.e. the kinds of servers 99% of people will be able to lay their hands on. (Another way to put it is that if you're not writing your own BIOS/BMC/etc, you're probably not hyperscaling.)
>The term “hyperscale” first emerged in the late 1990s, heralding a paradigm shift in the world of computing. It was primarily used to describe the awe-inspiring scale and capabilities of data centers...
Does anyone have any plans for what to do with all these chips and things once they are obsolete? I can't imagine they are all just going to go to some scrap heap.
There’s a joke that in a couple of years, after spending trillions of dollars, burning mountains of coal to run country-sized datacenters and boiling all the oceans, we finally achieve AGI.
Then the first question we ask it is: 'How do we fix climate change?'
And it answers: 'you can start by unplugging me'
If we're counting, the USA was already pretty deep in the hole. Anybody that has experienced crippling debt understands there's a point of no return where you just embrace it.
Gentle reminder that the cost of producing well-formatted graphs is much, much lower than it used to be. We grew up in a world where the mere existence of this graph would prove that someone put a great deal of effort into making it, and now it does not. I have no specific reason to doubt the information, but if you want to have reliable epistemic practices, you can no longer treat random graphs you find on social media as presumptively true.
Crypto's a funny one economically as while there are some real costs like mining, a lot of the money is just swapped around like A makes a coin, B pays A a million for some but the million isn't really spent, just swapped between crypto idiots.
https://x.com/paulg/status/2045120274551423142
Makes it a little less dramatic. But also shows what a big **'n deal the railroads were!
You can imagine 5-10% likelihood worlds where the growth rate of new chips dramatically decreases for a decade due to a single black-swan event like Taiwan getting glassed, but that’s a temporary setback not a permanent blocker.
Again, I’m just looking at all the things that can obviously be built now, and just haven’t made it to the top of the list yet. I’m extremely confident that this todo list is already long enough that “this all fizzles to nothing” is basically excluded.
I think if model progress stops then everyone investing in ASI takes a big haircut, but the long-term stock market progression will look a lot like the internet after the dot com boom, ie the bloodbath ends up looking like a small blip in the rear view mirror.
I guess, a question for you - how do you think about coding agents? Don’t they already show AI is going to do more than “end up uninteresting”?
The problem with talking likelihood is that it's an interpretation game. I understand you think it's wholly unlikely that it all fizzles out, I could read that from your first post. I hope it's also clear that I do think it's likely.
That's the point where we have to just agree to disagree. We have no rapport. I have no reason to trust your judgment, and neither do you mine.
However I do feel a lot of this comes down to facts about the world now, eg whether Claude Opus is doing anything interesting, which are in principle places where you could provide some evidence or ideas, along the lines of the detail that I gave you.
My read so far is you are just saying “maybe it fizzles out” which is not going to persuade anyone who disagrees. Sure, “maybe”, especially if you don’t put probabilities on anything; that statement is not falsifiable.
> The problem with talking likelihood is that it's an interpretation game
I am open to updating my model in response to a causal argument, if you care to give more detail. I view likelihoods as the only way to make these sorts of conversations concrete enough that anyone could hope to update each other’s model.
And even if chatbot LLMs seem to be a dead end, they and other machine learning algos will be happy to use the data centers to create/discover a lot of stuff.
But AI proliferation is not stopping soon, because we've not picked up even the low hanging fruits just yet. Again, even if no new SOTA models were to be trained after today, there's years if not decades of R&D work into how to best use the ones we have - how to harness the big ones, where to embed the small ones, and of course, more fundamental exploration of the latent spaces and how they formed, to inform information sciences, cognitive sciences, and perhaps even philosophy.
And if that runs out or there is an Anti AI Revolution, we can still run those weather models and route planners on the chips once occupied by LLMs - just don't tell the proles that those too are AI, or it's guillotine o'clock again.
I think my sense of "dead end" would entail none of those directions panning out into anything interesting. You would "explore the latent spaces" only to find nothing of value. Embedding the LLM models wouldn't end up doing anything useful for whatever reason, and philosophy would continue on without any change.
I think we are saying the same thing. I just think the pull back on AI will be dramatic unless something amazing happens very soon.
Why would I pull back?
But current-gen AIs are like eternal juniors: never quite ready to operate independently, never learning to become the expert that you are, practically frozen in time at the capabilities gained during training. Yet these LLMs replaced the first few rungs of the ladder, so human juniors have a canyon to jump if they want the same progression you had. I’m seeing inexperienced people just using AI like a magic 8-ball: “The AI said whatever”. [0] LLMs are smart and cheap enough to undercut human juniors, especially in the hands of a senior. But they’re too dumb to ever become a senior. Where’s the big money in that? What company wants to pay for an “eternal juniors” workforce, when whatever it saves on payroll goes to procuring external seniors that it is no longer producing internally?
So I’m not too sure a generation of people who have to compete against the LLMs from day 1 will really be producing “so much more” of value later on. Maybe a select few will. Without a big jump in model quality we might see “always junior” LLMs without seniors to enhance. This is not sustainable.
And you enhancing your carpentry skills for your free time isn’t what pays for the datacenters and some CEO’s fat paycheck.
[0] I hire trainees/interns every year, and pore through hundreds of CVs and interviews for this. The quality of a significant portion of them has gone way down in the past years, coinciding with LLMs gaining popularity.
Perhaps if we used something exotic like solid gold cookware, there might be some amazing benefits that people would love.
But it would be far from practical without being wildly subsidized…
With AI, it feels too much like the “grownups” are acting worse than the kids…
A firm that sees rising operating expenses but not enough increase in revenue will start to cut back on spending on LLMs and become very frugal (e.g. rationing).
The GPUs are the shovels, not the project. AI at any capability will retain that capability forever. It only gets reduced in value by superior developments, which are built upon technologies that the previous generation developed.
If anything, the GPUs are the steel that the bridge is made of. Each beam can be replaced, but if too many fail the bridge is impassible. A bridge with a 6 year lifespan for each beam is insane.
A less literal example is the conquistadors: their shovels were ships, horses, gunpowder, and steel. You can look at Spanish records from the Council of the Indies archive and any time treasures were discovered, the price of each skyrocketed to the point where only the wealthiest hidalgos and their patrons could afford to go on such adventures. I.e. the cost of a ship capable of a cross Atlantic voyage going from 100k pieces of eight to over a million in the span of only a few years (predating the treasure fleet inflation!)
Gold rushes create demand shocks, and anyone who is a supplier to that demand makes bank, regardless of whether it's GPUs or “shovels”.
Today this is real estate. And it's something people keep forgetting when arguing that ${whatever breakthrough or just more competition} will make ${some good or service} cheaper for consumers: prices of other things elsewhere will rise to compensate and consume any average surplus. Money left on the table doesn't stay there for long.
In three years the current generation of GPUs will be 50% or more faster. In six years you're talking more than 100% faster. For the same energy costs.
If you're running a GPU data center on six year old GPUs, your cost to operate per sellable unit of work is double the cost of a competitor.
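That claim is just arithmetic; a minimal sketch, taking the parent comment's speedup at face value and assuming equal power draw and an assumed energy price:

    # Energy cost per sellable unit of work, old vs new GPU generation.
    power_kw = 1.0       # per GPU, assumed equal across generations
    price = 0.08         # $/kWh, assumed
    old_rate, new_rate = 1.0, 2.0   # work units/hour; "100% faster" in 6 years

    cost_old = power_kw * price / old_rate
    cost_new = power_kw * price / new_rate
    print(f"old: ${cost_old:.3f}/unit, new: ${cost_new:.3f}/unit, "
          f"{cost_old / cost_new:.0f}x cost disadvantage")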
H100 to GB200 saw a 50x increase in efficiency, for example.
Nvidia only advertises 25x efficiency. And that is their word...
But the point is — you don’t decommission profit generators just because a competitor has a lower cost structure. You run things until it is more profitable for you to decommission them.
Not necessarily. Depends entirely on the value of the transport that the bridge enables.
Not really. The base training data cutoff will quickly render models useless as they fail to keep up with developments.
Translating some Farsi news articles about the war was hilarious, Gemini Pro got into a panic. ChatGPT either accused me of spreading fake news, or assumed this was some sort of fantasy scenario.
For coding I care mostly about reasoning ability, which is uncorrelated with the cutoff.
RS-25 - designed as the HG-3 during the '60s for the Saturn V, manufactured for the Space Shuttle, refurbished for SLS, and just launched last month.
Vehicle Assembly Building - built for Saturn V launches, it has been in active use ever since and continues today.
Crawler-transporters - Hans and Franz were built in 1966 for Apollo and are still used for launches.
There are plenty of other examples from Apollo program of actual hardware being repurposed and used for later missions.
In other mega space projects, Hubble is still doing active research 35 years after launch, and Voyager is sending data close to 50 years later.
It is a whole other topic whether they should be used, how NASA is funded, and why programs like SLS or the Shuttle are so expensive, and so forth.
The point is these megaprojects had a long lifetime of value, albeit with higher maintenance costs for the tech-heavy ones like Apollo than, say, a bridge or a dam.
Imagine this world: the bubble "pops" in a couple years. The GPUs stick around for a few more years after that. At the end, we pretty much don't train new foundation models anymore - no one wants to spend the money on the hardware needed to make a real advance.
People continue to refine, distill, and optimize the existing foundation models for the next century or two, just like people keep laying new track over old railway right of ways.
In the current generation there are plenty of questions around:
- viability of training-to-inference cascades (the key to extended life), given custom ASICs like Cerebras's hitting production early this year.
- energy efficiency of older chips in tight energy environments; grid capacity constraints favor running newer, more efficient chips, ignoring perhaps a short-term (< 1 year) price shock due to war.
- MTBF: compared to older GPUs, modern nodes are 8-GPU clusters built on 2/3 nm processes and dependent on HBM memory; the tolerances are much tighter, especially for training.
- new DCs being spun up under less-than-ideal conditions due to permitting, part supply and other constraints, which will impact the operating environment.
Notwithstanding all these issues, and even taking a generous 10-year useful life, the expenses dwarf every megaproject before them.
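To make that concrete, a quick amortization sketch; the capex figure is a placeholder, only the useful-life ratios matter:

    # Amortized annual cost = capex / useful life.
    capex_billions = 400   # placeholder, same nominal spend for each asset
    lifetimes = {
        "GPUs (6y book life)": 6,
        "GPUs (generous 10y)": 10,
        "fiber": 30,
        "rail/roads/bridges": 75,
    }
    for asset, years in lifetimes.items():
        print(f"{asset}: ${capex_billions / years:.0f}B/year")
    # Same spend, over 12x the yearly depreciation of rail-grade assets.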
Will it be worth the cost of electricity to run them when newer chips deliver more flops per watt?
A typical node today has 8 GPUs; you have to keep replacing failed GPUs at higher frequencies by cannibalizing parts from other GPUs, as nobody is selling new GPUs of that model anymore.
In addition to outright failure, there are higher error rates in computation; in graphics this tends to show up as flickers, screen artifacts and so on.
Azure operated K80s and P100s for 9 and 7 years respectively, but those were running as 2-GPU nodes and of course were much simpler compared to today’s HBM behemoths on 2/5 nm process nodes. Google operates their custom TPU ASICs for about 8-9 years.
With custom inference ASICs like Cerebras hitting production, the cascading of NVIDIA chips from training to inference to get the 5-6 year useful life is also not clear.
What other uses do GPUs have that are critical...? lol
In addition to your points, this is why I always laugh when people do backward comparisons. What characteristics do they share in common? Very little.
Sure, LLMs can kind of put together a prototype of some CRUD app, so long as it doesn’t need to be maintainable, understandable, innovative or secure. But they excel at persisting until some arbitrary well defined condition is met, and it appears to be the case that “you gain entry to system X” works well as one of those conditions.
Given the amount of industrial infrastructure connected to the internet, and the ways in which it can break, LLMs are at some point going to be used as weapons. And it seems likely that they’ll be rather effective.
FWIW, people first saw TNT as a way to dye things yellow, and then as a mining tool. So LLMs starting out as chatbots and then being seen as (bad) software engineers does put them in good company.
Comical. China can continue innovating on GPUs, and all this existing spend to stock up on compute is a waste. Again, comical. Moreover, China has energy capacity that the US does not. Meaning all those GPUs that deliver less performance per watt? Yep, going in the bin.
So yeah.. carry on telling me how this is going to yield some supreme advantage lmao.
Unclassified public cloud GPUs are completely useless when your warfighting workloads are at the SECRET level or above.
I think it’s maybe plausible that private compute feels similar in the next do-or-die global war.
[1] https://eh.net/encyclopedia/the-american-economy-during-worl...
Even if private compute were at a level of maturity where you could use it for classified workloads, with the infrastructure being managed by someone in India or China, securely getting data into and out of it is still a mostly unsolvable problem.
GPUs are essential to every kind of scientific and engineering simulation you can think of. AI-accelerated simulations are a huge deal now.
Now compare that with the life of a railroad. Amusing.
That said, I'm pretty sure in a compute-hungry AI world you aren't going to retire GPUs every 6 years anymore. Even if compute capacity jumps such that current H100s only represent 10% of total compute available in 6 years, you're still running those H100s until they turn to dust.
I just think it's hard to compare localized railroad infrastructure to globalized AI capacity and say one was more rational than the other on a % of GDP basis until the history actually plays out.
If you compare global investment in nuclear weapons it would dwarf the manhattan project and AI thus far, and yet, 99.99999% of nuclear weapons investment is just "wasted" capacity in that it has never been "used." But the value it has created in other ways (MAD-enabled peace) has surely been profitable on net. Nobody would have predicted this at the time.
Playing armchair internet pessimist about the "new thing" always makes you feel smart but is usually not a good idea since you always mis-price what you don't know about the future (which is almost everything).
https://news.ycombinator.com/item?id=44805979
The modern concept of GDP didn't exist back then, so all these numbers are calculated in retrospect with a lot of wiggle room. It feels like there's incentive now to report the highest possible number for the railroads, since that's the only thing that makes the datacenter investment look precedented by comparison.
We're talking about the period before modern finance, before income taxes, back when most labor was agricultural... Did the average person shoulder the cost of railroads more than the average taxpayer today is shouldering the cost of F-35? (That's another line in Paul's post.)
What that means for the US is this: if the US had to fight a conventional war with a near-peer military today, the US actually has the ability to replace stealth fighter losses. The program isn't some near-dormant, low-rate production deal that would take a year or more to ramp up: it's an operating line at full-rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete training and global logistics system, all on the front burner.
If there is any truth to Gen Bradley's "Amateurs talk strategy, professionals talk logistics" line, the F-35 is a major win for the US.
That's amazing. I had no idea the US was still capable of things like that.
I wonder if there's a way to get close to that for things that aren't new and don't have a lot of active orders. Like have all the equipment set up but idle at some facility, keep assembly teams ready and trained, then cycle through each weapon and activate a couple of these dormant manufacturing programs (at random!) every year, almost as a drill. So there's the capability to spin up, say, F-22 production quickly when needed.
Obviously it'd cost money. But it also costs a lot of money to have fighter jets when you're not actively fighting a war. Seems like manufacturing readiness would be something an effective military would be smart to pay for.
It's more than just the US though. It's the demand from foreign customers that makes it possible. It's the careful balance between cost and capability that was achieved by the US and allies when it was designed.
Without those things, the program would peter out after the US filled its own demand, and allies went looking for cheaper solutions. The F-35 isn't exactly cheap, but allies can see the capability justifies the cost. Now, there are so many of them in operation that, even after the bulk of orders are filled in the years to come, attrition and upgrades will keep the line operating and healthy at some level, which fulfills the goal you have in mind.
Meanwhile, the F-35 equipped militaries of the Western world are trained to similar standards, operating similar and compatible equipment, and sharing the logistics burden. In actual conflict, those features are invaluable.
There are few peacetime US developed weapons programs with such a record. It seems the interval between them is 20-30 years.
The F-22 production tooling is supposedly in storage at Sierra Army Depot. Why there and not at the boneyard at Davis-Monthan is an interesting question[1]. Spooling production of the F-22 back up will take less time than originally, but still won't be quick (a secure factory floor large enough has to be found, workforce knowledge has been lost, adding upgrades, etc.)
[0] Scattered across as many congressional districts as possible.
[1] I was at Sierra in the 80's on TDY and it was all Army and Army civilians. A USAF guy like me really stood out.
Until we run out of materials
https://mwi.westpoint.edu/minerals-magnets-and-military-capa...
As you get further and further into the past you have to start trying to measure it using human labor equivalents or similar. For example, what was the cost of a Great Pyramid? How does the cost change if you consider the theory that it was somewhat of a "make work" project to keep a mainly agricultural society employed during the "down months" and prevent starvation via centrally managed granaries?
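As a sketch of that labor-equivalent approach (workforce, duration, and wage are loudly assumed round numbers, not settled Egyptology):

    # Pricing the Great Pyramid in labor equivalents. All inputs assumed.
    workers = 20_000    # assumed average workforce
    years = 20          # assumed construction duration
    wage = 40_000       # assumed fully-loaded modern annual wage, USD

    labor_years = workers * years
    print(f"{labor_years:,} labor-years "
          f"~= ${labor_years * wage / 1e9:.0f}B at modern wages")
    # ~400,000 labor-years ~= $16B -- and the make-work theory would book
    # part of that as famine insurance rather than construction cost.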
With £800K today, you may not even be able to afford the annual maintenance for his mansion and grounds. I knew somebody with a biggish yard in a small town and the garden was ~$40K/yr to maintain. Definitely not a Darcy estate either.
Thinking about it, an income of £800K is something like the interest on £10m.
Alternatively, £10,000 is 200,000 sterling silver shillings per year (20 shillings per pound) for him. A sterling shilling today is about $13.50 at spot price. So that’s $2.7million per year in silver-equivalent wealth. Still plenty!
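Spelling out that arithmetic (the per-shilling spot value is the figure assumed above):

    # Darcy's income in silver-equivalent dollars, per the figures above.
    income_pounds = 10_000
    shillings_per_pound = 20      # pre-decimal sterling
    shilling_spot_usd = 13.50     # assumed silver value per sterling shilling

    shillings = income_pounds * shillings_per_pound
    print(f"{shillings:,} shillings/year "
          f"~= ${shillings * shilling_spot_usd:,.0f}/year in silver")
    # 200,000 shillings/year ~= $2,700,000/year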
https://en.wikipedia.org/wiki/Income_in_the_United_Kingdom
Then from 1971 (when the USD became completely unbacked) to present, it increased by more than 800 points, 1600% more than our baseline. And it's only increasing faster now. So the state of modern economics makes it completely incomparable to the past, because there's no precedent for what we're doing. But if you go back to just a bit before 1970, the economy would have of course grown much larger than it was in the past but still have been vaguely comparable to the past centuries.
And I always find it paradoxical. In basic economic terms we should all have much more, but when you look at the things that people could afford on a basic salary, that does not seem to be the case. Somebody in the 50s going to college, picking up a used car, and then having enough money squirreled away to afford the down payment on their first home -- all on the back of a part-time job -- was a thing. It sounds like make-believe but it's real, and certainly a big part of the reason boomers were so out of touch with economic realities. Nowadays a part-time job wouldn't even be able to cover tuition, which makes one wonder how it could be that labor cost practically nothing in the past, as you said. Which I'm not disputing - just pointing out the paradox.
https://www.minneapolisfed.org/about-us/monetary-policy/infl...
It is notable that the median monthly rent was $35/month on a median income of $3000, so ~15% of income spent on rental housing. But it's interesting reading that report because a significant focus was on the overcrowding "problem". Housing was categorized by number of rooms, not number of bedrooms. The median number of rooms was 4, and the median number of occupants >4 per unit (or more than 1 person per room). I don't think it's a stretch to say that the amount of space and facilities you get for your money today is roughly equivalent. Yes, a greater percentage of your income goes to housing, and yet we have far more creature comforts today than back in 1950--multiple TVs, cellphones, appliances, and endless amounts of other junk. We can buy many more goods (durable and non-durable) for a much lower percentage of our income.
There's no simple story here.
As for number of occupants, the 50s had a sustainable fertility rate. That means, on average, every woman was having at least 2 kiddos. So a median 4 occupant house would be husband, wife, and 2 children living in a place with a master bedroom, kids room, a combined kitchen/dining room, and a living room. Bathrooms, oddly enough, did not count as rooms. So in modern parlance it'd mostly be a 2/2 for up to 14% of one person's median income, and 0% in most cases as most people 'really' owned their homes.
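The rent-share arithmetic from those 1950 figures, spelled out:

    # Median rent as a share of median income, per the census figures above.
    rent_monthly, income_annual = 35, 3000
    print(f"rent share of income: {rent_monthly * 12 / income_annual:.0%}")  # 14%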
We definitely have lots more gizmos, but I feel like that's an exchange that relatively few people would make in hindsight.
I suspect that it's a complex mixture of all possibilities, and you can only really look at trends and your own life - the one thing you can have something resembling understanding and control.
Maybe a false dichotomy? My suspicion is that home prices rise because more credit becomes available (and not only homes prices but the price of other assets). If you think about it in broader terms this explains what happens to the fruits of our increased productivity - lenders extend more credit as productivity rises thereby claiming the benefit for themselves. The working person is still stuck with a 40 hour week because despite being more productive they have more debt to service.
It also makes it more dramatic; consider the programs on the list and what they have in common.
* The Apollo program. A government-funded science project. No return on investment required.
* The Manhattan Project. A government-funded military project. No return on investment required.
* The F-35 program. A government-funded military project. No return on investment required.
* The ISS. A government-funded science project. No return on investment required.
* The Interstate Highway System. A government-funded infrastructure project. No return on investment required.
* The Marshall Plan. A government-funded foreign policy project. No return on investment required.
The actual return on investment for these projects is in the very long term of decades: economic development, national security, scientific progress that benefits the entire country if not the entire world.
Consider the Marshall Plan in particular. It's a massive money sink, but its nature as a government project meant it could run at losses without significant economic risk and could aim for extremely long-term benefits. It had been paying dividends until January last year; 77 years.
And that dividend wasn't always obvious; goodwill from Europe towards the US is what has prevented Europe from taking similar actions as China around the US' Big Tech companies. Many of them relied extensively on 'dumping' to push European competitors out of business; a more hostile Europe would've taken much more protectionist measures and ended up much like China, with its own crop of tech giants.
And then there's the two programs left out: the railroads and the AI datacenters. Private enterprise simply does not have the luxury of sitting on its ass waiting for benefits to materialize 50 years later.
As many other comments in this thread have already pointed out: When the US & European railroad bubbles failed, massive economic trouble followed.
OpenAI's need for (partial) return on investment is as short as this year, or their IPO risks failure. And if they don't deliver, similar massive economic trouble is assured.
Can you explain that? I really have no idea what you are referring to?
The bubble failed in the sense that massive commitments for new railways were made, and then the 1847 economic crisis caused investment to dry up, which collapsed the bubble and put a halt to the railroad construction boom. Those railway commitments never materialized, and stock market crashes followed.
I'm also being a little cheeky with what "massive economic trouble" entails; While the stock market was heavy on railroads and crashed right into a recession, the world in the mid-1800s was much less financialized so the consequences in absolute terms were less pronounced than a similar bubble-collapse would be today. As such, the main historical comparison is structural.
(Similarly, the AI bubble is likely to burst "by itself" unless OpenAI's IPO is truly catastrophically bad. What's more likely is that a recession happens and then the recession triggers a stock market collapse, which then intensify each other. And so these historical examples of similar situations may prove illustrative.)
And yet 1848 was a very interesting year! Revolutionary even.
Just confirms my suspicion HN is not a forum for intellectual curiosity. It's been entirely subsumed by MBAs and wannabe billionaires.
No. Re-read the comment.
I specifically say "No return on investment required" not "Has no return on investment". It didn't matter whether these projects earned back their money in the short term, or whether it takes the longer term of many decades.
The ISS hasn't earned back its $150 billion, and it won't for a pretty long time yet. Doesn't mean it's not a good thing for humanity. Just means that it'd be a bad idea to have the project run & funded by e.g. SpaceX. The project would've failed, you just can't get ROI on $150 billion within the timeframe required. SpaceX barely survived the cost of developing its rockets. (And observe how AI spending is currently crushing the profitability of the newly-merged SpaceX-xAI.)
I'm not even saying "AI doesn't provide anything to humanity", I was saying that AI needs trillions of dollars in returns that do not appear to exist, and so it's likely to collapse.
I am not an ai-booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.
The big difference is that the current AI bubble isn't building durable infrastructure.
Building the railroads or the interstate was obscenely expensive, but 100+ years down the line we are still profiting from the investments made back then. Massive startup costs, relatively low costs to maintain and expand.
AI is a different story. I would be very surprised if any of the current GPUs are still in use only 20 years from now, and newer models aren't a trivial expansion of an older model either. Keeping AI going means continuously making massive investments - so it had better find a way to make a profit fast.
It's always like that with software. You can still run an OS or a program made 20 years ago, in some cases that program may in fact have no modern replacements available (think niche domains) - meanwhile, in those 20 years, you've probably churned through 5-10 generations of computing hardware.
Models are technologies. Without the GPUs the technology is not accessible.
You sound like someone who thinks they have a strong understanding of economics when they don't.
Looks to me like, as with a drill bit, a GPU could be reasonably classified as either a consumable or a factor of production.
This is because GPUs wear out and fail; the smaller the features, the faster electromigration kills them.
Reality check: they are already astoundingly meaningful and transformative AI. They can converse in natural language, recall any common fact off the top of their heads, do research online and synthesize new information, translate between different human languages (and explain the nuances involved), translate a vague hand-wavey description into working source code (and explain how it works), find security vulnerabilities, and draw SVGs of pelicans on bicycles. All in one singularly mind-blowing piece of tech.
The age of computers that just do what you tell them to, in plain language, is upon us! My God, just look at the front page! Are we on the same HN?
The onus of proof regarding their meaningful and transformative nature is on you.
The largest niche LLMs have so far managed to carve for themselves is software code, with the jury still on the fence as to whether the productivity needle actually moved in one direction or the other, and the other, literal jury enshrining the fact that vibe-coded software is not copyrightable and becomes a public good; that should give pause to any company living off selling software or software-related services as to whether they want to poison their well.
Web search hasn't been disrupted very much either, with users being quick to realise how hallucination-prone LLM summaries are (the fact that this is baked into the tech and practically unsolvable is one of the reasons I don't consider LLMs a significant stepping stone towards actual AI).
The age of computers that respond to voice orders was 10 years ago, with Siri, Alexa and Google Assistant; nobody could have cared less then, and the fact that the same systems became less capable after re-inventing themselves on top of LLMs probably won't make people care more now.
You say the largest niche is software production. Okay, let's talk about that. If the jury is still out then the jury is asleep. When ChatGPT first came out - the GPT3 days, years ago, before "vibe code" was even a term - an artist friend of mine who never wrote a line of code in his life straight-up vibe coded 3d visuals to accompany a performance of the band he was in. In Processing, which he'd never heard of until ChatGPT suggested it to him. Do you realize what this means? Normies can use computers now. Actually use, not just consume. You can describe what you want and the computer will do it - will even ask you for clarification if your specification is too ambiguous. Hell, it will even educate you about the subject matter, meeting you at exactly your level, in your favorite writing style.
If you are still thinking in terms of whether vibe coded software is "copyrightable" or whether LLMs are useful for "selling software", you are a blacksmith scoffing that cars are pointless because they don't need horseshoes. Your entire framework is obsolete.
Vibe-coded apps are just throwaway code that you don't understand and can't maintain. Most of our technology isn't about creating new things but about incremental improvement.
You are so focused on productivity, when programming's bottleneck has never been how many features you implement but how well you can understand your codebase.
Nobody cares about your internet slop, but they care about verification of facts, which unfortunately requires human judgement.
> AI companies are panicking and gaslighting us about what their newest models can actually do
I think they have been gaslighting us from the beginning.
Like Madoff, they’re desperate to pump their Ponzi scheme for as long as they can.
Tulips: weeks
GPUs: 6 years
Fiber: 20-50 years
Rail, roads, bridges: 50-100+ years
Hyperscalers closer to tulips than other hard infra.
The only reason any "maintenance" on them is expensive is corruption, which at the municipal level rivals the current administration in some places.
LLMs + data centres, on the other hand...
> Most of the investment is from private capital, not banks.
It's just banks investing in private equity firms. So it's still banks, just by proxy, due to 2008 regulation.
> the VPs are much more "optimistic" than the middle managers who are closer to the work
All the middle managers are afraid to say anything though, so go go go.
It honestly just isn't that interesting. (Being most notable for people misunderstanding and misrepresenting the chart on page 46 of the report as being "ROI" rather than "ROI measurement")
In terms of ROI figures, it's really just a survey with the question "Based on internal conversations with colleagues and senior leadership, what has been the return on investment (ROI) from your organization's Gen AI initiatives to date?".
This doesn't mean much. It's not even dubiously-measured ROI data, it's not ROI data at all, it's just what the leadership thinks is true.
And that's a worrying thing to rely on, as it's well documented (and measured by the report's next question) that there's a significant discrepancy in how high level leadership and low-level leadership/ICs rate AI "ROI".
One of the main explanations for that discrepancy is Goodhart's law. A large number of companies are simply demanding AI productivity as a "target" now, with accusations of "worker sabotage" being thrown around readily. That makes good economy-wide data on AI ROI very hard to get.
There is little discussion of what that means, however, or of how the measurements may be flawed or biased. But we really can't expect concrete numbers for what is going to be sensitive business data, and given that the report tracks it across multiple industries and functions ranging from IT to operations to legal to sales, it may be hard to put into sensible numbers.
The one Google's putting in KC North is 500 acres [0], and the Port Authority put up $10 billion in taxable revenue bonds to help with the cost.
This, for a company that could pay for it in cash right now.
[0] https://fox4kc.com/news/google-confirms-its-behind-new-data-...
Again, they have the cash to buy that land and develop it without any further consideration beyond permits and planning.
I would love to hear about the economic value being generated by these LLMs. I think a couple years is enough time for us to start putting some actual numbers to the value provided.
And what is the ROI on either of those right now?
Got it.
The claim isn't "google said so". It's "google is doing so". It's a claim to incentives and rational actors, or perhaps to revealed preferences.
If you want to get on a high horse around logical fallacies, make sure to understand them, else you reveal... something else, about yourself.
Suggesting otherwise is definitionally argument from authority.
You know you can just admit you’re a sophist and we can move on.
Learn basic English before moving on to the big words :)
Only if they were laid on a sensible route, completed on budget and on time, and savvily operated. Many railroads went bust.
We aren't even getting infrastructure out of it; they're just powering it with gas turbines.
The US spent ~$12 trillion in ~2024 dollars on nuclear weapons between 1940 and 1996, and the vast majority of that spending was in the 1950s and early 1960s.
https://en.wikipedia.org/wiki/Nuclear_weapons_of_the_United_...
The delivery systems are included in that number, so all those submarines, bombers, and ICBMs are also counted. All three systems, of course, are still valuable and useful without nuclear weapons.
Trying to design a cancer cure by setting a trillion alight on AI is like trying to achieve UBI by funneling citizens' taxes into Polymarket so it can operate a free supermarket.
[0] https://www.youtube.com/watch?v=ijTxAfFUHkY
We always wish that our doctors would stay up to date on all of the current medical literature as they practice, and some of them do. In theory, AI systems could greatly accelerate a person's ability to retrieve and extract insights from the current body of knowledge.
Of course, that is highly fraught, but, in theory, I think I see what they're going for (a toy sketch of the retrieval piece below).
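Not anything from the thread - just a minimal sketch of what that retrieval piece could look like, assuming a local list of abstracts and the sentence-transformers package; the model name and the corpus here are purely illustrative:

    # Toy sketch: semantic search over a handful of made-up abstracts.
    # Assumes `pip install sentence-transformers numpy`.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    abstracts = [
        "GLP-1 receptor agonists reduce cardiovascular events in type 2 diabetes.",
        "Checkpoint inhibitors show durable responses in a subset of melanomas.",
        "Statin therapy lowers LDL cholesterol and major coronary events.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
    corpus_emb = model.encode(abstracts, normalize_embeddings=True)

    def top_k(question, k=2):
        # With normalized embeddings, a dot product is cosine similarity.
        q = model.encode([question], normalize_embeddings=True)[0]
        scores = corpus_emb @ q
        return [(float(scores[i]), abstracts[i]) for i in np.argsort(-scores)[:k]]

    for score, text in top_k("What reduces heart attack risk in diabetic patients?"):
        print(f"{score:.2f}  {text}")

The lookup is the easy part; the fraught bits are corpus quality, keeping the model's synthesis grounded in what was actually retrieved, and the liability questions raised further down the thread.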
Medical treatment has never been about asking questions and getting perfect answers. Excellent doctors and nurse practitioners have a great intuition for which questions to ask based on cues during patient assessment.
Writing prescriptions?
Ok, I can see how AI could theoretically do that (assuming it doesn't hallucinate and kill a bunch of people). Oh, and don't think it'll be so easy to give AI the legal authority to prescribe controlled substances. And insurance companies may take issue with expensive prescriptions written by a chatbot.
Perform surgeries? Stitch wounds?
That’s decades away. And that also opens a legal can of worms. Maybe the AI lawyers can figure something out.
https://news.ycombinator.com/item?id=47556729 (GitLab founder leveraging AI tools to find a cure for his rare cancer)
I’m getting my popcorn ready for the bubble pop.
Or is this "we said we are going to invest $X"? What about the circular agreements?
I was reading geohot's musings about building a data center cost-effectively, and solar is _the_ way to get low energy costs. The problem is off-peak energy, but even with that you might come out ahead (rough numbers sketched below).
And that dude is anything but a green fanatic. But he's a pragmatist.
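For what it's worth, here's a back-of-envelope version of that argument. Every input is a round number I'm assuming for illustration, not geohot's figures:

    # Rough solar-vs-grid cost sketch; all inputs are assumptions.
    SOLAR_CAPEX_PER_W = 1.00  # $/W installed, utility-scale (assumed)
    CAPACITY_FACTOR = 0.25    # average output vs nameplate (assumed)
    LIFETIME_YEARS = 25       # panel lifetime (assumed)
    STORAGE_ADDER = 0.04      # $/kWh levelized cost of covering off-peak (assumed)
    GRID_RATE = 0.08          # $/kWh industrial grid power (assumed)

    # Lifetime energy per installed watt, in kWh: CF * hours/year * years / 1000.
    lifetime_kwh_per_w = CAPACITY_FACTOR * 8760 * LIFETIME_YEARS / 1000

    solar_lcoe = SOLAR_CAPEX_PER_W / lifetime_kwh_per_w  # ignores O&M and financing
    print(f"solar alone:   ${solar_lcoe:.3f}/kWh")                 # ~$0.018/kWh
    print(f"solar+storage: ${solar_lcoe + STORAGE_ADDER:.3f}/kWh") # ~$0.058/kWh
    print(f"grid:          ${GRID_RATE:.3f}/kWh")

With those inputs, solar-plus-storage pencils out well under the grid rate, which I take to be the pragmatist's case; the answer is most sensitive to the storage adder and the capacity factor, i.e., exactly the off-peak problem.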
An analogy would be "all the money spent on transportation infra" over some period of time.
~$6.5 trillion
edit - sorry, it is in fact adjusted, text is kinda hard to see
I certainly think it was a mistake.
> The term “hyperscale” first emerged in the late 1990s, heralding a paradigm shift in the world of computing. It was primarily used to describe the awe-inspiring scale and capabilities of data centers...
There’s a loop where everyone is saying stuff because everyone else is saying stuff, and it turns into a sort of reality-inspired fan fiction.
It’s not just that it’s wrong or imprecise - that much I expect - it’s that the folklore takes on a life of its own.
The US is working to keep the oligarchs happy.
The only problem is, if AI doesn’t solve cold fusion, we’re back to square one. And a few trillion dollars in the hole.
Then the first question we ask it is: 'How do we fix climate change?' And it answers: 'You can start by unplugging me.'
And that point is right before rock bottom.