My guess is: pick a popular keyword from Google Trends for which the Chinese company has only released Chinese content, take the domain, and put up English content.
BTW, so far in my GLM-5 evals it's performing qualitatively as well as Opus 4.5/4.6. The only issue is maybe speed. I will likely incorporate it into daily use. The previous versions were trash, filled with syntax errors and instruction-following mistakes.
I wouldn't say nvda is completely out, but the chess move in response is a tough one: release a new chip, obsoleting the existing lines, and take hits of billions from defaults on hardware.
TLDR: For now, everyone is sold out of tokens: a ridiculous percentage of Nvidia cards are selling every token they generate, every token generated by Google's TPUs sells, same for Amazon's Trainium and Groq's silicon giants (they don't really name their chips, and the chips are something like 30 cm across, so let's go with giants), ... and Nvidia B200s are by far the cheapest way to generate tokens and are selling at something like double the rate they can be produced.
Once the AI craze slows, the most surprising thing is going to happen: Nvidia sales will go up. Why? Because it's older cards that will get priced out first, and it will become a matter of survival for datacenter companies to fill datacenters that currently run older hardware with the newest Nvidia hardware ...
That's the bull case. Under unlimited token demand, Nvidia wins big. Under slowing token demand, Nvidia actually wins bigger, for a while, and only then slows. For now, everything certainly seems to indicate demand is not slowing. Ironically, under slowing demand, it's China that will suffer in this market.
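To make the "older cards get priced out first" mechanism concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (card price, lifetime, power draw, per-card throughput, electricity price, market price per token) is a placeholder made up for the illustration, not a vendor or market figure; the only point is the shape of the comparison.

```python
# Illustrative sketch only: why older cards get "priced out" first when
# the going price per token falls. All numbers are hypothetical placeholders.

def cost_per_million_tokens(card_price_usd, lifetime_years, power_watts,
                            tokens_per_second, electricity_usd_per_kwh=0.08):
    """Amortized hardware cost plus electricity, per one million tokens served."""
    seconds = lifetime_years * 365 * 24 * 3600
    total_tokens = tokens_per_second * seconds
    energy_kwh = (power_watts / 1000) * lifetime_years * 365 * 24
    electricity_cost = energy_kwh * electricity_usd_per_kwh
    return (card_price_usd + electricity_cost) / total_tokens * 1e6

# Hypothetical "new gen" vs "old gen" card: the newer part costs more up
# front but pushes far more tokens per watt.
new_gen = cost_per_million_tokens(40_000, 4, 1000, 1_500)
old_gen = cost_per_million_tokens(15_000, 4, 700, 300)

market_price = 0.30  # hypothetical market price per 1M tokens after prices fall

for name, cost in [("new gen", new_gen), ("old gen", old_gen)]:
    status = "still profitable" if cost < market_price else "priced out"
    print(f"{name}: ~${cost:.2f} per 1M tokens -> {status}")
```

With these made-up numbers the newer card still clears the market price while the older one doesn't, which is exactly the "fill the old datacenters with the newest hardware or die" dynamic described above.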
And the threat? Well, it is possible to beat Nvidia's best cards on intelligence, on usefulness, because the human mind is already doing it, on 20W per head (200W for the "full machine"). And long story short: we don't know how, but it's obviously possible. Someone might figure it out.
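A quick way to put numbers on that gap. The 20W and 200W figures are from the comment above; the accelerator board power is just an assumed round number for a flagship datacenter GPU, not a quoted spec.

```python
# Rough energy comparison, illustrative only.
# 20 W brain / 200 W whole-body figures come from the comment above;
# the accelerator board power is an assumption, not a measured spec.
BRAIN_WATTS = 20
BODY_WATTS = 200
ACCELERATOR_WATTS = 1_000  # assumed round number for a flagship datacenter GPU

print(f"brain vs one accelerator: ~{ACCELERATOR_WATTS / BRAIN_WATTS:.0f}x power gap")
print(f"whole human vs one accelerator: ~{ACCELERATOR_WATTS / BODY_WATTS:.0f}x power gap")
```

And that is per device, before counting the rest of the rack, which is the point: the efficiency ceiling is nowhere near where current hardware sits.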