14 comments

  • khalic 20 hours ago
    People seem to be missing the fact that Ai2 is an initiative by the Allen Institute for AI, not a company
    • NitpickLawyer 20 hours ago
      And they've already released open source models, with data and training code. They're definitely the good guys here.
  • datadrivenangel 22 hours ago
    Suggest changing the title to:

    NSF and NVIDIA award Ai2 $152M to support building a fully open AI ecosystem

    To better indicate that this is not related to OpenAI and that the group intends to release everything needed to train their models.

    • nativeit 15 hours ago
      Now where have I heard this before…
  • pmdr 21 hours ago
    So basically nvidia handing out cash to itself. <insert Obama medal meme>
    • lgats 17 hours ago
      or handing out free hardware? with the side effect of investing in open source llms in their ecosystem
  • brunohaid 22 hours ago
    Maybe that'll help them hire someone who can at least respond to S2 API key requests...

    Being open is great, but if over the course of 6 months 3 different entities (including 2 well-known universities) apply and send more than a dozen follow-ups to 3 different "Reach out to us!" emails with exactly 0 responses, the "open" starts sounding like it's coming from Facebook.

  • jeffreysmith 20 hours ago
    Not sure what's with the HN tone on this announcement. AI2 are really some of the best people around for creating truly open artifacts for the whole ecosystem. Their work on OLMo and Molmo is some of the most transparent and educational material you can find on model building. This is just great news for everyone.
    • Guthur 19 hours ago
      Maybe because many of us are not from the US. The stated goal is US dominance of the AI field, and sorry if the rest of us don't see that as a good thing nor particularly open.
      • philipkglass 19 hours ago
        The Allen Institute for Artificial Intelligence projects so far have been very open. They are open about the trained models, the inference code, the training data sets, and the training code. A research group from any country can pick up where AI2 left off if they want to try a different approach or extension. I want to live in a world where there are many models near the top of leader boards, from many different research groups and countries, and I think that AI2 helps enable that.

        The stated "US dominance" goal just pays lip service to what appeals to the funders, kind of like how supercomputing projects traditionally claim that they contribute to curing disease or producing clean energy. (Even if it's something far removed from concrete applications, like high fidelity numerical simulations of aqueous solutions.)

      • laughingcurve 19 hours ago
        Good luck trying to raise money from a NATIONAL science foundation without it being in the NATIONAL interest.
        • jejcndj1848 10 hours ago
          I think there’s an interesting cultural phenomenon where dominance and interest are seen as one and the same thing
      • insane_dreamer 18 hours ago
        But better for the rest of the world than private US tech companies dominating.
        • nativeit 15 hours ago
          This just in: AI2 pivoting to a for-profit model, and is seeking venture capital funding.

          Oops, sorry that’s next year’s news. Anyway, this is all ringing very familiar.

      • Guthur 19 hours ago
        Of course you can justify this, as people have, but you can't then blame the rest of us non-US citizens for not aligning with that goal. The US is only a small portion of the global population, and its government has a long history of stamping on the rest of us.
      • FirmwareBurner 19 hours ago
        >The stated goal is US dominance of the AI field

        Any country will try to dominate any field it can; it's just human nature. Why is that a bad thing?

        That constant competition for superiority between nations is how humanity has evolved from hunter gatherer to having tractors, microwave ovens, airplanes, internet and penicillin.

        • Herring 17 hours ago
          Yes, competition is good. Monopoly is bad. A more distributed power structure is much better for overall progress, and even for the monopolist in the long run (Ex: Intel).
          • FirmwareBurner 15 hours ago
            >Monopoly is bad.

            So what do you propose? Should the US stop development till other countries catch up?

            • Herring 14 hours ago
              Nah, I'd say just do more anti-monopoly anti-inequality work. Probably start internally, that's a massive enough task on its own (eg breaking up big tech). Assist other countries eg with aid if (and only if) they are doing the same. This is a big topic, ask your favorite frontier LLM about it.
              • FirmwareBurner 14 hours ago
                Unless China does the same, that's an unrealistic ask. It would be like nuclear disarmament where only you disarm and everyone else gets to keep their nukes.
                • Herring 13 hours ago
                  Your "nukes" are leaking into the water supply. Inequality shows up a million different ways that Americans don't fully understand yet, eg inflation (dominant companies increasing profits), teacher shortages (low wages), student debt (not an issue for the wealthy so why fix it), housing prices (corporate landlords, exclusionary zoning), layoffs (despite record profits) etc etc. This situation (Trump/Musk/Bezos taking most of the gains) is just not long-term stable, and if any other country wants to do the same to themselves let them. The longer it goes the harder it will be to fix.

                  Again, go have this discussion with the LLM you trust, it's much more informative.

        • byteknight 19 hours ago
          As an American, I obviously can get behind it, but I can easily see how a declared goal of superiority over others would rub those others the wrong way (and possibly discourage their contributions)

          [Insert xkcd new standard image here]

          • laughingcurve 19 hours ago
            As an American researcher, I can assure you that the Chinese superiority and behavior in the field is certainly ENCOURAGING my contributions.
          • FirmwareBurner 19 hours ago
            >a declared goal of superiority of others would rub those others the wrong way

            So what? Does that change anything about how things work in reality? Everyone knows it, so why pussyfoot around it?

            Why are people nowadays so sensitive about stating how things actually work? Have people been coddled so much that they can't handle reality? A good life lesson is that the world does not revolve around your feelings.

            • Guthur 19 hours ago
              It's not my feelings, mate. If you don't live outside the US and haven't been subjected to its unipolar attitude, you will probably never understand, and there is literally nothing I can say to convince you of the objective reality the rest of us face.
              • FirmwareBurner 19 hours ago
                Sorry, I wasn't talking about you specifically, but the general "you" as in you the reader.
  • big_toast 18 hours ago
    Some people will be too young to know the commoditize-your-complements[1] wisdom of yesteryear. It's hard for me to tell whether consumers end up net ahead after things settle.

    I'm surprised we haven't heard about OpenAI pushing a facebook style OpenCompute project or ARM (acorn, apple, VLSI) or similar for the stack below them.

    [1]: https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/

  • gigatexal 18 hours ago
    I guess that makes sense: make Nvidia hardware run stuff so well that you buy more of it. That's all CUDA is: a way to get people to buy more CUDA-capable hardware, aka Nvidia cards.
  • nativeit 15 hours ago
    Are we sure this isn’t Nvidia’s play to make what OpenAI is for Microsoft? Start with a bunch of non-profit, research-for-the-good-of-humanity promises before closing ranks and soliciting venture capital for its for-profit subsidiary?

    I don’t want to be cynical, it’s just that the world has left very few options otherwise.

  • yalogin 18 hours ago
    After reading through the article I couldn't understand what an open AI ecosystem is. Are they talking about hardware or software? If it's software, we already have open-source models; are they going to create open-source vertical integrations?
    • curious_cat_163 18 hours ago
      We don't really have very many open source models. We have "open weights" models. Ai2 is one of the very few labs that actually make their entire training/inference code AND datasets AND training run details public. So, that this investment is happening is a welcome step.

      Congratulations to the team at Ai2!

      • yalogin 17 hours ago
        Ah thanks for the nuance!
  • hobofan 22 hours ago
    If Nvidia were interested in "open" AI, they would spend time to collaborate with AMD, etc. to build an (updated) open alternative to CUDA. That's probably the most closed part of the whole stack right now.
    • sounds 21 hours ago
      Nvidia is interested in commoditizing their complements. It's a business strategy to decrease the power of OpenAI (for instance).

      Nvidia dreams of a world where there are lots of "open" alternatives to OpenAI, like there are lots of open game engines and lots of open software in general. All buying closed Nvidia chips.

      • amelius 21 hours ago
        But AI depends on a small number of tensor operators, primitives that competitors can implement relatively easily, so compute is very close to being a commodity when it comes to AI.

        A company like Cerebras (founded in 2015) proves that this is true.

        The moat is not in computer architecture. I'd say the real moat is in semiconductor fabrication.
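        The "small number of tensor operators" point can be made concrete: the heavy lifting in transformer inference is essentially matrix multiplies plus a softmax. A minimal single-head attention sketch in NumPy (illustrative only; real kernels fuse and tile these ops, and this is no vendor's actual API):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: two matmuls and a softmax.
    # q, k, v: (seq_len, d) arrays
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # (4, 8)
```

        The hard part is not expressing these ops but making them fast at scale, which is where the rest of this subthread disagrees.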

        • sounds 20 hours ago
          Have you ever tried to run a model from huggingface on an AMD GPU?

          Semiconductor fabrication is a high risk business.

          Nvidia invested heavily in CUDA and out-competed AMD (and Intel). They are working hard to keep their edge in developer mindshare, while chasing hardware profits at the same time.

          • phkahler 19 hours ago
            >> Have you ever tried to run a model from huggingface on an AMD GPU?

            Yes. I'd never touched any of that stuff and then one day decided to give it a shot. A how-to told me how to run something on Linux with a choice of a few different LLMs. I picked one of the small ones (under 10B) and had it running on my AMD APU inside of 15 minutes. The weights were, IIRC, downloaded from huggingface; the wrapper was not. Anyway, what's the problem?

            BTW that convinced me that small LLMs are basically worthless. IMA need to go bigger next time. BTW my "old" 5700G has 64GB of RAM, next build I'll go at least double that.

          • amelius 20 hours ago
            > Have you ever tried to run a model from huggingface on an AMD GPU?

            No, but seeing how easily they run on Apple hardware, I don't understand your point, to be honest.

        • ants_everywhere 19 hours ago
          > The moat is not in computer architecture. I'd say the real moat is in semiconductor fabrication.

          In the longer run, anything that is very capital intensive, affects entire industries, and can be improved with large amounts of simulation will not be a moat for long. That's because you can increasingly use AI to explore the design space.

          Compute is not a commodity yet but may be in a few years. Semiconductor fabrication will take longer, but I wouldn't be surprised to see parts of the fabrication process democratized in a few years.

          Physical commodities like copper or oil can't be improved with simulation so they don't fall under this idea.

        • recursivecaveat 5 hours ago
          It's not like you can just stamp out a giant grid of FLOPs and go brrr. Getting utilization is difficult, and the closer you hew to Nvidia's tradeoffs, the more unfavorably you will compare against a giant working with 10,000X your volume and decades of experience. Nvidia's proprietary software is deeply embedded in everyone's stacks, and the models co-evolve with the hardware, so they are designed with its capabilities in mind.

          It's like trying to take on UPS with some new, not-quite-drop-in logistics network. Theoretically it's just a bunch of empty tubs shuffling around, but it's not so easy in practice. You have to be multiples better than the incumbent to be in contention. Keep in mind that among the startups, we don't really know who is setting money on fire running models in unprofitable configurations for revenue.

        • bilbo0s 20 hours ago
          which can be relatively easily implemented by competitors

          Oh my.

          Please people, try to think back to your engineering classes. Remember the project where you worked with a group to design a processor? I do. Worst semester of my life. (Screw whoever even came up with that damn real analysis math class.) And here's the kicker, I know I'll be dating myself here, but all I had to do for my part was tape it out. Still sucked.

          Not sure I'd call the necessary processor design work here "relatively easy"? Even for highly experienced, extremely bright people, this is not "relatively easy".

          Far easier to make the software a commodity. Believe me.

          • amelius 20 hours ago
            To be totally honest, the only thing I can distill from this is that perhaps you should have picked an education in CS instead of EE.

            I mean this is like saying that a class for building compilers sucked. Still, companies make compilers, and they aren't all >$1B companies. In fact, hobbyists make compilers.

            • bilbo0s 19 hours ago
              I did study CS as well.

              That you are comparing designing and writing a compiler with designing and manufacturing a neural processor is only testimony to the futility of my attempt to impress on everyone the difference. So I'll take my leave.

              You have a good day sir or ma'am.

              • amelius 19 hours ago
                But I'm actually saying that manufacturing is the hard part ...
      • PeterStuer 20 hours ago
        I thought they assumed AI hardware would become commoditized sooner rather than later, and their play was to sell complete vertically integrated AI solution stacks, mainly a software and services play?
      • arthurcolle 21 hours ago
        Why is OpenAI a threat to Nvidia? They are still highly dependent on those GPUs
        • tomrod 21 hours ago
          Two concepts

          - Monopsony is the inverse of Monopoly -- one buyer. Walmart is often a monopsony for suppliers (exclusive or near exclusive).

          - Desire for vertical integration and value extraction, related to #1 but with some additional nuances

          • next_xibalba 21 hours ago
            Who is the one buyer in the Nvidia scenario? How would that benefit Nvidia?
            • KaoruAoiShiho 21 hours ago
              It would hurt nvidia not benefit, that's why nvidia spends a lot of effort to prevent that from happening, and it's not the case currently.

              They really need to avoid the situation in the console market, where the fact that there are only 3 customers means almost no margins on console chips.

              • next_xibalba 20 hours ago
                Prior to the A.I. boom, Nvidia had a much, much more diverse customer base in terms of revenue mix. According to their 2015 annual report[1], their revenues were spread across the following revenue segments: gaming, automotive, enterprise, HPC and cloud, and PC and mobile OEMs. Gaming was the largest segment and contributed less than 50% of revenues. At this time, with a diverse customer base, their gross margins were 55.5%. (This is a fantastic gross margin in any industry outside software).

                In 2025 (fiscal year), Nvidia only reported two revenue segments: compute and networking ($116B revenue) and graphics ($14.3B revenue). Within the compute and networking segment, three customers represented 34% of all revenue. Nvidia's gross margins for fiscal 2025 were 75% [2].

                In other words, this hypothesis doesn't fit at all. In this case, having more concentration in extremely deep pocketed customers competing over a constrained supply of product has caused margins to sky rocket. Moreover, GP's claim of monopsony doesn't make any sense. Nvidia is not at any risk of having a single buyer, and with the recent news that sales to China will be allowed, the customer base is going to become more diverse, creating even more demand for their products.

                [1] https://s201.q4cdn.com/141608511/files/doc_financials/annual...

                [2] https://s201.q4cdn.com/141608511/files/doc_financials/2025/a...
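                A quick sanity check on the figures quoted above (FY2025: $116B compute and networking, $14.3B graphics) shows how lopsided the mix has become; this is just back-of-the-envelope arithmetic on the numbers in this comment:

```python
# Nvidia FY2025 segment revenue as quoted above, in $B
compute_networking = 116.0
graphics = 14.3

total = compute_networking + graphics
share = compute_networking / total
print(f"compute & networking share: {share:.1%}")  # ~89.0% of revenue
```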

                • tomrod 15 hours ago
                  I'm not sure your analysis is apples to apples.

                  Prior to the AI boom, GPU quality slightly favored Nvidia, but AMD was a viable alternative. There are also scale differences between 2025 and the pre-boom market: simply put, there was more competition for a smaller bucket, plus favorable winds on supplier production costs. Further, Nvidia just has better software tooling through CUDA.

                  Since 2022 and the rise of multi-billion-parameter models, Nvidia's CUDA has had a lock on the business side, but they face rising costs due to terrible US trade policy, the rebound from COVID, geopolitical realignments, wage inflation, and rushed/buggy power supplies as their default supply options; mostly CUDA is their saving grace. If AMD got its act together and focused, it could potentially unseat Nvidia. But until ROCm is at least _easy_, nothing will happen there.

                  • next_xibalba 10 hours ago
                    I merely comment on the concentration of customers and how it has not at all hurt Nvidia's margins. In fact, they have expanded quite dramatically. All of your other points are moot.

                    > "rising costs"

                    Nvidia's margin expansion would suggest otherwise. Or at least, the costs are not scaling with volume/pricing. Again, all we need to do is look at the margins.

                    > "their position quite untenable ... But until ROCm is at least _easy_ nothing will happen there"

                    Seems like you're contradicting yourself. Not sure what point you're trying to make. Bottom line is, there is absolutely no concern about monopsony as suggested by the GP. Revenue is booming and margins are expanding. Will it last? Who knows. Fat margins tend to invite competition.

                • KaoruAoiShiho 18 hours ago
                  Nobody said this was the case...

                  The only example I used was the console market which has been ruined because of this issue. They generally left that market because it was that horrible.

                  • Jlagreen 2 hours ago
                    The console market is low margin because they always find someone willing to take low margin (e.g. AMD). Nvidia was in the console market before but left due to low margins; it now only sells Nintendo an older, low-development-effort chip, probably at a good margin. The chips in the Switch 2 use a node from 2020, are very cheap to manufacture, and required little development effort from Nvidia.

                    AMD, however, has to design new special APUs for Xbox and PlayStation. Why do they do that? They could just step away from the tender, but they won't, because they seem desperate for any business. Jensen was like that 20 years ago, but he has learned that there is some business you simply step away from.

                  • next_xibalba 10 hours ago
                    This whole subthread is about the claim that Nvidia is at risk of a monopsony situation. I pointed out that while revenue has concentrated on a few customers post-AI boom, margins have improved, suggesting Nvidia is nowhere near and not veering toward that risk. Revenue is exploding, as are margins.
        • grim_io 21 hours ago
          Google shows that Nvidia is not necessary. How long until more follow?
          • NitpickLawyer 21 hours ago
            Tbf, goog started a long time ago with their TPUs, and they've had some bumps along the way. It's not as easy as one might think. There are certainly efforts to produce alternatives, but it's not an easy task. Even ASIC-like providers such as Cerebras and Groq are having problems with large models: they seemed very promising with SLMs, but once MoEs became a thing they started to struggle.
          • arthurcolle 20 hours ago
            I agree in principle but you can't just yolo fab TPUs and leapfrog google
          • ivape 16 hours ago
            I don't think we can say that until we hear how Genie3 and Veo3 were trained. My hunch is that the next-gen multi-modal models that combine world, video, text, and image models can only be trained on the best chips.
        • vlovich123 21 hours ago
          If OpenAI becomes the only buyer, they can push Nvidia around and invest in alternatives to blunt its power. If OpenAI is one of many customers, then they're not in a strong bargaining position and Nvidia gets to set the terms.
        • patates 21 hours ago
          Maybe if they grow too much they'd develop their own chips. Also if one company wins, as in they wipe out the competition, they'd have much less incentive to train more and more advanced models.
        • victorbjorklund 21 hours ago
          One large customer has more bargaining power than many big ones. And the risk is that OpenAI would try to make its own chips if it captured the whole market.
      • someone7x 21 hours ago
        > commoditizing their complements

        Feels like a modern euphemism for “subjugate their neighbors”.

        • skybrian 20 hours ago
          No, it’s encouraging competition and cost-cutting in a part of the market they don’t control. This can be a reason for companies to support open source, for example.

          Meanwhile, the companies running data centers will look for ways to support alternatives to Nvidia. That’s how they keep costs down.

          It’s a good way to play companies off each other, when it works.

        • jvanderbot 21 hours ago
          Business has always been a civilized version of war, and one which will always capture us in similar ways, so I guess wartime analogies are appropriate?

          Still, it feels awfully black and white to phrase it that way when this is a clear net good and a better alignment of incentives than before.

    • kookamamie 21 hours ago
      Indeed. This is throwing pennies at virtue-signaling openness.
    • emsign 19 hours ago
      That's because Nvidia is in the business of selling chips and compute time. All they care about is that as many people on Earth as possible become dependent on AI running on Nvidia hardware.
    • sim7c00 20 hours ago
      you are not wrong. opening up CUDA would be a real power move. i think ppl would mind some of their other crap practices a lot less.
    • colechristensen 20 hours ago
      They could also publish all of their source code, die designs, put their patents in the public domain and go live on a beach somewhere and fish for a living.

      CUDA is what they sell, it makes more sense for them to charge for hardware and give the hardware-locked software away for free.

    • bongodongobob 21 hours ago
      That's AMDs fault, not Nvidia's.
      • hobofan 4 hours ago
        AMD is incompetent on the ecosystem layer, but Nvidia has certainly been acting malicious.

      I've laid out my thesis at length in past comments, which I won't repeat, but the gist is: during the ~2010-2014 period (when the ecosystem gap really widened), Nvidia purposefully didn't implement OpenCL 2.0 while pushing CUDA, which made cross-platform solutions uncompetitive.

    • latchkey 19 hours ago
      Came here to say this. It is the basis for my entire business. It isn't just CUDA though, it is the hardware layer too. That is why we are exclusive to AMD AI hardware.

      We have dozens of companies building foundational models, and they all target a single vendor to supply all the hardware? Make it make sense! Yes, I know models run on AMD too, but the fact is that Nvidia, which is clearly doing a great job, is a literal monopoly. We need viable alternatives.

      This deal was done with Cirrascale, who are great people. It is important to point out that they are also one of the 13 official AMD Cloud Partners. I'm on the list too.

  • cruffle_duffle 20 hours ago
    I can’t wait until I can run this shit locally without spending $10,000 on clusters of GPU’s. The models will be trained using some distributed P2P-like technology that “The Man” can’t stop.

    Imagine running a model trained by the degenerates on 4chan. Models that encourage you to cheat on your homework, happily crank out photos of world leaders banging farm animals, and gleefully generate the worst gore imaginable. And best of all, any junior high schooler on the planet can do it.

    That’s how you know this technology has arrived. None of this sanitized, lawyer approved, safety weenie certified, three letter agency authorized nonsense we have now. Until it’s all local, it’s just an extension of the existing regime.

  • jeffWrld 20 hours ago
    [dead]
  • thefaux 20 hours ago
    [flagged]
    • Difwif 20 hours ago
      You're right. It's time to ban the evil numbers from being matrix multiplied. Contact your local representative about CPU control.
      • gameman144 19 hours ago
        I'm not even in favor of banning/heavily-regulating AI developments, but I think this position here is a little reductive; you could boil anything down to the point of absurdity.

        The point of nuclear weapons bans, for instance, isn't to control "evil atoms from touching", it's to prevent the higher-order effects that those atoms touching can cause.

      • mattigames 19 hours ago
        Very funny, but the problem is more about the input than whatever the CPU is doing; ChatGPT would be no more than a footnote without the copyrighted material in all its datasets.
        • artninja1988 19 hours ago
          We all stand on the shoulders of giants:)
          • mattigames 13 hours ago
            It's clear that the giant didn't consent in this particular case.
        • bigyabai 19 hours ago
          Not really?
          • mattigames 12 hours ago
            They wouldn't have used copyrighted material if it weren't a survival matter for them, but sure, go ahead and pretend that they just happen to use it and don't really need it that much.
            • bigyabai 10 hours ago
              Anyone who doesn't at least acknowledge copyrighted materials will be a footnote. But you don't need them at all for AI's most profitable applications, e.g. code generation or sentiment analysis.

              ChatGPT just exploits copyright because they can. You might not like it, but the die was cast years ago: https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,....

    • chvid 20 hours ago
      Exactly why is that? Surely LLMs have uses beyond pure destruction (unlike a nuclear weapon).
      • emsign 19 hours ago
        Their energy and groundwater needs are purely destructive. I'll pass on their supposed benefits.
    • khalic 20 hours ago
      You probably need to look under the hood; it has nothing to do with what popular culture called AI until very recently. It's just a word generator on steroids. Don't believe the hype about AIs taking over; it's complete BS.
      • winter_blue 19 hours ago
        Fwiw, the difference it makes in software development speed is astounding.
        • khalic 19 hours ago
          I know first hand, quite incredible
  • zoobab 21 hours ago
    "Open" like an open source FPGA implementation of their chips?