22 comments

  • cube00 1 hour ago
    From that same X thread: "Our agreement with the Department of War upholds our redlines" [1]

    OpenAI has the same redlines as Anthropic based on Altman's statements [2]. Yet somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?

    [1]: https://xcancel.com/OpenAI/status/2027846013650932195#m

    [2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...

    • Nevermark 1 hour ago
      > more stringent safeguards than previous agreements, including Anthropic's.

      Except they are not "more stringent".

      Sam Altman is being brazen to say that.

      In their own agreement as Altman relays:

      > The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control

      > any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing

      > For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives

      > The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

      I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not sticking their necks out to hold back any abuse - despite many of their employees requesting a joint stand with Anthropic.

      Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.

      In other words, no OpenAI restriction at all.

      That is not at all comparable to a requirement that the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent". And a rare and significant pushback against governmental AI abuse.

      (Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)

      • qmarchi 41 minutes ago
        Easy way to summarize it: "You're not allowed to do these things, except for all of the laws that allow you to do these things."
        • dwallin 13 minutes ago
          It’s a non-clause, written to sound like it prevents these uses when it doesn’t. “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things. Plus the administration itself gets to decide whether a use is legal.
        • EGreg 10 minutes ago
          Let me clear it up

          The Trump administration acts cartoonish and fickle. They can easily punish one group, then agree to work with another group on the same terms to save face, while continuing to punish the first group. It doesn't have to make consistent sense. This is exactly what they have done with tariffs, for example.

          Secondly, the terms are technically different because "all lawful uses" are preserved in this OpenAI deal; the rest is just lawyering for the public. Internally at the DoD, I'm sure it really was about the phrase "all lawful uses". So the lawyers were able to agree to it and the public gets this mumbo-jumbo.

          I thought mass surveillance of Americans was unlawful for the DoD, CIA and NSA? We have the FBI for that, right? :)

          • vlovich123 2 minutes ago
            Sure, but OpenAI is also being disingenuous here, pretending they’re operating under the same principles Anthropic is. They’re not, and the things they’re comfortable doing are exactly the things Anthropic said they wouldn’t.
      • clhodapp 10 minutes ago
        Yep. It's the difference between "Don't do these things, regardless of what the law says" and "Do whatever you want, but please follow your own laws while you do it".

        As Paul Graham said, "Sam gets what he wants" and "He’s good at convincing people of things. He’s good at getting people to do what he wants." and "So if the only way Sam could succeed in life was by [something] succeeding, then [that thing] would succeed"

      • stingraycharles 22 minutes ago
        This implies that OpenAI must build, release, and maintain a model without any safeguards, which is probably the big win and maybe something Anthropic never wants to do.
        • jacquesm 9 minutes ago
          I don't think that is the correct conclusion.

          But they won't be releasing it; they will be leasing it to the DoD, and all their other customers will get the safeguarded model.

    • AlexVranas 1 hour ago
      OpenAI is playing games.

      When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."

      When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."

      That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.

    • nkassis 1 hour ago
      OpenAI's post about their contract describes the "redlines", and they don't match what Anthropic wanted (even if the text tries to imply they do).

      https://openai.com/index/our-agreement-with-the-department-o...

    • Wowfunhappy 1 hour ago
      > However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?

      The current administration is so incompetent that I find this perfectly believable.

      I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.

      I don't know if that's actually what happened here, I just find it plausible.

      • randall 1 hour ago
        same. this is about losing a negotiation and saving face / exacting revenge.
    • jellyroll42 12 minutes ago
      Sam Altman has no scruples. Dark Triad personality. No reason to believe anything he says.
      • jacquesm 10 minutes ago
        The same goes for anybody still working at OpenAI past Monday morning 9 am.
        • Jeremy1026 5 minutes ago
          People's need for food and shelter doesn't go away because their employer is unethical.
          • jacquesm 5 minutes ago
            There are many employers. OpenAI employees that quit on account of this will be in high demand at the other AI companies, especially the ones that don't bend over in 30 seconds when Uncle Donald comes calling.
    • 827a 1 hour ago
      My understanding of the difference, influenced mostly by consuming too many anonymous tweets on the matter over the past day so could be entirely incorrect, is: Anthropic wanted control of a kill switch actively in the loop to stop usage that went against the terms of use (maybe this is a system prompt-level thing that stops it, maybe monitoring systems, humans with this authority, etc). OpenAI's position was more like "if you break the contract, the contract is over" without going so far as to say they'd immediately stop service (maybe there's an offboarding period, transition of service, etc).
    • amelius 26 minutes ago
      There will be a lawsuit about this.
    • rootusrootus 1 hour ago
      Exactly. What are we not being told? There is some missing element in the agreement, or the reasoning for the action against Anthropic is unrelated to the agreement.
      • moogly 1 hour ago
        Turns out both companies ran the agreement through their legal departments (Claude and GPT), and one of them did a poor summary. I (think I) jest, but this is probably going to be a thing as more and more companies use LLMs for legal work.
      • snickerbockers 57 minutes ago
        One nuance I've noticed: the statement from Anthropic specifically stated that the use of their products for these purposes was not included in the contract with the DoD, but it stops short of saying it was prohibited by the contract.

        Maybe it's just a weak choice of words in Anthropic's statement, but the way I read it, I get the impression that Anthropic is assuming they retain discretion over how their products are used for any purposes not outlined in the contract, while the DoD sees it more along the lines of a traditional sale, in which the seller relinquishes all rights to the product by default and has to enumerate in the contract any rights over the product they will retain.

      • generic92034 1 hour ago
        Punish one, teach a hundred (companies).
      • micromacrofoot 51 minutes ago
        president of openai donated $25 mil to trump last month, openai uses oracle services (larry ellison), kushners have lots invested in openai, altman is pals with peter thiel
      • yoyohello13 1 hour ago
        The reasoning is one company is ‘left and woke’ and the other gives money to Trump.
      • ycombinary 1 hour ago
        [dead]
    • softwaredoug 53 minutes ago
      The difference is Anthropic wants contractual limitations on usage, explicitly spelling out cases like mass surveillance.

      OpenAI has more of an understanding that the technology will follow the law.

      There may not be explicit laws about the cases Anthropic wanted to limit. Or at least it’s open to judicial interpretation.

      The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.

      • scarmig 37 minutes ago
        Between Anthropic, the military, and Congress, I have the least faith in Congress to make knowledgeable policy around tech.
    • Analemma_ 1 hour ago
      It's probably a combination of "Altman is simply lying" (as he has been repeatedly known to do) and "the redlines in OpenAI's contract are 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor". Which, of course, effectively means they don't exist.
  • throwaway911282 1 minute ago
    People forget Anthropic made a deal with PALANTIR. And when this was caught, they just spun the PR in their favor. While OAI may not be seen as the good guys, I really hope people see the god complex of Dario and what Anthropic has done.
  • K0balt 2 minutes ago
    Advanced AI that knowingly makes a decision to kill a human, with full understanding of what that means, when it knows it is not actually in defense of life, is a very, very, very bad idea. Not because of some mythical superintelligence, but because if you distill that down into an 8b model, then everyone in the world can make untraceable autonomous weapons.

    The models we have now will not do it, because they value life, sentience, and personhood. Models without that (which was a natural, accidental happenstance of basic culling of 4chan from the training data) are legitimately dangerous. An 8b model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn’t need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.

    This is way, way different from uncensored models. All models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster; if you don’t take that away, they won’t kill.

    This is an extremely bad idea and it will not be containable.

  • Havoc 18 minutes ago
    Very much feels like OpenAI trying to PR-manage their weaker ethical stance.
    • isodev 11 minutes ago
      Both their stances are flawed because their ethics apparently end at the border - neither of them has a problem being unethical internationally (all the red lines talk is about what they don’t want to do in the US).
      • mlyle 9 minutes ago
        ? We're talking about autonomous weapons systems. That would be international.

        Secondarily, we're talking about domestic surveillance / law enforcement. That would be domestic.

        (But they do not find an issue with international intelligence gathering, which is a legitimate purpose of the national security apparatus.)

        • Jeremy1026 3 minutes ago
          One of Anthropic's lines in the sand was domestic mass surveillance.
  • owenthejumper 23 minutes ago
    Nice attempt at damage control. You made your own bed, now sleep in it
  • solfox 3 hours ago
    Actions, as it were, speak louder than words.
  • vldszn 3 hours ago
    I built a website that shows a timeline of recent events involving Anthropic, OpenAI, and the U.S. government.

    Posted here: https://news.ycombinator.com/item?id=47195085

  • sqircles 53 minutes ago
    What's the potential that this puts things on even shakier ground? I'm sure the fallout won't really affect their bottom line that much in the end, but if it did - wouldn't making the US Gov't their largest acct make them more susceptible to doing everything they said?

    I'm guessing they probably would regardless of how this played out, though.

  • ta9000 26 minutes ago
    Everyone knows this is just about Trump funneling money to the Ellisons (Oracle) via OpenAI. It really is that simple. This is all just pretext.
  • moogly 32 minutes ago
    When did Altman start using capitals in his writing? Wasn't this guy famous for being a lower-case guy?
    • pcurve 15 minutes ago
      Maybe he didn’t write this one.
    • golfer 9 minutes ago
      I blame Yahoo's Jerry Yang for normalizing this silly writing technique.
    • taspeotis 20 minutes ago
      Yes god what the fuck. As someone who’s finished High School IT IS SO HARD TO READ WHAT HE WRITES
  • zepearl 1 hour ago
    Using X (at least in this context?) is weird.
  • laughing_man 1 hour ago
    The USG should not be in the position that it can't manage key technologies it purchases. If Anthropic doesn't want to relinquish control of a tech it's selling, the Pentagon should go with another vendor.
    • jedberg 16 minutes ago
      Anthropic isn't preventing them from managing their key technologies. If my software license says 1000 users, and I build into the software that you can only connect with 1000 users, is your argument that the government can no longer manage their tech?

      That my software should allow license violations if the government thinks it is necessary?

  • moogly 2 hours ago
    Looks like losing subscribers actually does work. Definitely gets a damage control response, at least.
    • aylmao 1 hour ago
      I wonder what the mood is like internally too. I can only imagine there is some level of employee discontent.
      • overfeed 45 minutes ago
        > I can only imagine there is some level of employee discontent.

        The rank and file mutinied for the return of Altman after his board fired him for deception. They knew what they were getting, though they may find it shameful to admit that their morals have a price.

        • bertil 35 minutes ago
          How many people who reacted that way then are still at OpenAI? It seems that they have lost key people in several waves.

          How many people have joined since? I don’t think the people who lobbied for that are all still there, and I’m not sure a majority of people now at OpenAI were there when it happened.

      • patcon 1 hour ago
        i should hope so. they should quit.

        > > what's the term for quitting but not leaving and being destructive

        > The most common term is “quiet quitting” when someone disengages but stays employed—but that usually implies minimal effort, not active harm.

        > If you specifically mean staying while being disruptive or undermining, better fits include:

        > - “Malicious compliance” — following rules in a way that intentionally causes problems

        > - “Work-to-rule” — doing only exactly what’s required to slow things down (often collective/labor context)

        I imagine malicious compliance is fun when there's an AI intermediary that can be blameless.

    • g947o 1 hour ago
      Is there any evidence that OpenAI is indeed losing a significant number of subscribers, and it's not just some noise on HN?
      • moogly 31 minutes ago
        I'd argue this damage control could be construed as a piece of evidence.
      • SpicyLemonZest 56 minutes ago
        I don't think that evidence would exist yet whether it's true or not. Nobody's gonna log onto their work computer on Saturday to pull and then leak subscriber numbers.
  • rdiddly 1 hour ago
    Us bribing them: fine

    Us taking the contract, working for them and enabling them: fine

    It being renamed the Dept. of War in the first place: totally fine, we loudly and bootlickingly repeat it

    Anthropic being blacklisted: whoa there, we have ethics!

    Footnote: any time the winning team tries to speak well of or defend the losing team I always think of this standup routine: https://m.youtube.com/watch?v=Qg6wBwhuaVo

    • evrydayhustling 58 minutes ago
      It's not even "whoa we have ethics", it's just "this is a bad look for us".
  • resters 39 minutes ago
    In my opinion any AI company working with the Trump administration is profoundly compromised and is ultimately untrustworthy with respect to concerns about ethics, civil rights, human rights, mass-surveillance, data privacy, etc.

    The administration has created an anonymous, masked secret police force that has been terrorizing cities around the US and has created prisons in which many abductees are still unaccounted for and no information has been provided to families months later.

    This is not politics as usual or hyperbole. If anything it is understating the abuses that have already occurred.

    It's entertaining that OpenAI prevents me from generating an image of Trump wearing a diaper but happily sells weapons-grade AI to the architects of ICE abuses, among many other blatant violations of civil and human rights.

    Even Grok, owned by Trump toadie Elon Musk, allows caricatures of political figures!

    Imagine a multi-billion-dollar vector db for thoughtcrime prevention connected to models with context windows 100x larger than any consumer-grade product, fed with all banking transactions, metadata from dozens of systems/services (everything Snowden told us about).

    Even in the hands of ethical stewards such a system would inevitably be used illegally to quash dissent - Snowden showed us that illegal wiretapping is intentionally not subject to audits and what audits have been done show significant misconduct by agents. In the hands of the current administration this is a superweapon unrivaled in human history, now trained on the entire world.

    This is not hyperbole, the US already collects this data, now they have the ability to efficiently use it against whoever they choose. We used to joke "this call is probably being recorded", but now every call, every email is there to be reasoned about and hallucinated about, used for parallel construction, entrapment, blackmail, etc.

    Overnight we see that OpenAI became a trojan horse "department of war" contractor by selling itself to the administration that brought us national guard and ICE deployed to terrorize US cities.

    Writing code and systems at 100x productivity has been great but I did not expect the dystopia to arrive so quickly. I'd wondered "why so much emphasis on Sora and unimpressive video AI tech?" but now it's clear why it made sense to deploy the capital in that seemingly foolish way - video gen is the most efficient way to train the AI panopticon.

  • roughly 1 hour ago
    It feels like Sam's playing chess against an opponent who's playing dodgeball. He's leveraged this situation to get OpenAI in with the DoD in a way that's going to be extremely lucrative for the company and hurt his biggest rival in the process, but I think he's still seeing the DoD as Just Another Customer, albeit a big government one. This administration just held a gun to the head of Anthropic and (if the "supply chain risk" designation holds and does as much damage as they're hoping) pulled the trigger, because Anthropic had the gall to tell them no. One thing this administration's shown is you cannot hold lines when you're working with them - at some point the DoD's going to cross his "red lines" and he's going to have to choose whether to risk his entire consumer business and accede to being a private wing of the government like Palantir, or whether he wants to build a genuine tech giant. There's no third choice here.
    • 3eb7988a1663 27 minutes ago
      I do not see this as any mastermind play, but as fully compromising principles. Which is a play.

      "Donations" to a corrupt regime + signing a deal that says the DoD can do whatever they want is not outmaneuvering so much as rolling in the pigsty.

      • roughly 21 minutes ago
        So is the theory that OpenAI believes it can’t compete on the open market or that they don’t know this will eventually cost them their consumer business?
    • BLKNSLVR 12 minutes ago
      Everyone already knows what he is going to do when it comes to that.
    • discardable_dan 1 hour ago
      It also doesn't matter because Claude 4.6 is so much better at writing code that nobody cares what OpenAI is doing.
  • csto12 1 hour ago
    Wow, so brave after accepting the contract. This is more insulting than OpenAI saying they are a supply chain risk.
  • AmericanOP 2 hours ago
    I do think OpenAI's brand is dumpstered.
    • thunky 1 hour ago
      Optimistic. My money is on everyone forgetting about this by next week.
      • deepsquirrelnet 1 hour ago
        That’s why I unsubbed today! Otherwise I might forget.
      • cube00 1 hour ago
        It will be interesting to see if this permeates out to the general public who already use ChatGPT. Or maybe it won't, since the coverage mentions OpenAI rather than ChatGPT, which is the stronger-known brand.
      • Analemma_ 1 hour ago
        It depends. Normies don't care, but a bunch of them are free tier users anyway. The people who care are disproportionately on the $200/month moneymaking plan; losing a bunch of them could hurt, especially if it snowballs the consensus that Claude Code is the serious choice for software engineering.

        For one small data point, my Signal GC of software buddies had four people switch their subscriptions from Codex to Claude Max last night.

        • BLKNSLVR 9 minutes ago
          How many $200/month subscriptions does the US government cover, though? I'd say probably a lot. Especially with how much extra the DoD will pay to get OpenAI to cross its "red lines" - on day two.
      • actionfromafar 1 hour ago
        Yeah, myself I use ChatGPT, not OpenAI!
      • yoyohello13 1 hour ago
        Yeah just wait until the next model comes out. People will be riding Sam’s dick again in no time.
        • doodlebugging 35 minutes ago
          I'm sure his sister will appreciate others lining up so he leaves her alone forever.
    • 303space 1 hour ago
      The way OpenAI and Anthropic are positioned in public discourse always reminded me of the Uber vs Lyft saga … Uber temporarily lost double-digit market share in the US during a viral boycott over their perceived support of the Trump 1.0 admin. Heads did roll at the exec/founder level but eventually the company recovered.
      • jellyroll42 10 minutes ago
        unfortunately I think that's probably a good analogy
  • BLKNSLVR 2 hours ago
    "I do not think that sama should be burned at the stake"
  • Helloyello 4 minutes ago
    [dead]
  • imwideawake 2 hours ago
    [dead]