> For instance, an agency could pay for a subscription or negotiate a pay-per-use contract with an AI provider, only to find out that it is prohibited from using the AI model in certain ways, limiting its value.
This is of course quite false. They know the restrictions when they sign the contract. This reads to me like:
* Some employee somewhere wanted to click the shiny Claude button in the AWS FedRAMP marketplace
* Whatever USG legal team was involved said "that domestic surveillance clause doesn't work for us" and tried to redline it.
* Anthropic rejected the redline.
* Someone got mad and went to Semafor.
It's unclear that this has even really escalated prior to the article, or that Anthropic are really "taking a stand" in a major way (after all, their model is already on the Fed marketplace) - it just reads like a typical fed contract negotiation with a squeaky wheel in it somewhere.
The article is also full of other weird nonsense like:
> Traditional software isn’t like that. Once a government agency has access to Microsoft Office, it doesn’t have to worry about whether it is using Excel to keep track of weapons or pencils.
While it might not be possible to enforce them as easily, many, many shrink-wrap EULAs restrict the ways in which software can be used. Almost always there is an EULA carve-out with a different tier for lifesaving or safety uses (due to liability / compliance concerns) and for military uses (sometimes for ethics reasons, but usually due to a desire to extract more money from those customers).
THIS SOFTWARE PRODUCT MAY CONTAIN SUPPORT FOR PROGRAMS WRITTEN IN JAVA. JAVA TECHNOLOGY IS NOT FAULT TOLERANT AND IS NOT DESIGNED, MANUFACTURED, OR INTENDED FOR USE OR RESALE AS ONLINE CONTROL EQUIPMENT IN HAZARDOUS ENVIRONMENTS REQUIRING FAILSAFE PERFORMANCE, SUCH AS IN THE OPERATION OF NUCLEAR FACILITIES, AIRCRAFT NAVIGATION OR COMMUNICATION SYSTEMS, AIR TRAFFIC CONTROL, DIRECT LIFE SUPPORT MACHINES, OR WEAPONS SYSTEMS, IN WHICH THE FAILURE OF JAVA TECHNOLOGY COULD LEAD DIRECTLY TO DEATH, PERSONAL INJURY OR SEVERE PHYSICAL OR ENVIRONMENTAL DAMAGE.
Look up DoD (DoW?) 882 and LOR ratings. This is a fancy way of saying “Java can’t do that because we haven’t certified a toolchain for it”
And for bonus points, go find the last certified compilers for LOR1 rating that follow 882 guidelines.
Now you’ve scratched the surface of safety-critical software. Actually writing it is a blast. I think most web developers would weep in frustration. “Wait, I can’t allocate memory that way? Or that way? Or in this way not at all?! There’s no framework?! You mean I need to do all this to verify a button click??!!”
There are (or at least WERE) entire divisions dedicated to reading every letter of the contract and terms of service, and usually creating 20 page documents seeking clarification for a specific phrase. They absolutely know what they're getting into.
I have a feeling that in today's administration, which largely "leads by tweet", many traditional "inefficient" steps have been removed from government processes, probably including software on-boarding.
Can confirm these teams are still around. There is now an additional "SME review group" that must comb through any and all AI-related issues that were flagged, sends them back down for edits, and must give final approval before docs are sent over to the provider for a response. Turnaround has gotten much slower (relatively).
I have a legal education, but reading TOS and privacy policy docs at account creation is too time-consuming by design.
One of my fave new AI prompts: you are my attorney and an expert in privacy law and online contracts of adhesion. Review the TOS agreement at [url] and the privacy policies at [url] and brief me on all areas that should be of concern to me.
Takes 90 seconds from start to finish, and reveals how contemptuously illusory these agreements are when SO MANY reserve the right to change anything with no duty to disclose changes.
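If you want to make that 90-second review reproducible, the same idea can be scripted. Here's a minimal sketch using the Anthropic Python SDK, where the model id, the example URLs, and the naive fetch-the-page-as-text step are my own placeholder assumptions rather than anything from this thread:

```python
# Hypothetical sketch: ask a model to review a TOS and privacy policy for
# contract-of-adhesion red flags. Model id and URLs are placeholders.
import requests
import anthropic

PROMPT = (
    "You are my attorney and an expert in privacy law and online contracts "
    "of adhesion. Review the TOS and privacy policy below and brief me on "
    "all areas that should be of concern to me."
)

def review_terms(tos_url: str, privacy_url: str) -> str:
    # Naive fetch: real pages are HTML and would need stripping/cleanup.
    tos = requests.get(tos_url, timeout=30).text
    privacy = requests.get(privacy_url, timeout=30).text

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": f"{PROMPT}\n\n--- TOS ---\n{tos}\n\n--- PRIVACY POLICY ---\n{privacy}",
        }],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(review_terms("https://example.com/tos", "https://example.com/privacy"))
```

Whether you trust the model's reading is a separate question, but it makes the "brief me on the scary parts" step cheap enough to actually do at signup.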
This is a contract, not a click through license. You can't do that.
(Legally you can't do it with a click-through either, but the lack of a contract means that the recourse for the user is just to stop buying the service.)
The contracts will usually say "You agree to the restrictions in our TOS" with a link to that page, which allows them to update the TOS without new signatures.
All the US megacorps tend to send me emails saying "We want to change the TOS; here's the new TOS that will be valid from date X, and be informed that you have the right to refuse it" (in which case they'll probably terminate the service, but I'm quite sure that if it's a paid service with some subscription, they would have to refund the remaining portion). So they can change the TOS, but not without at least some form of agreement, even if it's an implicit one 'by continuing to use the service'.
Here in Sweden a contract is a specific thing, otherwise it's not a contract; agreeing to conditions that can be changed unilaterally by the other party simply isn't a contract, and is therefore just a bullshit paper of very dubious legal validity.
I know that some things like this are accepted in America, and I can't judge how it would be dealt with. I assume that contracts between companies and other sophisticated entities are actual contracts with unchangeable terms.
> I know that some things like this are accepted in America
Not really. Everything you said about contracts above applies to contracts in America, last time I checked. Disclaimer: IANAL; my legal training amounts to one semester of "Business Law" in college.
One thing about the US is how we handle settings where one could conceptualize a contract as being needed, but where it would be way too inefficient and impractical to negotiate, write out, understand, and sign a written contract. In those cases, which include things like retail sales, restaurants, and many other cases, the UCC or Uniform Commercial Code[1][2] applies. Not sure offhand if that relates to the medical example or not, but I expect that at least some similar notion applies. So there are binding laws that cover these transactions, it's just not done the same way as a "full fledged contract".
[1]: https://en.wikipedia.org/wiki/Uniform_Commercial_Code
[2]: The UCC also covers other things, but these cases are a lot of what it's best known for.
Yeah, I've signed dozens of contracts for services, and some are explicit in the way you expect, but a lot of software or SaaS-type contracts have flexible terms that refer to TOS and privacy policies that are updated regularly. It's uncommon that any of those things are changed in a way that either party is upset with, so companies are generally okay signing up and assuming good faith.
This feels like a hit piece by Semafor. A lot of the information in there is purely false. For example, Microsoft's AI agreement prohibits:
"...cannot use...For ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal data, including biometric data, without the individual’s valid consent."
They don't do that. They're not capable of cooperating with anyone; it's maximum punishment all the time. It's unclear if they can keep secrets either.
The fact of being elected is not relevant to the question, and neither is the nominal existence of constitutional checks. Extrajudicial murder of alleged criminals, abuse of criminal prosecution to target political enemies, armed thugs yanking innocent people out of their cars, steamrolling firms and universities into administration-favorable policy changes and extracting hundreds of millions of dollars — and that's just the first few things I could think of. There are dozens of examples. It is not inflammatory to describe simple reality.
I would be hesitant to call the US democratic process "free and fair." And the powers held by the president certainly make them more dictatorial than the heads of state of other democracies, particularly as wielded by the current administration.
So you have a democratic process of dubious quality that elected a government that is dictator-ish.
Don't accept that your country's elections are free and fair as an axiom.
The definition of a dictatorial government is rule by either a single person or a small group of people. So there being three branches of government doesn't necessarily prevent a government from being a dictatorship if they are all working together to enact their authoritarian control without constitutional limits.
But really this is just pointless semantics. It doesn't matter what it is called; it is still a problem.
Two of them jump at the command of the other one: one out of fear (because he has ended the careers of the reps who have crossed him), and the other has been packed with lifetime-appointment sycophants who put loyalty above anything else.
Russia (or literally any other dictatorial tyre pyre) also has three branches of government and a token opposition, for all the good it does.
Just because you have a nice piece of paper that outlines some kind of de jure separation of powers, doesn't mean shit in practice. Russia (and prior to it, the USSR) has no shortage of such pieces of paper.
That's a ridiculous take. Seriously outlandish. The US has always had and continues to have three working branches of government. That is a factual statement because it is indeed a fact.
It’s not a fact, because it depends upon a subjective interpretation of the word “working.” Some might argue, for example, that if the President can cow Congress into subservience, then the three branches of government are no longer in balance with each other, and thus the constitution is no longer “working” as intended.
Depends on how he cows them into subservience. If he uses the threat of electoral defeat for opposing him, that's totally legitimate. If he uses his position as commander in chief to threaten them with force, that's different.
It can be true that the constitution is not working as intended, AND the US is a far cry from a country like Russia in terms of operating as a constitutional republic / democracy. It is not subjective to say the US is more of a democratic country than Russia.
The power of the purse is currently being usurped by the executive branch with no pushback from a Republican Congress, armed forces are being deployed to American cities, media corporations are being forced to have administration-installed bias police, due process is a joke, and museums are being forced to remove information the admin finds objectionable. You can bury your head in the sand if you like, but there are plenty of us who won't.
How do you explain Trump unilaterally renaming the Department of Defense, without legislative approval? Is it a "working branch" if its constitutionally granted power is easily sidestepped?
I always think it's funny how people who have strong opinions based on nothing love to out themselves by just repeating that something is fact. clap clap We're all convinced, for sure! ;)
Does threatening to prosecute office supply store workers unless they print certain flyers count as the behavior of a dictator? That doesn't sound like the behavior of a government respecting the First Amendment.
You're being plainly absurd by splitting hairs over a dangerous destruction of the rule of law, complete breakdown of checks and balances, and an executive that is behaving like it is both above the law, and will never lose power.
It's very obvious that America has a dictatorial government, and I'm baffled why you would deny this. The dictator-in-chief has argued explicitly and repeatedly that the written laws of the land don't constrain him; he can shut down departments Congress ordered him to run, levy taxes they didn't authorize, and overrule or rewrite any statute he feels isn't correct. He seized 10% of Intel Corporation without even a fig leaf of legal basis!
Perhaps you're confused that the normal system of laws is still operating? That's just the nature of dictatorship in a large country. The dictator only has so much time in the day, and if he has to delegate anyway he might as well use the preexisting courts and civil servants. He just has to put supervisors on top who can credibly threaten to invoke his wrath if people step too far out of line.
With SaaS, you can be monitored and banned at any moment. With EULAs, at worst you can be banned from updates, and in reality you probably won't get caught at all.
By using the Apple Software, you represent and warrant that you ... also agree that you will not use these products for any purposes prohibited by United States law, including, without limitation, the development, design, manufacture or production of missiles, or nuclear, chemical or biological weapons. -- iTunes
No production of missiles with iTunes? Curses, foiled again.
"Eventually, though, its politics could end up hurting its government business."
Good? What if, and I know how crazy this sounds, not using AI to surveil people was a more desirable goal than the success of yet another tech company at locking in government pork and subsidies?
First, contracts often come with usage restrictions.
Second, this article is incredibly dismissive and whiny about anyone ever taking safety seriously, for pretty much any definition of "safety". I mean, it even points out that Anthropic has "the only top-tier models cleared for top secret security situations", which seems like a direct result of them actually giving a shit about safety in the first place.
And the whining about "the contract says we can't use it for surveillance, but we want to use it for good surveillance, so it doesn't count. Their definition of surveillance is politically motivated and bad"! It's just... wtf? Is it surveillance or not?
This isn't a partisan thing. It's barely a political thing. It's more like "But we want to put a Burger King logo on the syringe we use for lethal injections! Why are you upset? We're the state so it's totally legal to be killing people this way, so you have to let us use your stuff however we want."
Wasn't a big part of AI 2027 that government employees became overly reliant on AI and couldn't function without it? So I guess we are still on track to hit that timeline.
> The policy doesn’t specifically define what it means by “domestic surveillance” in a law enforcement context and appears to be using the term broadly, creating room for interpretation.
> Other AI model providers also list restrictions on surveillance, but offer more specific examples and often have carveouts for law enforcement activities. OpenAI’s policy, for instance, prohibits “unauthorized monitoring of individuals,” implying consent for legal monitoring by law enforcement.
This is unintentionally (for the author) hilarious. It's a blatant misinterpretation of the language, while complimenting the clarity of the language. Who "authorizes" "monitoring of individuals"? If an executive agency monitors an individual in violation of a court order, is that "authorized"?
Especially one where, realistically, the banana in chief might be put back in his box (crate?) sometime next year. Like, his approval rating is now actually _lower_ than at the same time in his first term, and in his first term the midterms didn't exactly go great for him.
It's particularly awkward to be in the position of having to complement the emperor on his new clothes when the emperor has a limited shelf life.
What would be an example of such criminal charges being brought by this administration? Is there a case that stands out as clear retaliation?
Edited to add: I can think of the mortgage fraud cases being discussed/brought against some high-profile people, but can’t think of any corporate world leadership being charged.
Saying no today also means you can say yes tomorrow. Then you are a hero, a dealmaker, as opposed to the "weak" who never put up a fight. This is schoolyard rules.
Yes! Everybody goose-step in unison as to not irk the administration? /s
People should behave more like the vertebrates we are and show some semblance of a spine.
Now most bear more resemblance to snails and jellyfish. Yes, they will survive, but only because there are so many of them.
Are government agencies sending prompts to model inference APIs on remote servers? Or are they running the models in their own environment?
It’s worrying to me that Anthropic, a foreign corporation (EDIT: they’re a US corp), would even have the visibility necessary to enforce usage restrictions on US government customers. Or are they baking the restrictions into the model weights?
1) Anthropic are US based, maybe you're thinking of Mistral?
2) Are government agencies sending prompts to model inference APIs on remote servers?
Of course, look up FedRAMP. Depending on the assurance level necessary, cloud services run on either cloud carve-outs in US datacenters (with various "US Person Only" rules enforced to varying degrees) or, for the highest levels, in specific assured environments (AWS Secret Region, for example). A rough sketch of what that call path can look like follows after this comment.
3) It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.
There's no evidence they do, it's just lawyers vs lawyers here as far as I can tell.
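For what the "remote servers" path typically looks like in practice, here is a rough boto3 sketch against a Bedrock endpoint in a GovCloud region. The region, the model id, and the assumption that Claude is enabled for that account in that region are all my own illustration, not something established in the thread:

```python
# Hypothetical sketch: call a Claude model through Amazon Bedrock from inside
# an AWS GovCloud region rather than the public Anthropic API. Model id and
# regional availability are assumptions for illustration only.
import boto3

def ask_in_govcloud(prompt: str) -> str:
    # bedrock-runtime client pinned to a GovCloud region
    client = boto3.client("bedrock-runtime", region_name="us-gov-west-1")
    response = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder id
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512},
    )
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(ask_in_govcloud("Summarize the FedRAMP High baseline in two sentences."))
```

The point being that the prompts stay inside the accredited cloud boundary; whether the model provider ever sees them is a contractual question, not a technical necessity.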
> It’s worrying to me that Anthropic, a foreign corporation, would even have the visibility necessary to enforce usage restrictions on US government customers.
"Foreign" to who? I interpretted your comment as foreign to the US government (please correct me if I'm wrong) and I was confused because Anthropic is a US company.
Everyone spies and abuses individuals' privacy. What difference does it make? (Granted I would agree with you if Anthropic were indeed a foreign based entity, so am I contradicting myself wonderfully?)
No judgement here, but a US-based corporation refusing services to the US Government?
While the terms of service are what they are, the US Government can withdraw its military contracts from Anthropic (or refuse future contracts if they don't have any so far). Or softly suggest to its own contractors to limit their business dealings with Anthropic. Then Anthropic will have a hard time securing compute from NVIDIA, AWS, Google, MSFT, Oracle, etc.
I am of an age where I read comments like this with my mouth agape. It is (was) perfectly normal to choose whether or not to do business with the government.
I'm sure this sort of unofficial blacklisting is fairly common, but it does seem very opposed to the idea of a free market. It definitely doesn't seem like Anthropic was trying to make some sort of point here, but it would be cool if all the AI companies had a ToS saying their models can't be used for any sort of defense/police/military purposes.
I am not even sure what free market is, aside from Economics textbooks and foreign policy positioning. Whatever it may be, I don't think we had it for quite some time.
The JSLint author added "must be used for good, not evil" to the license...
...and IBM asked for an exception.
https://en.wikipedia.org/wiki/JSLint#License
https://news.ycombinator.com/item?id=5138866
Note that restricting use of software makes it non-free, GPL-wise.
RMS said the GPL does not restrict the rights of the USER of the software, just that when the software is redistributed, those rights are passed along.
If it gives you high-priority support, I don't care; if it's the same tier of support, then that's just obnoxiously greedy.
A contract whose terms one party can change unilaterally would be a non-contract in Swedish law, for example.
"...cannot use...For ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal data, including biometric data, without the individual’s valid consent."
And I’m not singling out Anthropic. None of these companies or governments (i.e. people) can be trusted at face value.
But you get there by doing exactly what's being done on a daily basis.
A) Boundary testing. Small bites end up being large portions after enough are taken.
B) If I shit in 10 gallons of chocolate pudding, would you want to eat a bite of that pudding?
Regardless of what you think about the government, that wasn’t a statement in the above. The statement was about tech companies. So it wasn’t clear.
Viva local-first software!
https://www.reuters.com/business/retail-consumer/anthropic-o...
I do love the smell of hypocrisy early in the morning.
The concern remains even if it’s a US corporation though (not government owned servers).
> The concern remains even if it’s a US corporation though (not government owned servers).
Very much so, I completely agree.
This won't last.