>The ask is simple: let us use your models for anything that's technically legal.
> Weapons development, intelligence collection, battlefield operations, mass surveillance of American citizens.
> OpenAI said yes.
> Google said yes.
> xAI said yes.
> Anthropic said no.
Is that accurate? Did all the labs, other than Anthropic, say yes to allowing their models to be used for weapons development and mass surveillance of Americans?
The public-facing Claude chatbot won't even answer questions related to military AI. It refuses even to say whether any papers in a list of new AI research listings are dual-use and potentially concerning from an AI safety standpoint.
Or is the poster overlooking some nuance?
https://www.wsj.com/livecoverage/stock-market-today-dow-sp-5...
He's not a reporter, and per his profile he doesn't work for any of those companies.
I believe he's doing a lot of guesswork.
Link?
13 points by fortran77 20 hours ago | 6 comments
https://news.ycombinator.com/item?id=47057294
https://theconversation.com/openai-has-deleted-the-word-safe...