They accurately point out that the NYT's claim is not properly supported by facts. There was almost certainly sexual violence on Oct 7. What hasn't been established in an evidence-based way is that there was organized, premeditated weaponization of sexual violence as a tool of war, as the article claims.
Was it premeditated weaponization of sexual violence as a tool of war?
The comment you're replying to is not denying that it happened.
What if the weaponization accusation is just an embellishment added to suit a political narrative? Establishing its truth does require evidence, separately from the evidence of the violence itself.
That holds even if we happened to find the narrative morally compelling.
Regardless of their initial intentions, I find it difficult to argue that these acts haven't served as a tool of war.
One must also acknowledge that these acts were not isolated. I don't have concrete proof (maybe there is some out there), but the fact of the matter is that their scale and psychological effects are undeniable.
If you're going to cite the ICC arrest warrant for Hamas then you have to acknowledge the one they've issued to Netanyahu for using starvation as a weapon.
One side has been subjugated for over 70 years through control of its land, air, and sea; the other is a nation backed by the most powerful countries in the world, filled with many foreigners who lay claim to the place because of something that happened 2,000 years ago.
Guess which side was deemed the baddies because they lashed out?
I am very skeptical that something like "An LLM-based chatbot that answers history and law questions about palestine in a hasbara free way" is going to materially help anyone.
Sure it can, as can any other kind of well-designed Q+A bot, even about far less controversial topics.
The material benefit resides in two facts. (1) They can potentially save users a great deal of time: a very large portion of search engine queries are straight-up questions, which the articles in the result set sometimes answer, but usually only partially, and it still takes a good chunk of time to plough through them just to get that answer. (2) High-potency propaganda of any kind (not specific to the hasbara project, though that seems to provide a shining example) promotes anxiety, paranoia, and increased susceptibility to psychosis; to some extent, that is its very purpose.
So anything that helps abate the pernicious blight of (2) is potentially quite helpful, both materially and spiritually.
Also, people are going to be using AI chatbots for this kind of research anyway. Rather than hoping the larger players will do the right thing here (or at least avoid doing the wrong thing), it seems quite prudent for independent organizations to pick up the task on their own, and start creating their own bots for these purposes.
Hard disagree on the impact this will have on (2). When Russia attempted to interfere with the 2016 election, they did so overwhelmingly through social media, not by polluting Google search.
And “saving users time while searching” is more likely to be accomplished by improving your query->result pipeline than by improving a tangentially associated technology.
No one is suggesting that these bots will in any way influence ("pollute", per your spin) Google search. Obviously Google won't be incorporating their output, and if anything they'll actively block it the moment they get wind of it.
I think it could be an accessible way to learn more about the topic. But simply reading through the vast written literature on the topic would be a better way to do so.
Users flagged it, which is common on divisive topics. We sometimes turn flags off, even when a topic is divisive, but I think it's best if the underlying article is not a direct advocacy piece (or an organizational announcement*).
(* we tend not to favor posts that announce organizations, because while the organization may be important and its work may be interesting, the announcement posts themselves tend not to be interesting, so they end up fueling generic rather than specific discussion.)
I think you're doing a good job turning off flags for some divisive posts. I was just surprised at how easy it is to kill a post with just a couple (?) of flags, especially after others have vouched for it. There was this weird behaviour where the post got killed, then revived because some user vouched for it, and then another flagged it.
It's also worth noting that this system is ripe for abuse, because the visibility of the post is asymmetric. It gets killed as soon as it gets flagged, so no new users see it, which makes it hard for anyone to vouch. There's also the obvious problem of ganging up on a post to flag or vouch for it, which again seems easy for small groups to abuse.
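The kill/revive dynamic described above can be sketched as a toy model. To be clear, the threshold and rules here are assumptions for illustration only, not HN's actual algorithm; the point is just the asymmetry: once a post is dead, it's invisible to new readers, so vouches can only come from people who already saw it.

```python
class Post:
    """Toy model of an asymmetric flag/vouch system (illustrative only)."""

    def __init__(self):
        self.flags = 0
        self.vouches = 0

    @property
    def dead(self):
        # Assumption: a post is hidden whenever flags outnumber vouches
        # by even a single net flag.
        return self.flags - self.vouches >= 1

    def flag(self):
        self.flags += 1

    def vouch(self):
        self.vouches += 1


post = Post()
post.flag()           # one flag kills it; new readers no longer see it
print(post.dead)      # True
post.vouch()          # a vouch from someone who already saw it revives it
print(post.dead)      # False
post.flag()           # a second flag kills it again
print(post.dead)      # True
```

Under these assumed rules, a small group of flaggers always has the last word, since each kill shrinks the pool of potential vouchers.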
It is, if the post consistently gets vouched for whenever others stumble across it. Of course, if it only got flags, then that likely represents a widespread opinion.
T4P is one of the most exciting things I've seen in tech. I was an early adopter of both the GitHub banner, for my biggest (9k+ star) GitHub project and profile[1], and of the photo border tool[2].
1) Already exists - No Thanks. ("No Thanks" is the name of the app.) No Thanks explicitly conveyed its alignment with the BDS movement in its Play Store description; Boycat is very generic, and I fear it could be used in the opposite direction as well (although probably to less effect).
2) Good luck finding a sufficiently large source of training data that isn't biased against Palestine
3) This one's just straight up "Hey let's underpay some Palestinians and call it charity"
4) Not a bad idea inherently but centralizing protest organization has some very obvious drawbacks.
5) Is the groundwork for 2. "It provides references" - ok but we know AI hallucinates...
6) Uses AI-generated images in advertising instead of genuine protest photos. Is it because they've never been to a pro-Palestine protest? The signs in the photo seem to be protesting police violence, and while I'm all about intersectionality, this seems like a rebrand (cash grab) of an app/idea originally targeting the BLM protests; they were just too lazy/cheap to keep it on message.
Edit: I’m just now realizing how dogshit of an idea it is to have any kind of AI talk about Palestine when there are living Palestinians and even Israeli Jewish scholars who could debunk the whole topic in 20 questions. Even a well-trained AI pales in comparison to Ilan Pappé.
Edit edit: be skeptical of anyone using AI to solve fascism. The tide finally seems to be shifting on Palestine because we can all see firsthand accounts on our phones
You can just not trust my word, I'm just someone online.
Arrest > Trial > Jail. You can arrest someone on suspicion of a crime, and then you go to trial, which is ostensibly the fact-finding part of the process.
Edit: To anyone who sees this comment, please vouch for the post, as someone has flagged it again.
Here are some other places where I've posted about this in the context of this topic. If you or anyone else takes a look at those explanations and still has a question that isn't answered, I'd be happy to take a crack at it.
https://news.ycombinator.com/item?id=39920732 (April 2024)
https://news.ycombinator.com/item?id=39435024 (Feb 2024)
https://news.ycombinator.com/item?id=38947003 (Jan 2024)
https://news.ycombinator.com/item?id=38749162 (Dec 2023)
https://news.ycombinator.com/item?id=38657527 (Dec 2023)
Congratulations on the launch!
[1]: https://github.com/LGUG2Z
[2]: https://ppm.techforpalestine.org/