They accurately outline that the NYT's claim is not properly supported by facts. There was almost certainly sexual violence on Oct 7. What hasn't been established in an evidence-based way is that there was organized, premeditated weaponization of sexual violence as a tool of war, as the article claims.
If you're going to cite the ICC arrest warrant for Hamas, then you have to acknowledge the one they've issued for Netanyahu for using starvation as a weapon.
One side has been subjugated for over 70 years through control over its land, air and sea; the other is a nation backed by the most powerful countries in the world, filled with a lot of foreigners who lay claim to the place because of something that happened 2,000 years ago.
Guess which side was deemed the baddies because they lashed out?
Was it premeditated weaponization of sexual violence as a tool of war?
The comment you're replying to is not denying that it happened.
What if the weaponization accusation is just an embellishment added to suit a political narrative? Establishing its truth does require evidence, separately from the evidence of the violence itself.
That's even if we happened to find the narrative morally compelling.
I have no idea what difference it makes if the rape was premeditated. The strategy employed by Hamas was to terrorize. You think Gaza should be invaded less harshly because her soldiers decided to rape in the heat of the moment?
Regardless of their initial intentions, I find it difficult to argue that these acts haven't served as a tool of war.
One must also acknowledge that these acts were not isolated. I do not have concrete proof, and maybe there is some out there, but the fact of the matter is that their scale and psychological effects are undeniable.
I am very skeptical that something like "An LLM-based chatbot that answers history and law questions about palestine in a hasbara free way" is going to materially help anyone.
Sure it can, as can any other kind of well-designed Q+A bot, even about far less controversial topics.
The material benefit is twofold: (1) they can potentially save users a great deal of time (a very large portion of search engine queries are straight-up questions, which the articles in the result set sometimes answer, but usually only partially, and it still takes a good chunk of time to plough through them just to get that answer); and (2) high-potency propaganda of any kind (not specific to the hasbara project, though that seems to provide a shining example of such) promotes anxiety, paranoia, and increased susceptibility to psychosis. In fact, to some extent that is its very purpose.
So anything that helps abate the pernicious blight of (2) is potentially quite helpful, both materially and spiritually.
Also, people are going to be using AI chatbots for this kind of research anyway. Rather than hoping the larger players will do the right thing here (or at least avoid doing the wrong thing), it seems quite prudent for independent organizations to pick up the task on their own, and start creating their own bots for these purposes.
Hard disagree on the impact this will have on (2). When Russia attempted to interfere with the 2016 election, they did so overwhelmingly through social media, not by polluting Google search.
And “saving users time while searching” is more likely to be accomplished by improving the query->result pipeline than by improving a tangentially associated technology.
No one is suggesting that these bots will in any way influence ("pollute", per your spin) Google search. Obviously Google won't be incorporating their output, and if anything they'll actively block it the moment they get wind of it.
Also, I didn't specifically say these bots would save users time "while searching", i.e. while using a regular search engine. Though I didn't dig too deeply into the T4P proposal (so I don't know what they're proposing), my own guess is that people would seek out these bots as an alternative to general-purpose search engines -- for example as a sidebar on their favorite news sites.
I strongly disagree that an LLM-based bot will help on any controversial topic. LLMs _will_ hallucinate no matter what you do. They will do the exact opposite of helping by providing false or hallucinated information in a context where you need truth and exactitude.
I think you're overthinking it. This is like saying LLMs will never be useful because they hallucinate. That's a known issue, and yet of course they have been proven to be often quite useful nonetheless.
What it comes down to is: how often do they hallucinate, what's the negative impact when they do (both of which can be measured), and, very importantly, for whatever their measured performance is, how does it compare to the next best alternative that users have?
It's not like they're trying to build a model to design a nuclear reactor in one go. It's just a Q+A bot, whose performance can be easily measured by benchmarking it against the top 30 questions or so in a given subject area (probably accounting for 95 percent of all inputs). And the current alternative users have (search engines) is pretty darn mediocre.
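For what it's worth, here's a rough sketch of the kind of benchmark I have in mind (Python; answer_question() is just a stand-in for whatever interface such a bot would actually expose, and the question/reference pairs are hypothetical examples that would have to be hand-curated):

    # Toy benchmark: run the bot over the most common questions in a subject
    # area and check each answer against a hand-curated reference.
    # answer_question() is a placeholder for the real bot; the question set
    # and the grading rubric are assumptions, not part of any real project.

    TOP_QUESTIONS = [
        # (question, key fact the answer should contain)
        ("When was the Balfour Declaration issued?", "1917"),
        ("When were the first Oslo Accords signed?", "1993"),
        # ... ~30 entries covering the bulk of expected queries
    ]

    def answer_question(question: str) -> str:
        # Stand-in: wire this up to whatever the actual bot exposes.
        return "(bot answer goes here)"

    def grade(answer: str, reference: str) -> bool:
        # Crudest possible automatic check; in practice you'd want a human
        # (or at least a judge model) reading the full answer for accuracy
        # and hallucinated claims, not just doing a substring match.
        return reference.lower() in answer.lower()

    def run_benchmark() -> None:
        correct = sum(grade(answer_question(q), ref) for q, ref in TOP_QUESTIONS)
        print(f"{correct}/{len(TOP_QUESTIONS)} answered acceptably")

    if __name__ == "__main__":
        run_benchmark()

Crude as it is, a harness like this gives you a number you can track over time and compare against how well a search-engine session answers the same questions.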
BTW I'm actually not much of a fan of LLMs or chatbots, so I have nothing to "sell" you here. But this is my rough take, based on my generally quite skeptical attitude toward this technology.
Which does seem to suggest that, at the very least, it's an idea worth exploring.
I'm not overthinking it; some papers describe it, [0] and [1] for example. I agree that for some subjects you can tolerate some error.
The problem is not how often, but how bad a single error can be. My point was about controversial topics, where a single error can do serious damage. Yes, it must be error-free, like a nuclear reactor. Just imagine a Q&A chatbot answering questions on the subject of Israel and Palestine, or something else really touchy: do you really think you can afford any error or hallucination?
[0]: https://arxiv.org/abs/2409.05746 [1]: https://arxiv.org/abs/2401.11817
You are indeed overthinking, because I just said, very clearly, that I acknowledged the hallucination problem, and yet you're throwing citations back as if I never heard of it. How many times does one have to say "it's a known issue"?
> My point was about controversial topics, where a single error can do serious damage.
Okay, but so can a single garbage article in a search engine result. I guess one shouldn't build search engines then (unless they can be held to the same standards as nuclear reactors), because do you think we can afford even a single bad result? Just imagine what will happen, etc.
I think it could be an accessible way to learn more about the topic. But simply reading through the vast written literature on the topic would be a better way to do so.
Users flagged it, which is common on divisive topics. We sometimes turn flags off, even when a topic is divisive, but I think it's best if the underlying article is not a direct advocacy piece (or an organizational announcement*).
Here are some other places where I've posted about this in the context of this topic. If you or anyone else takes a look at those explanations and still has a question that isn't answered, I'd be happy to take a crack at it.
https://news.ycombinator.com/item?id=39920732 (April 2024)
https://news.ycombinator.com/item?id=39435024 (Feb 2024)
https://news.ycombinator.com/item?id=38947003 (Jan 2024)
https://news.ycombinator.com/item?id=38749162 (Dec 2023)
https://news.ycombinator.com/item?id=38657527 (Dec 2023)
(* we tend not to favor posts that announce organizations, because while the organization may be important and its work may be interesting, the announcement posts themselves tend not to be interesting, so they end up fueling generic rather than specific discussion.)
I think you're doing a good job of turning off flags for some divisive posts. I was just surprised at how easy it is to kill a post with just a couple (?) of flags, especially after others have vouched for it. There was this weird behaviour going on where the post got killed, then revived because some user vouched for it, and then another flagged it.
It's also worth noting that this system is ripe for abuse, because the visibility of the post is asymmetric. It gets killed as soon as it gets flagged, so no new users see it, which makes it hard for someone to vouch. There's also the obvious problem of ganging up on a post to flag or vouch for it, which again seems like it can be easily abused by small groups.
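To make the asymmetry concrete, here's a toy simulation (Python). The arrival rates, the rule that a post is mostly hidden whenever flags outnumber vouches, and the fraction of users who still see a hidden post are all my own assumptions for illustration, not a description of how the site actually works:

    import random

    # Toy model: a post is seen by a stream of users; a small fraction would
    # flag it and a small fraction would vouch for it. Compare "always
    # visible" with "mostly hidden whenever flags outnumber vouches".
    # All numbers are made up; this only illustrates the visibility asymmetry.

    def simulate(hide_when_flagged: bool, viewers: int = 1000,
                 p_flag: float = 0.01, p_vouch: float = 0.02,
                 p_see_hidden: float = 0.1) -> bool:
        flags = vouches = 0
        for _ in range(viewers):
            hidden = hide_when_flagged and flags > vouches
            if hidden and random.random() > p_see_hidden:
                continue  # most users never see a hidden post, so can't vouch
            r = random.random()
            if r < p_flag:
                flags += 1
            elif r < p_flag + p_vouch:
                vouches += 1
        return vouches >= flags  # "survives" if vouches keep up with flags

    def survival_rate(hide_when_flagged: bool, trials: int = 2000) -> float:
        return sum(simulate(hide_when_flagged) for _ in range(trials)) / trials

    if __name__ == "__main__":
        random.seed(0)
        print("always visible:      ", survival_rate(False))
        print("hidden while flagged:", survival_rate(True))

In this toy model the post is far more likely to end up dead when being flagged hides it, even though would-be vouchers outnumber flaggers two to one, simply because most of them never get to see it.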
It is if the post consistently gets vouched for whenever others stumble across it. Of course, if it only got flags, then it likely represents a widespread opinion.
There are lots of such asymmetries (although yours isn't very accurate, since posts don't get hidden the moment they are flagged), but however the system is tuned, it's still not evidence of abuse. People disagree on what stories belong where, and the system may work differently than the way you prefer or assume, but neither of these is evidence of abuse. That's not to say there isn't abuse; it's just not clear how you reach that conclusion.
T4P is one of the most exciting things I've seen in tech. I was an early adopter of both the GitHub banner, on my biggest 9k+ star GitHub project and profile[1], and the photo border tool[2]. Congratulations on the launch!
[1]: https://github.com/LGUG2Z
[2]: https://ppm.techforpalestine.org/
No matter one's views on the whole issue, I find it quite bold, but also risky, to start such a project in the US. At its core, this is still an armed conflict that the US is actively involved in (for much of the conflict's ~140-year history, but especially the last ~70 years).
So I'd expect a US-based incubator or startup with an explicit pro-Palestine agenda to be at risk of getting into legal hot water very quickly if it doesn't commit itself to purely civilian and nonviolent projects.
All the projects currently listed on the page fulfill that condition, but I don't see what the incubator's stance towards projects with military significance or "dual use" potential would be.
1) Already exists: No Thanks ("No Thanks" is the name of the app). No Thanks explicitly conveyed its alignment with the BDS movement in its description on the Play Store; Boycat is very generic, and I fear it could be used in the opposite direction as well (although probably to less effect).
2) Good luck finding a sufficiently large source of training data that isn't biased against Palestine
3) This one's just straight up "Hey let's underpay some Palestinians and call it charity"
4) Not a bad idea inherently but centralizing protest organization has some very obvious drawbacks.
5) Is the groundwork for 2). "It provides references" - OK, but we know AI hallucinates...
6) Uses AI-generated images in its advertising instead of genuine protest photos. Is it because they've never been to a pro-Palestine protest? The signs in the photo seem to be protesting police violence, and while I'm all about intersectionality, this seems like a rebrand (cash grab) of an app/idea originally targeting the BLM protests, and they were just too lazy/cheap to keep it on message.
Edit: I'm just now realizing how dogshit of an idea it is to have any kind of AI talk about Palestine when there are living Palestinians and even Israeli Jewish scholars who could debunk the whole topic in 20 questions. Even a well-trained AI pales in comparison to Ilan Pappé.
Edit edit: be skeptical of anyone using AI to solve fascism. The tide finally seems to be shifting on Palestine because we can all see firsthand accounts on our phones.
Arrest > Trial > Jail. You can arrest someone on suspicion of a crime, and then you go to trial, which is ostensibly the fact-finding segment of things.
You don't have to trust my word; I'm just someone online.
Edit: To anyone that sees this comment, please vouch for the post as someone has flagged it again.