20 comments

  • nappy-doo 9 hours ago
    I worked at FB for almost 2 years. (I left as soon as I could; I knew it wasn't a good fit for me.)

    I took an Uber from campus one day, and my driver, a twenty-something girl, was asking how to become a moderator. I told her, "no amount of money would be enough for me to do that job. Don't do it."

    I don't know if she eventually got the job, but I hope she didn't.

    • narrator 8 hours ago
      Yes, these jobs are horrible. However, I do know from accidentally encountering bad stuff on the internet that you want to be as far away from a modern battlefield as possible.

      It's just kind of ridiculous how people think war is like Call of Duty. One minute you're sitting in a trench, the next you're a pile of undifferentiated blood and guts. Same goes for car accidents and stuff. People really underestimate how fragile we are as human beings. Becoming aware of this is super damaging to our concept of normal life.

      • noduerme 7 hours ago
        Watching someone you love die of cancer is also super damaging to one's concept of normal life. Getting a diagnosis, being in a bad car accident, or being the victim of a violent assault is, too. I think a personal sense of normality is nothing more than the state of mind where we can blissfully (and temporarily) forget about our own mortality. Obviously, marinating yourself in all the horrible stuff makes it really hard to maintain that state of mind.

        On the other hand, never seeing or reckoning with or preparing for how brutal reality actually is can lead to a pretty bad shock once something bad happens around you. And maybe worse, can lead you to under-appreciate how fantastic and beautiful the quotidian moments of your normal life actually are. I think it's important to develop a concept of normal life that doesn't completely ignore that really bad things happen all around us, all the time.

        • karlgkk 51 minutes ago
          Frankly,

          there’s a difference between a one- or two- or even ten-off exposure to the brutality of life, where various people in your life will support you and help you acclimate to it,

          versus straight up mainlining it for 8 hours a day

      • sandworm101 9 minutes ago
        >> ridiculous how people think war is like Call of Duty.

        It is also ridiculous how people think every soldier's experience is like Band of Brothers or Full Metal Jacket. I remember an interview with a WWII vet who had been on Omaha Beach: "I don't remember anything happening in slow motion ... I do remember eating a lot of sand." The reality of war is often just not visually interesting enough to put on the screen.

      • FireBeyond 5 hours ago
        Speaking as a paramedic, two things come to mind:

        1) I don't have squeamishness about trauma. In the end, we are all blood and tissue. The calls that get to me are the emotionally traumatic ones: the child abuse, domestic violence, and elder abuse (which of course often have a physical component too, but it's the emotional side for me), and the tragic, often preventable accidents.

        2) There are many people (and I get the curiosity) who will ask "what's the worst call you've been on?" One, you don't really want to hear it, and two, "Hey, person I may barely know, do you think you can revisit something traumatic for my benefit/curiosity?"

        • Modified3019 8 minutes ago
          That’s an excellent way to put it, resonates with my (non medical) experience. It’s the emotional stuff that will try to follow me around and be intrusive.

          I won’t watch most movies or TV because they are just some sort of tragedy porn.

        • BobaFloutist 32 minutes ago
          It's also super easy to come up with better questions: "What's the funniest call you've ever been on?" "Which call do you feel you made the biggest difference on?" "What's the best story you have?"
      • int_19h 7 hours ago
        It's not that we're particularly fragile, given the kind of physical trauma human beings can survive and recover from.

        It's that we have technologically engineered things that are destructive enough to get even past that threshold. Modern warfare in particular is insanely energetic in the most literal, physical way - when you measure the energy output of weapons in joules. Partly because we're just that good at making things explode, and partly because improvements in metallurgy and electronics made it possible over time to locate targets with extreme precision in real time and then concentrate a lot of firepower directly on them. This, in particular, is why the most intense battlefields in Ukraine often look worse than WW1 and WW2 battles of similar intensity (e.g. Mariupol had more buildings destroyed than Stalingrad).

        But even our small arms deliver much more energy to the target than their historical equivalents. Bows and arrows pack ~150 J at close range, rapidly diminishing with distance. Crossbows can increase this to ~400 J. For comparison, an AK-47 firing standard-issue military ammo delivers ~2000 J.
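
        As a rough sanity check on that last figure, using the standard kinetic energy formula and assuming a typical 7.62x39mm round of roughly 7.9 g at roughly 715 m/s (ballpark published values, not official specs):

            E = 1/2 * m * v^2 ≈ 0.5 * 0.0079 kg * (715 m/s)^2 ≈ 2000 J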

        • llm_trw 6 hours ago
          Watch how a group of wild dogs kill their prey, then realise that for millennia human-like apes were part of their diet. Even the modern battlefield is more humane than the African savannah.
        • Dalewyn 3 hours ago
          >Crossbows can increase this to ~400 J.

          Funny you mention crossbows; the Church at one point in time tried to ban them because they democratized violence to a truly trivial degree. They were the nuclear bombs and assault rifles of medieval times.

          Also, I will take this moment to mention that the "problem" with weapons always seems to be how quickly they can kill rather than the killing itself. Kind of takes away from the discussion once that is realized.

        • newsclues 6 hours ago
          Humans can render other humans unrecognizable with a rock.

          Brutal murder is low tech.

      • nradov 4 hours ago
        I don't mean to trivialize traumatic experiences but I think many modern people, especially the pampered members of the professional-managerial class, have become too disconnected from reality. Anyone who has hunted or butchered animals is well aware of the fragility of life. This doesn't damage our concept of normal life.
        • Eisenstein 3 hours ago
          What is it about partaking in or witnessing the killing of animals or humans that makes one more connected to reality?
          • AnarchismIsCool 3 hours ago
            Lots of people who spend time working with livestock on a farm describe a certain acceptance and understanding of death that most modern people have lost.
            • Eisenstein 3 hours ago
              Are farmers more willing to discuss things like end of life medical decisions?

              Are they more amenable to terminally ill people having access to euthanasia?

              Do they cope better after losing loved ones?

              Are there other ways we can get a sense of how a more healthy acceptance of mortality would manifest?

              Would be interested in this data if it is available.

          • Dalewyn 3 hours ago
            In Japan, some sushi bars keep live fish in tanks that you can order to have served to you as sushi/sashimi.

            The chefs butcher and serve the fish right in front of you, and because it was alive merely seconds ago the meat will still be twitching when you get it. If they also serve the rest of the fish as decoration, the fish might still be gasping for oxygen.

            The Japanese don't really think much of it; they're used to it and acknowledge the fleeting nature of life, and that eating something means you are taking another life to sustain your own.

            The same environment will likely leave most westerners squeamish or perhaps even gag simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in.

            Personally, I enjoy meat while respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point, just like me. Salads too; those vegetables were (are?) just as alive as I am.

            • Eisenstein 3 hours ago
              If I were to cook a pork chop in the kitchen of some of my middle eastern relatives they would feel sick and would probably throw out the pan I cooked it with (and me from their house as well).

              Isn't this similar to why people unfamiliar with that style of seafood would feel sick -- cultural views on what is and is not normal food -- and not because of their view of mortality?

              • Dalewyn 3 hours ago
                You're not grasping the point, which I don't necessarily blame you for.

                Imagine that to cook that pork chop, the chef starts by butchering a live pig. Also imagine that he does that in view of everyone in the restaurant rather than in the "backyard" kitchen, let alone a separate butchering facility hundreds of miles away.

                That's the sushi chef butchering and serving a live fish he grabbed from the tank behind him.

                When you can actually see where your food is coming from and what "food" truly even is, that gives you a better grasp on reality and life.

                It's also the true meaning behind the often used joke that goes: "You don't want to see how sausages are made."

      • doublerabbit 6 hours ago
        One does not fully experience life until you encounter the death of something you care about. Be it a pet or a person, nothing gives you that real sense of reality until your true feelings are challenged.

        I used to live in the Disney headspace until my dog had to be put down. Now, with my parents in their seventies and me in my thirties, I fear losing them the most, as the feeling of losing my dog was hard enough.

        • batch12 3 hours ago
          That's the tragic consequence of being human. Either the people you care about leave first or you do, but in the end, everyone goes. We are blessed and cursed with the knowledge to understand this. We should try to maximize the time we spend with those that are important to us.
    • LeftHandPath 6 hours ago
      Earlier this year, I was at ground zero of the Super Bowl parade shooting. I didn’t ever dream about it, but I spent the following 3-4 days constantly replaying it in my head in my waking hours.

      Later in the year I moved to Florida, just in time for Helene and Milton. I didn’t spend much time thinking about either of them (aside from during prep and cleanup and volunteering a few weeks after). But I had frequent dreams of catastrophic storms and floods.

      Different stressors affect people (even myself) differently. Thankfully I’ve never had a major/long-term problem, but I know my reactions to major life stressors never seemed to have any rhyme or reason.

      I can imagine many people might’ve been through a few things that made them confident they’d be alright with the job, only to find out dealing with that stuff 8 hours a day, 40 hours a week is a whole different ball game.

      • sandworm101 3 hours ago
        A parade shooting is bad, very bad, but is still tame compared to the sorts of things to which website moderators are exposed on a daily/hourly basis. Footage of people being shot is actually allowed on many platforms. Just think of all the war footage that is so common these days. The dark stuff that moderators see is way way worse.
        • wkat4242 58 minutes ago
          > Footage of people being shot is actually allowed on many platforms.

          It's also part of almost every American cop and military show and movie. Of course it's not real but it looks the same.

    • consumer451 4 hours ago
      I have often wondered what would happen if social product orgs required all dev and product team members to temporarily rotate through moderation a couple times a year.
      • alex-korr 4 hours ago
        I can tell you that back when I worked as a dev for the department building order fulfillment software at a dotcom, my perspective on my own product changed drastically after I spent a month at a warehouse that was shipping orders coming out of the software we wrote. Eating my own dog food was not pretty.
      • Teever 4 hours ago
        Yeah I've wondered the same thing about jobs in general too.

        Society would be a very different place if everyone had to do customer service or janitorial work one weekend a month.

  • azinman2 14 minutes ago
    > The moderators from Kenya and other African countries were tasked from 2019 to 2023 with checking posts emanating from Africa and in their own languages but were paid eight times less than their counterparts in the US, according to the claim documents

    Why would pay in different countries be equivalent? Pretty sure FB doesn’t even pay the same to their engineers depending on where in the US they are, let alone which country. Cost of living dramatically differs.

  • pllbnk 10 minutes ago
    There have been multiple instances where I would receive invites or messages from obvious bots - users with no history, a generic name, a sexualised profile photo. I would always report them to Facebook, just to receive a reply an hour or a day later that no action had been taken. This means there is no human in the pipeline, and probably only the stuff that doesn't pass their abysmal ML filter goes to actual people.

    I also have a relative whose profile is stuck: they are unable to change any contact details, neither email nor password, because the FB account center doesn't open for them. Again, there is no human support.

    BigTech companies must be mandated by law to have a number of live support people, working and reachable, that is a fixed fraction of their user count. Then, they would have no incentive to inflate their user numbers artificially. As for the moderators, there should also be a strict upper limit on the amount of content (content tokens, if you will) they view during their work day. Then the companies would also be more willing to limit the amount of content on their systems.

    Yeah, it's bad business for them but it's a win for the people.

  • pluc 8 hours ago
    Worked at PornHub's parent company for a bit and the moderation floor had a noticeable depressive vibe. Huge turnover. Can't imagine what these people were subjected to.
    • HenryBemis 8 hours ago
      You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.

      I will go ahead and assume that in the wild/carefree days of PornHub, when anyone could upload anything and everything, from what that lady said, the numbers of pedophilia videos, bestiality, etc. were rampant.

      • pluc 7 hours ago
        Yeah, it was during that time, before the great purge. It's not just sexual depravity, people used that site to host all kinds of videos that would get auto-flagged anywhere else (including, the least of it, full movies).
      • chimeracoder 4 hours ago
        > You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.

        Laila Mickelwait is a director at Exodus Cry, formerly known as Morality in Media (yes, that's their original name). Exodus Cry/Morality in Media is an explicitly Christian organization that openly seeks to outlaw all forms of pornography, in addition to outlawing abortion and rolling back many gay rights, including marriage. Their funding comes largely from right-wing Christian fundamentalist and fundamentalist-aligned groups.

        Aside from the fact that she has an axe to grind, both she (as an individual) and the organization she represents have a long history of misrepresenting facts or outright lying in order to support their agenda. They also intentionally and openly refer to all forms of sex work (from consensual pornography to stripping to sexual intercourse) as "trafficking", against the wishes of survivors of actual sex trafficking, who have extensively documented why Exodus Cry actually perpetuates harm against sex trafficking victims.

        > everything, from what that lady said, the numbers of pedophilia videos, bestiality, etc. were rampant.

        This was disproven long ago. Pornhub was actually quite good about proactively flagging and blocking CSAM and other objectionable content. Ironically (although not surprisingly, if you're familiar with the industry), Facebook was two to three orders of magnitude worse than Pornhub.

        But of course, Facebook is not targeted by Exodus Cry because their mission - as you can tell by their original name of "Morality in Media" - is to ban pornography on the Internet, and going after Facebook doesn't fit into that mission, even though Facebook is actually way worse for victims of CSAM and trafficking.

      • throwaway314155 6 hours ago
        [flagged]
  • 1vuio0pswjnm7 6 hours ago
    Perhaps this is what happens when someone creates a mega-sized website comprising hundreds of millions of pages using other peoples' submitted material, effectively creating a website that is too large to "moderate". By letting the public publish their material on someone else's mega-sized website instead of hosting their own, perhaps it concentrates the web audience to make it more suitable for advertising. Perhaps if the PTSD-causing material was published by its authors on the authors' own websites, the audience would be small, not suitable for advertising. A return to less centralised web publishing would perhaps be bad for the so-called "ad ecosystem" created by so-called "tech" company intermediaries. To be sure, it would also mean no one in Kenya would be intentionally subjected to PTSD-causing material in the name of fulfilling the so-called "tech" industry's only viable "business model": surveillance, data collection and online ad services.
    • coryrc 3 hours ago
      It's a problem when you don't verify the identity of your users and hold them responsible for illegal things. If Facebook verified you were John D, SSN 123-45-6789, they could report you for uploading CSAM and otherwise permanently block you from using the site for uploading objectionable material, meaning exposure to horrific things is only necessary once per banned user. I would expect that to be orders of magnitude less than what they deal with today.
    • croissants 6 hours ago
      A return to less centralized web publishing would also be bad for the many creators who lack the technical expertise or interest to jump through all the hoops required to build and host their own website. Maybe this seems like a pretty small friction to the median HN user, but I don't think it's true for creators in general, as evidenced by the enormous increase in both the number and sophistication of online creators over the past couple of decades.

      Is that increase worth traumatizing moderators? I have no idea. But I frequently see this sentiment on HN about the old internet being better, framed as criticism of big internet companies, when it really seems to be at least in part criticism of how the median internet user has changed -- and the solution, coincidentally, would at least partially reverse that change.

      • moomin 5 hours ago
        I mean, the technical expertise thing is solvable, it’s just that no one wants to solve it because SaaS is extremely lucrative.
  • fouronnes3 9 hours ago
    Absolutely grim. I wouldn't wish that job on my worst enemy. The article reminded me of a Radiolab episode from 2018: https://radiolab.org/podcast/post-no-evil
  • oefrha 4 hours ago
    They should probably hire more part-time people working one hour a day?

    Btw, it’s probably a different team handling copyright claims, but my run-in with Meta’s moderation gives me the impression that they’re probably horrifically understaffed. I was helping a Chinese content creator friend take down Instagram, YouTube and TikTok accounts re-uploading her content and/or impersonating her (she doesn’t have any presence on these platforms and doesn’t intend to). Reported to TikTok twice, got it done once within a few hours (I was impressed) and once within three days. Reported to YouTube once and it was handled five or six days later. No further action was needed from me after submitting the initial form in either case. Instagram was something else entirely; they used Facebook’s reporting system, the reporting form was the worst, it asked for very little information upfront but kept sending me emails afterwards asking for more information, then eventually radio silence. I sent follow-ups asking about progress; again, radio silence. The impersonation account with outright stolen content is still up to this day.

  • para_parolu 9 hours ago
    One of the few fields where AI is very welcome
    • hinkley 9 hours ago
      I’m wondering if, like looking out from behind a blanket at horror movies, getting a moderately blurred copy of images would reduce the emotional punch of highly inappropriate pictures. Or just scaled down tiny.

      If it’s already bad blurred or as a thumbnail, don’t click on the real thing.
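
      A minimal sketch of what that could look like with Pillow (a hypothetical pre-review step; the file name, blur radius, and thumbnail size are made-up illustration values, not anything any platform actually uses):

          from PIL import Image, ImageFilter

          def soften_for_review(path):
              # Derive two less-confronting previews from the original image.
              img = Image.open(path)
              blurred = img.filter(ImageFilter.GaussianBlur(radius=12))  # heavy blur
              thumb = img.copy()
              thumb.thumbnail((96, 96))  # shrink in place to a tiny preview
              return blurred, thumb

          # A review tool could show only these first and require an explicit
          # click-through before loading the full-resolution original.
          blurred, thumb = soften_for_review("queued_upload.jpg")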

      • EasyMark 1 hour ago
        I'd be fine with that as long as it was something I could turn off and on at will
    • sunaookami 7 hours ago
      No, this just leads to more censorship without any option to appeal.
      • soulofmischief 7 hours ago
        Not if we retain control and each deploy our own moderation individually, relying on trust networks to pre-filter. That probably won't be allowed to happen, but in a rational, non-authoritarian world, this is something that machine learning can help with.
      • krior 7 hours ago
        Curious, do you have a better solution?
        • throw__away7391 4 hours ago
          The solution to most social media problems in general is:

          `select * from posts where author_id in @follow_ids order by date desc`

          At least 90% of the ills of social media are caused by using algorithms to prioritize content and determine what you're shown. Before these were introduced, you just wouldn't see these types of things unless you chose to follow someone who chose to post it, and you didn't have people deliberately creating so much garbage trying to game "engagement".

          • mulmen 2 hours ago
            I'd love a chronological feed but if you gave me a choice I'd get rid of lists in SQL first.

            > select * from posts where author_id in @follow_ids order by date desc

            SELECT post FROM posts JOIN follows ON posts.author_id = follows.author_id WHERE follows.user_id = $session.user_id ORDER BY posts.date DESC;

      • SoftTalker 3 hours ago
        Nobody has a right to be published.
      • jsemrau 6 hours ago
        That's a workflow problem.
      • slothtrop 5 hours ago
        > without any option to appeal.

        Why would that be?

        Currently content is flagged and moderators decide whether to take it down. Using AI, it's easy to conceive of a process where some uploaded content is pre-flagged, requiring an appeal (otherwise it's the same as before, a pair of human eyes automatically looking at uploaded material).

        Uploaders trying to publish rule-breaking content would not bother with an appeal that would reject them anyway.

        • Eisenstein 3 hours ago
          Because edge cases exist, and it isn't worth it for a company to hire enough staff to deal with them; one user with a problem, even if that problem is highly impactful to their life, just doesn't matter when the user is effectively the product and not the customer. Once the AI works well enough, the staff is gone, and the cases where someone's business or reputation gets destroyed because there is no way to appeal a wrong decision by a machine get ignored. And of course 'the computer won't let me' or 'I didn't make that decision' is a great way for no one to ever have to feel responsible for any harms caused by such a system.
    • Havoc 9 hours ago
      I would have hoped the previously-seen & clearly recognisable stuff already gets auto-flagged.
    • 29athrowaway 8 hours ago
      And then the problem is moved to the team curating data sets.
    • antegamisou 9 hours ago
      You know what is going to end up happening, though, is something akin to Tesla's "autonomous" Optimus robots.
    • itake 9 hours ago
      Maybe... Apple had a lot of backlash for using AI to detect CSAM.
      • Cyph0n 9 hours ago
        Wasn’t the backlash due to the fact that they were running detection on device against your private library?
        • threeseed 9 hours ago
          Yes. As opposed to running it on their servers like they do now.

          And it was only for iCloud synced photos.

          • Zak 9 hours ago
            There's a huge gap between "we will scan our servers for illegal content" and "your device will scan your photos for illegal content" no matter the context. The latter makes the user's device disloyal to its owner.
            • FabHK 8 hours ago
              The choice was between "we will upload your pictures unencrypted and do with them as we like, including scan them for CSAM" vs. "we will upload your pictures encrypted and keep them encrypted, but will make sure beforehand, on your device only, that there's no known CSAM among them".
            • itake 3 hours ago
              Apple is already categorizing content on your device. Maybe they don't report what categories you have. But I know if I search for "cat" it will show me pictures of cats on my phone.
            • aaomidi 9 hours ago
              And introduces avenues for state actors to force the scanning of other material.

              This was also during a time where Apple hadn’t pushed out e2ee for iCloud, so it didn’t even make sense.

              • shadowgovt 8 hours ago
                This ship has pretty much sailed.

                If you are storing your data in a large commercial vendor, assume a state actor is scanning it.

                • kotaKat 6 hours ago
                  I'm shocked at the number of people I've seen on my local news getting arrested lately for it, and it all comes from the same starting tip:

                  "$service_provider sent a tip to NCMEC" or "uploaded a known-to-NCMEC hash", ranging from GMail, Google Drive, iCloud, and a few others.

                  https://www.missingkids.org/cybertiplinedata

                  "In 2023, ESPs submitted 54.8 million images to the CyberTipline of which 22.4 million (41%) were unique. Of the 49.5 million videos reported by ESPs, 11.2 million (23%) were unique."

                  • shadowgovt 4 hours ago
                    And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they. The slippery slope argument only applies if the slope is slippery.

                    This is analogous to the police's use of genealogy and DNA data to narrow searches for murderers, who they then collected evidence on by other means. There is risk there, but (at least in the US) you aren't going to find a lot of supporters of the anonymity of serial killers and child abusers.

                    There are counter-arguments to be made. Germany is skittish about mass data collection and analysis because of their perception that it enabled the Nazi war machine to micro-target their victims. The US has no such cultural narrative.

                    • tzs 2 hours ago
                      > And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they.

                      I wouldn't be so sure.

                      When Apple was going to introduce on-device scanning they actually proposed to do it in two places.

                      • When you uploaded images to your iCloud account they proposed scanning them on your device first. This is the one that got by far the most attention.

                      • The second was to scan incoming messages on phones that had parental controls set up. The way that would have worked is:

                      1. if it detects sexual images it would block the message, alert the child that the message contains material that the parents think might be harmful, and ask the child if they still want to see it. If the child says no that is the end of the matter.

                      2. if the child say they do want to see it and the child is at least 13 years old, the message is unblocked and that is the end of the matter.

                      3. if the child says they do want to see it and the child is under 13 they are again reminded that their parents are concerned about the message, again asked if they want to view it, and told that if they view it their parents will be told. If the child says no that is the end of the matter.

                      4. If the child says yes the message is unblocked and the parents are notified.

                      This second one didn't get a lot of attention, probably because there isn't really much to object to. But I did see one objection from a fairly well known internet rights group. They objected to #4 on the grounds that the person sending the sex pictures to your under-13 year old child sent the message to the child, so it violates the sender's privacy for the parents to be notified.

      • PittleyDunkin 9 hours ago
        I don't think the problem there is the AI aspect
        • itake 3 hours ago
          My understanding was the false positive (FP) risk. Everything was on device. People designed images that were FPs of real images.
      • sneak 9 hours ago
        No, they had backlash against using AI on devices they don’t own to report said devices to police for having illegal files on them. There was no technical measure to ensure that the devices being searched were only being searched for CSAM, as the system can be used to search for any type of images chosen by Apple or the state. (Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}.)

        That’s a very very different issue.

        I support big tech using AI models running on their own servers to detect CSAM on their own servers.

        I do not support big tech searching devices they do not own in violation of the wishes of the owners of those devices, simply because the police would prefer it that way.

        It is especially telling that iCloud Photos is not end to end encrypted (and uploads plaintext file content hashes even when optional e2ee is enabled) so Apple can and does scan 99.99%+ of the photos on everyone’s iPhones serverside already.

        • skissane 9 hours ago
          > Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}

          It hasn’t been redefined. The legal definition of it in the UK, Canada, Australia, New Zealand has included computer generated imagery since at least the 1990s. The US Congress did the same thing in 1996, but the US Supreme Court ruled in the 2002 case of Ashcroft v Free Speech Coalition that it violated the First Amendment. [0] This predates GenAI because even in the 1990s people saw where CGI was going and could foresee this kind of thing would one day be possible.

          Added to that: a lot of people misunderstand what that 2002 case held. SCOTUS case law establishes two distinct exceptions to the First Amendment – child pornography and obscenity. The first is easier to prosecute and more commonly prosecuted; the 2002 case held that "virtual child pornography" (made without the use of any actual children) does not fall into the scope of the child pornography exception – but it still falls into the scope of the obscenity exception. There is in fact a distinct federal crime for obscenity involving children as opposed to adults, 18 USC 1466A ("Obscene visual representations of the sexual abuse of children") [1] enacted in 2003 in response to this decision. Child obscenity is less commonly prosecuted, but in 2021 a Texas man was sentenced to 40 years in prison over it [2] – that wasn't for GenAI, that was for drawings and text, but if drawings fall into the legal category, obviously GenAI images will too. So actually it turns out that even in the US, GenAI materials can legally count as CSAM, if we define CSAM to include both child pornography and child obscenity – and this has been true since at least 2003, long before the GenAI era.

          [0] https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...

          [1] https://www.law.cornell.edu/uscode/text/18/1466A

          [2] https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...

          • blackeyeblitzar 4 hours ago
            Thanks for the information. However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity. If no other crime (like against a real child) is committed in creating the content, what makes it different from any other speech?
            • skissane 2 hours ago
              > However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity

              If you look at the question from an originalist viewpoint: did the legislators who drafted the First Amendment, and voted to propose and ratify it, understand it as an exceptionless absolute or as subject to reasonable exceptions? I think if you look at the writings of those legislators, the debates and speeches made in the process of its proposal and ratification, etc, it is clear that they saw it as subject to reasonable exceptions – and I think it is also clear that they saw obscenity as one of those reasonable exceptions, even though they no doubt would have disagreed about its precise scope. So, from an originalist viewpoint, having some kind of obscenity exception seems very constitutionally justifiable, although we can still debate how to draw it.

              In fact, I think from an originalist viewpoint the obscenity exception is on firmer ground than the child pornography exception, since the former is arguably as old as the First Amendment itself is, the latter only goes back to the 1982 case of New York v. Ferber. In fact, the child pornography exception, as a distinct exception, only exists because SCOTUS jurisprudence had narrowed the obscenity exception to the point that it was getting in the way of prosecuting child pornography as obscene – and rather than taking that as evidence that maybe they'd narrowed it a bit too far, SCOTUS decided to erect a separate exception instead. But, conceivably, SCOTUS in 1982 could have decided to draw the obscenity exception a bit more broadly, and a distinct child pornography exception would never have existed.

              If one prefers living constitutionalism, the question is – has American society "evolved" to the point that the First Amendment's historical obscenity exception ought to be jettisoned entirely, as opposed to merely being read narrowly? Does the contemporary United States have a moral consensus that individuals should have the constitutional right to produce graphic depictions of child sexual abuse, for no purpose other than their own sexual arousal, provided that no identifiable children are harmed in its production? I take it that is your personal moral view, but I doubt the majority of American citizens presently agree – which suggests that completely removing the obscenity exception, even in the case of virtual CSAM material, cannot currently be justified on living constitutionalist grounds either.

        • itake 3 hours ago
          My understanding was the false positive (FP) risk. The hashes were computed on device, but the device would self-report to LEO if it detected a match.

          People designed images that were FPs of real images. So apps like WhatsApp that auto-save images to photo albums could cause people a big headache if a contact shared a legal FP image.

        • dialup_sounds 6 hours ago
          Weird take. The point of on-device scanning is to enable E2EE while still mitigating CSAM.
          • sneak 5 hours ago
            No, the point of on-device scanning is to enable authoritarian government overreach via a backdoor while still being able to add “end to end encryption” to a list of product features for marketing purposes.

            If Apple isn’t free to publish e2ee software for mass privacy without the government demanding they backdoor it for cops on threat of retaliation, then we don’t have first amendment rights in the USA.

        • threeseed 9 hours ago
          > they don’t own to report said devices to police for having illegal files on them

          They do this today. https://www.apple.com/child-safety/pdf/Expanded_Protections_...

          Every photo provider is required to report CSAM violations.

      • pluc 8 hours ago
        Probably because you need to feed it child porn so it can detect it...
        • hirvi74 4 hours ago
          Already happened/happening. I have an ex-coworker who left my current employer for my state's version of the FBI. Long story short, the government has a massive database to crosscheck against. Oftentimes, they would use automated processes to filter through suspicious data they collected during arrests.

          If the automated process flags something as a potential hit, then they, the humans, would review those images to verify. Every image/video that is discovered to be a hit is also inserted into a larger dataset. I can't remember if the Feds have their own DB (why wouldn't they?), but the National Center for Missing and Exploited Children runs a database that I believe government agencies use too. Not to mention, companies like Dropbox, Google, etc. all check hashes against the database(s) as well.

      • llm_trw 9 hours ago
        Apple had a lot of backlash for using AI to scan every photo you ever took and sending it back to the mothership for more training.
  • jkestner 8 hours ago
    Borrowing the thought from Ed Zitron, but when you think about it, most of us are exposing ourselves to low-grade trauma when we step onto the internet now.
    • rnewme 7 hours ago
      That's the risk of being in a society in general, it's just that we interact with people outside way less now. If one doesn't like it, they can always be a hermit.
      • jkestner 5 hours ago
        Not just that, but that algorithms are driving us to the extremes. I used to think it was just that humans were not meant to have this many social connections, but it's more about how these connections are mediated, and by whom.

        Worth reading Zitron's essay if you haven't already. It sounds obvious, but the simple cataloging of all the indignities we take for granted builds up to a bigger condemnation than just Big Tech. https://www.wheresyoured.at/never-forgive-them/

  • wkat4242 1 hour ago
    I have several friends who do this work for various platforms.

    The problem is, someone has to do it. These platforms are mandated by law to moderate it or else they're responsible for the content the users post. And the companies cannot shield their employees from it because the work simply needs doing. I don't think we can really blame the platforms (though I think the remuneration could be higher for this tough work).

    The work tends to suit some people better than others, the same way some people will never be able to work as a forensic doctor doing autopsies. Some have better detachment skills.

    All the people I know that do this work have 24/7 psychologists on site (most of them can't work remotely due to the private content they work with). I do notice though that most of them do have an "Achilles heel". They tend to shrug most things off without a second thought but there's always one or two specific things or topics that haunt them.

    Hopefully eventually AI will be good enough to deal with this shit. It sucks for their jobs of course, but it's not the kind of job anyone really does with pleasure.

  • blueflow 7 hours ago
    I'm curious about the content that these people moderated. What is it about seeing it that fucks people up?
    • crystal_revenge 7 hours ago
      From the first paragraph of the article:

      > post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism.

      If you want a taste of the legal portion of these, just go to 4chan.org/gif/catalog and look for a "rekt", "war", "gore", or "women hate" thread. Watch every video there for 8-10 hours a day.

      Now remember, this is the legal portion of the content moderated, as 4chan does a good job these days of removing the illegal content mentioned in that list above. So all these examples will be a milder sample of what moderators deal with.

      And do remember to browse for 8-10 hours a day.

      edit: it should go without saying that the content there is deep in the NSFW territory, and if you haven't already stumbled upon that content, I do not recommend browsing "out of curiosity".

      • dyauspitr 7 hours ago
        As someone who grew up with 4chan, I got pretty desensitized to all of the above very quickly. The only thing I couldn’t watch was animal abuse videos. That was all years ago though; now I’m fully sensitized to all of it again.
        • azinman2 15 minutes ago
          Did your parents know what you were seeing? Advice to others to not have kids see this kind of stuff, let alone get desensitized to it?

          What drew you to 4chan?

        • sandspar 4 hours ago
          The point is that you don't know which one will stick. Even people who are desensitized will remember certain things, a person's facial expression or a certain sound or something like that, and you can't predict which one will stick with you.
    • bdangubic 7 hours ago
      things that you cannot unsee, the absolute worst of humanity
    • kernal 7 hours ago
      There was a report by 60 minutes (I think) on this fairly recently. I’m not surprised the publicity attracted lawyers soon after.
  • atleastoptimal 7 hours ago
    Obvious job that would benefit everyone for AI to do instead of humans.
  • percentcer 7 hours ago
    it's kinda crazy that they have normies doing this job
    • istjohn 7 hours ago
      Normies? As opposed to who?
      • medvezhenok 6 hours ago
        [flagged]
        • loriverkutya 6 hours ago
          I’m not sure what is behind your assumption; if it’s the "autistic people do not have empathy" myth, please read up on the topic.
          • wkat4242 56 minutes ago
            Autistic people do have empathy; it just works differently. Most of them are actually really caring, just not very good at showing it, nor at picking up others' feelings. But they do care about them, in my experience.

            Most of them I know will have more difficulty with this type of work, not less. Because they don't tend to process it as well. This includes myself as I do have some autistic tendencies. No way I could do this.

        • xvector 6 hours ago
          I'd wager they'd still have PTSD, but wouldn't be able to communicate it as well as a normal person.

          What you really want is AI doing this job. Or psychopaths/unempathetic people if that's not an option.

    • jsheroes 6 hours ago
      [dead]
  • shadowgovt 9 hours ago
    Good! I hope they get every penny owed. It's an awful job, and outsourcing it to jurisdictions without protection was naked harm maximization.
  • xvector 6 hours ago
    This is the one job we can probably automate now.
  • decremental 9 hours ago
    [dead]
  • blackeyeblitzar 9 hours ago
    It’s the job they signed up for. I don’t understand the complaint. If they don’t want to do the part of the job that is obviously core to it, they should move on. The mass diagnosis just seems like a tactic to generate “evidence”. And the mention of pay compared to other countries makes this look like a bad faith lawsuit to get more compensation.
    • prng2021 9 hours ago
      You think people who took these jobs had a list of job offers and were jumping for joy to be able to pick this one out? Or that they stuck with it after the first 5 minutes of moderating necrophilia because they believed other jobs would have similar downsides? You’re so out of touch with the real world and hardships people face trying to make a living for themselves and their family.
      • janderson215 6 hours ago
        I’m curious of other perspectives and conclusions on this.

        Why do you think Facebook is the responsible party and not the purveyors of the content that caused them PTSD? From my perspective, Facebook hired people to prevent this content from reaching a wider audience. Thanks for any insight you can provide.

        • prng2021 3 hours ago
          I never said Facebook is the responsible party. I’m saying these workers deserve our sympathy and I’m saying it’s not a case of people who had a simple choice but willingly chose a job that caused them PTSD.

          I don’t think Facebook is blameless though. They practically brag about their $40B of AI spend per year and absolutely brag about how advanced their AI is. You can’t focus some of your R&D on flagging content that’s instantly recognizable as disgusting, like pedophilia, necrophilia, and bestiality? There’s already a ton of pre-labeled data they can use from all these workers. No, they don’t get a pass on that. I think it’s shameful that they focus all their AI compute and engineering on improving targeted ads and don’t put a major focus on solving this specific problem that’s directly hurting so many people.

        • HarryHirsch 4 hours ago
          Maybe the solution is that Facebook shouldn't exist. It solves both the problem of distribution and the problem of moderation.
          • janderson215 4 hours ago
            While that would solve the problem within Facebook, I think you're kidding yourself if you think that's going to stop the demand or supply of horrible content.
          • blackeyeblitzar 4 hours ago
            If others want to moderate why should these complainers get in the way? They are free to not take the job, which obviously involves looking at repulsive content so others don’t have to. Most people don't have a problem with social media existing or moderators having the job of a moderator.
    • AriedK 9 hours ago
      At first glance you may have a point. Thing is, they’re often recruited with very promising job titles and descriptions, and trained on mild cases. Once they fully realize what they got themselves into, the damage has been done. If they’re unlucky, quitting also means losing their house. This may help empathize a bit with their side of this argument.
    • gklitz 9 hours ago
      If you pay someone to deliver post, and they get their leg blown off because you ordered them to go through a minefield, you can’t just avoid responsibility by going “that’s what they signed up for”. Obviously the responsibility for ensuring that the job can be carried out safely lies with the employer, and workers are well within reason to demand compensation if the employer hasn’t ensured the job can be safely carried out.
      • eesmith 8 hours ago
        I think a better example is mining, where miners received no safety equipment, and the mines were not built with safety foremost.

        The idea was, if you didn't like it, leave. If you wanted safety equipment, buy it yourself. Or leave. Can't work due to black lung disease partially from poor ventilation the company was responsible for? You're fired; should have left years ago.

        There are still people who believe the contract is all that counts, nothing else matters, and if you don't like it, leave.

    • throw_m239339 8 hours ago
      > It’s the job they signed up for. I don’t understand the complaint. If they don’t want to do the part of the job that is obviously core to it, they should move on. The mass diagnosis just seems like a tactic to generate “evidence”. And the mention of pay compared to other countries makes this look like a bad faith lawsuit to get more compensation.

      It's also their right to sue their employer for damages if they believe it affected them in an extremely harmful way. Signing up for a job doesn't make the employer above the law.

      But some here can't fathom that workers also have rights.

    • thrance 8 hours ago
      Exploited people of the world should just pull themselves up by their bootstraps and work harder to get what they want, like you did?
      • blackeyeblitzar 7 hours ago
        They aren’t exploited. They’re paid money in return for viewing and filtering content for others. They could choose not to apply, or decline the offer and look at other jobs. The availability of this job doesn’t change the rest of their employment options. But it’s pretty clear what this job is. If it was just looking at friendly content, it wouldn’t need to exist.
        • crystal_revenge 7 hours ago
          Exploitation nearly always involves paying. Plenty of people caught up in sex trafficking still get paid, they just don't have a viable way out. Plenty of people working in sweat shops still get paid, but again not enough with enough viable alternatives to get out.
          • blackeyeblitzar 4 hours ago
            You’re still not acknowledging the key points - that it is obvious up front that the job fundamentally involves looking at content others don’t want to, and that it is a new job that can be accepted or avoided without taking away from other employment opportunities. Therefore it doesn’t match these other situations you’re drawing a comparison to.
        • jsheroes 3 hours ago
          [dead]
  • sneak 9 hours ago
    Perhaps if looking at pictures of disturbing things on the internet gives you PTSD, then this isn’t the kind of job for you?

    Not everyone can be a forensic investigator or coroner, either.

    I know lots of people who can and do look at horrible pictures on the internet and have been doing so for 20+ years with no ill effects.

    • wruza 8 hours ago
      It isn’t known in advance though. These people went to that job and got psychiatric diseases that, considering the thirdworldiness, they are unlikely to get rid of.

      I’m not talking about obvious “scream and run away” reaction here. One may think that it doesn’t affect them or people on the internet, but then it suddenly does after they binge it all day for a year.

      The fact that no less than 100% got PTSD should tell you something here.

    • luqtas 9 hours ago
      perhaps life in Kenya isn't as easy as yours?
    • eesmith 8 hours ago
      The 100+ years of research on PTSD, starting from shell shock studies in WWI shows that PTSD isn't so simple.

      Some people come out with no problems, while their trenchmate facing almost identical situations suffers for the rest of their lives.

      In this case, the claim is that "it traumatised 100% of hundreds of former moderators tested for PTSD … In any other industry, if we discovered 100% of safety workers were being diagnosed with an illness caused by their work, the people responsible would be forced to resign and face the legal consequences for mass violations of people’s rights."

      Do those people you know look at horrible pictures on the internet for 8-10 hours each day?

    • Sharlin 7 hours ago
      [flagged]
      • sneak 6 hours ago
        I didn’t make any claims about me. Read it again, more carefully, before making personal attacks.
        • doublerabbit 5 hours ago
          > Perhaps if looking at pictures of disturbing things on the internet gives you PTSD, then this isn’t the kind of job for you?

          Perhaps these are jobs people are forced to take because the price of labour isn't as high as in other countries, or because they've been trafficked and the like.

          > I know lots of people who can and do look at horrible pictures on the internet and have been doing so for 20+ years with no ill effects.

          Looking at is different from moderating. I've seen my fair share of snuff, from the first Iraqi having their head cut off in 2005 all the way down to the ogrish/liveleak, goatse, tubgirl, 2girls1cup shock sites.

          But when you are faced with gruesome imagery day in, day out, on 12-hour shifts if not longer, non-stop, while being paid very little, it would take a toll on anyone.

          I've done it: lone-wolf sysop for an adult dating website for two years, and the stuff I saw was moderate but still made me feel mentally disturbed. The normality wears off very quickly.

          Could you work a five days week looking at extreme obscenity imagery for $2 an hour?

          • blackeyeblitzar 4 hours ago
            The alternative is they have no job. And it is clear what this job entails, so complaining about the main part of the job afterwards, as this small group of moderators is doing, seems disingenuous.
      • blackeyeblitzar 4 hours ago
        Leave these empty personal attacks off HN, please. Respond substantively.
  • bdangubic 9 hours ago
    I wish they'd get a trillion dollars, but I am sure they signed their lives away via waivers and whatnot when they got the job :(
    • zuminator 8 hours ago
      Maybe so, but in places with good civil and human rights, you can't sign them away via contract, they're inalienable. If Kenya doesn't offer these protections, and the allegations are correct, then Facebook deserves to be punished regardless for profiting off inhumane working conditions.
  • neilv 9 hours ago
    If I was a tech billionaire, and there was so much uploading of stuff so bad, that it was giving my employee/contractors PTSD, I think I'd find a way to stop the perpetrators.

    (I'm not saying that I'd assemble a high-speed yacht full of commandos, who travel around the world, righting wrongs when no one else can. Though that would be more compelling content than most streaming video episodes right now. So you could offset the operational costs a bit.)

    • DiggyJohnson 9 hours ago
      How else would you stop the perpetrators?
      • abdullahkhalids 9 hours ago
        Large scale and super sick perpetrators exist (as compared to small scale ones who do mildly sick stuff) because Facebook is a global network and there is a benefit to operating on such a large platform. The sicker you are, while getting away with it, the more reward you get.

        Switch to federated social systems like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large. Easy for the moderators to shut stuff down very quickly.

        • shadowgovt 9 hours ago
          Tricky. It also gives perpetrators a lot more places to hide. I think the jury is out on whether a few centralized networks or a fediverse makes it harder for attackers to reach potential targets (or customers).
          • abdullahkhalids 8 hours ago
            The purpose of Facebook moderators (besides legal compliance) is to protect normal people from the "sick" people. In a federated network, of course, such people will create their own instances and hide there. But then no one is harmed by them, because all such instances will be banned quite quickly, the same as all spam email hosts are blocked very quickly by everyone else.

            From a normal person perspective on not seeing bad stuff, the design of a federated network is inherently better than a global network.

            • shadowgovt 8 hours ago
              That's the theory. I'm not sure yet that it works in practice; I've seen a lot of people on Mastodon complaining about how, as a moderator, keeping up with the bad services is a perpetual game of whack-a-mole because access is on by default. Maybe this is a Mastodon-specific issue.
              • abdullahkhalids 2 hours ago
                That's because Mastodon and other federated social networks haven't taken off, and so not enough development has gone into them. If they take off, naturally people will develop analogs of spam lists and SpamAssassin etc. for such systems, which will cut down moderation time significantly. I run an org email server, and don't do much of anything besides installing such automated tools.

                On Mastodon, admins will just have to do the additional work to make sure new accounts are not posting weird stuff.
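
                As a concrete, entirely hypothetical illustration of the kind of automation I mean: the fediverse analog of an email DNSBL would just be a shared, periodically refreshed list of instance domains that your server refuses to accept posts from. This is not a real Mastodon API, just the general shape of it:

                  # Hypothetical sketch: filter incoming federated posts against a
                  # shared instance blocklist, analogous to a DNSBL feed for email.
                  import urllib.request

                  BLOCKLIST_URL = "https://example.org/shared-blocklist.txt"  # made-up feed URL

                  def load_blocklist(url=BLOCKLIST_URL):
                      # Fetch a newline-separated list of instance domains to defederate from.
                      with urllib.request.urlopen(url) as resp:
                          lines = resp.read().decode("utf-8").splitlines()
                      return {l.strip().lower() for l in lines
                              if l.strip() and not l.startswith("#")}

                  def should_reject(author_handle, blocklist):
                      # True if the author's home instance is on the shared blocklist,
                      # e.g. "alice@bad.example" -> "bad.example".
                      domain = author_handle.rsplit("@", 1)[-1].lower()
                      return domain in blocklist

                Admins would still review edge cases by hand; the point is only that, as with email spam, most of the volume could be handled by shared lists and filters rather than by a human looking at every post.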

          • mu53 8 hours ago
            Big tech vastly underspends in this area. You can find a stream of articles from the last 10 years where big tech companies were allowing open child prostitution, paid-for violence, and other stuff on their platforms with little to no moderation.
    • thrance 8 hours ago
      If you were a tech billionaire you'd be a sociopath like the others and wouldn't give a single f about this. You'd be going on podcasts to tell the world that markets will fix everything if given the chance.
    • llm_trw 9 hours ago
      [flagged]
      • almog 9 hours ago
        "More than 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism."

        What part here are you suggesting is similar to seeing two men kissing?

        • llm_trw 9 hours ago
          To a 1980s Christian mom? All of them.
          • almog 9 hours ago
            But we're not talking about an '80s Christian mom, since you proceeded to make the observation that "People are outraged by whatever they are told to be outraged over and ignore everything that they aren't".

            Which is to say, being exposed to extremely violent/abusive content could only cause PTSD if one is subject to a specific social construct that defines these acts in a certain way. Let's just assume you're right: given your previous observation, what kind of employees would that imply are immune to getting PTSD from such content?

            • llm_trw 8 hours ago
              The answer is in the question: Whoever has been raised in a culture that doesn't make a big deal out of whatever the given content is.
              • almog 8 hours ago
                And what culture is that?
          • JTyQZSnP3cQGa8B 9 hours ago
            You're lying and you know it. I remember the '80s as much as anyone else, especially the part where Elton John and Freddie Mercury were at the peak of their popularity, unless you were living in a religious shithole, but that was (and still is) a small part of the world.
            • llm_trw 8 hours ago
              In 1980, 75% of adults thought that homosexuality was always wrong and 20% that it was never wrong or only sometimes wrong.

              https://lgbpsychology.org/html/gss4.html

              https://lgbpsychology.org/html/prej_prev.html

              Your feelings about the period mean nothing.

              • wkat4242 54 minutes ago
                It depends seriously on the country; the Netherlands was way ahead there. In many ways it was further ahead than it is now, because it has become so conservative lately.
              • noduerme 6 hours ago
                Seeing something you think is culturally wrong is not necessarily traumatizing, is it? And surely there are degrees of "wrongness", ranging from the merely uncomfortable to the truly gross to the utterly horrifying. Even within the horrifying category, one can differentiate between things like a finger being chopped off and e.g. disembowelment. It's reasonable to expect a person would be more traumatized the further up the ladder of horror they're forced to look.
                • llm_trw 5 hours ago
                  This would be believable if not for the fact that hanging, gutting, and quartering was considered good, wholesome family entertainment to watch while getting fast food in the market, not three centuries ago, literally everywhere.
                  • noduerme 3 hours ago
                    How did that go away, if people only do what they're brought up to do?
                    • llm_trw 1 hour ago
                      We found out that leaving people homeless to die slowly of exposure was much more effective at keeping the majority in line.
            • ipaddr 8 hours ago
              In what '80s fantasy were you living where gay people were open? The rumor was that John was bisexual, and Freddie getting AIDS in the late '80s was a huge deal. Queen's peak was around 1992 with Wayne's World. No men kissed on stage or in movies, and neither did women.
          • FireBeyond 4 hours ago
            It might have induced 'disgust', but no, two men kissing didn't give 1980s Christian moms actual PTSD.
      • Yiin 9 hours ago
        You have no idea what content is being discussed here if you even think about bringing identity politics into this topic.
      • UniverseHacker 8 hours ago
        Equating outrage to PTSD is absolute nonsense. As someone who lives with a PTSD sufferer, I can tell you it is an extremely severe and debilitating condition that has nothing to do with “outrage” and can’t be caused by seeing people kiss.
        • llm_trw 8 hours ago
          PTSD is what happens when you see someone standing next to you reduced to a chunky red salsa in a split second.

          The idea that seeing images of that can match the real thing is one that only people who haven't smelled the results could hold.

          • UniverseHacker 8 hours ago
            You’re very wrong; it can be caused by different things in different people. Since the causes are emotional, it requires severe emotional trauma, but that trauma does not have to come from one specific category of event; a lot of different types of trauma and abuse can cause it.

            It’s hard to imagine a more disgusting thought process than someone trying to gatekeep others’ suffering the way you are doing here.

            • fsckboy 7 hours ago
              Actually, not everybody gets PTSD in, for example, a combat situation, and Gabor Maté says that the people who do develop PTSD are the people who have already suffered trauma as children; in a sense, childhood trauma is a preexisting condition.
              • UniverseHacker 6 hours ago
                A lot of PTSD is also not from combat at all; childhood emotional trauma alone can cause it. This is recognized now, but it took a while, because the condition was initially identified in war veterans and other groups were categorically excluded; eventually it was discovered that war wasn’t unique in causing it.

                However, I would point out that Maté’s views are controversial and don’t fully agree with other research on trauma and PTSD. He unrealistically associates essentially all mental illness and neurodivergence with childhood trauma, even in cases where good evidence contradicts that view. He claims ADHD is caused by childhood emotional trauma, although that has been shown not to be the case, so I don’t put much stock in his scientific reasoning; he has his hammer and sees everything as a nail.

            • llm_trw 7 hours ago
              You were literally gatekeeping PTSD from Christian moms not one post ago.
              • UniverseHacker 7 hours ago
                The HN ethos is to assume good faith, but my imagination is failing me here as to how you might be sincere and not trolling; can you please share more info to help me out?

                What makes you think people have experienced clinically diagnosed or diagnosable PTSD from seeing someone kiss? Has anyone actually claimed that?

                You used the word outrage, and again, outrage is not trauma; it describes an outer reaction, not an inner experience. They’re neither mutually exclusive nor synonymous.

                Your assertion seems to be that only being physically present at a horrific event can be emotionally traumatic: that spending years sitting in a room watching media of children being brutally murdered and raped, day in and day out, cannot possibly be traumatic, but watching a kiss between people you politically think should be banned from kissing can be genuinely traumatic?

      • shadowgovt 9 hours ago
        In this context, this is dangerously close to asserting "people are only outraged about CSAM because they're told to be." I don't think that's what you mean.
        • llm_trw 8 hours ago
          It is exactly what I mean.

          If you don't nurture that outrage every day, then you'd be rather surprised what can happen to a culture in a single generation.

          • noduerme 6 hours ago
            I think your logic is backwards. The main reason for a culture to ban pedophilia is that it causes trauma in children. For thousands of years, cultures have progressed towards protecting children. This came from a natural sense of outrage in a majority of people, which became part of the culture. Not vice versa. In many of your comments, you seem to assume that people are only automatons who think and do exactly what their culture teaches them, but that's not the truth. The culture is made up of individuals, and individual conscience frequently - thankfully - overrides cultural diktat. Otherwise no dictatorship would ever fall, no group of people would ever be freed, and no wicked practices would ever be stamped out. It has always been individual people acting against the culture whose outrage has caused the culture to change. Which strongly implies that people's sense of outrage is at least partly intrinsic to human nature, totally apart from the cultural practices of the time.
            • llm_trw 5 hours ago
              I'm now old enough to have seen people who treated homosexuals in the 1980s the same way we treat pedophiles today start waving rainbow flags and calling the people they beat up for being gay in high school Nazis.

              There may be a few people with principles who stick to them.

              The majority will happily shove whoever they are told to into a gas chamber.

              • noduerme 3 hours ago
                I'm not saying there aren't a lot of people who are natural conformists, who do whatever they're told to, and hate or love whatever the prevailing culture hates or loves. They may be a majority. And yes, a prevailing culture can take even the revulsion of murder out of people to some extent (although check out the state sanctioned degree of alcohol and drug use among SS officers and you'll see it's not quite so easy to make people do acts of murder and torture every day).

                What I am saying is that the conformists don't drive the culture, they're just a blunt weapon of whoever is driving the culture. That weapon can be turned toward gay rights or toward burning people at the stake, but what changes a culture are the individuals with either a conscience or the individuals with wicked plans. Both of which exist outside the mainstream in any time and place.

                Maybe another way of saying this is that I think most people are capable of murder and most people are capable of empathy (and therefore trauma) with someone being tortured, but primarily they're concerned with being a good guy. What throws the arc of history towards a higher morality is that maybe >0% of people naturally need to perceive themselves as "good" by defending life and the dignity and humanity of other people, to the extent that needing to be a good person overrides their cultural programming. And those are not the only people who change a culture, but 51% of the time they change it for the better instead of worse.

                That's just my view on it.

              • shadowgovt 5 hours ago
                Wait, why are they calling gay people Nazis? This story is very unclear. And I can't see how it relates to CSAM and the moderators who have to see it, which is a categorically different issue to homosexuality, so different as to be completely unconflatable.
          • shadowgovt 5 hours ago
            I'm trying to interpret this post in the best light and, regrettably, I'm failing.

            Can you clarify what you think the change to society will be if we expose more people online to CSAM and normalize it?