12 comments

  • falloutx 1 hour ago
    Why is it that whenever there is news about AI, it's either a new scam or something vile? Like all this harm being done to the environment, people's sanity and lives, just so companies can pay less to their employees. Great work.
    • ofalkaed 3 minutes ago
      That is news in general, nothing special about AI.
    • seanmcdirmid 10 minutes ago
      The early days of the internet were mostly about how they enabled porn, spam, and scams...just so people could order things online.

      We are now talking about AI in how it enables porn, spam, and scams....

    • polishdude20 6 minutes ago
      Don't discount the fact that bad news sells.
    • Almondsetat 25 minutes ago
      Sounds like confirmation bias you are not interested in challenging
    • vlan0 19 minutes ago
      Can be said about so many things in life. It's almost like we don't learn and just repeat in loops.
    • nonethewiser 18 minutes ago
      Are you suggesting people shouldn't develop AI because it basically just produces unemployment and scams? Like, that they should just be good people and stop, or that the government should ban the development of AI?

      I mean, you are clearly equating AI with unemployment and scams, which I think is a very incomplete picture. What do you think should be done in light of that?

      • erikerikson 11 minutes ago
        "Guns don't kill people, I do."

        Blaming the technology for bad human behavior seems an error and it's not clear that the GP made it.

        People could and likely will also increase economic activity, flexibility, and evolve how we participate in the world. The alternative would get pretty ugly pretty quick. My pitchfork is sharp and the powers that be prefer it continues being used on straw.

        • croes 9 minutes ago
          People without guns kill less.
          • erikerikson 7 minutes ago
            The statistics are that our car use, pollution, and many other problems kill far more people.
      • blibble 8 minutes ago
        > What do you think should be done in light of that?

        you suggested it:

        > government should ban the development of AI?

        works for me!

      • falloutx 11 minutes ago
        >I mean, you are clearly equating AI with unemployment and scams, which I think is a very incomplete picture.

        What else? Let me guess: slop in software, AI psychosis, environmental concerns, growing wealth inequality. And yes, maybe we can write some crappy software faster. That should cover it.

        I have no suggestions on how to solve it. The only way is to watch OpenAI/Claude lose more money and then hopefully models become cheaper or completely useless.

        • nonethewiser 2 minutes ago
          >What else, let me guess, slop in software

          Are you a developer? If so, does this mean you have not been able to employ AI to increase the speed or quality of your work?

      • croes 8 minutes ago
        If the harm outweighs the benefits stopping should be an option, don’t you think?
        • nonethewiser 4 minutes ago
          I don't think AI just brings scams and unemployment.
    • surgical_fire 15 minutes ago
      Well, it's one thing AI actually revolutionized.
    • api 37 minutes ago
      Good news doesn't get clicks. Usually doesn't even get reported.
      • seanmcdirmid 9 minutes ago
        News is whatever people would care about reading.
      • venndeezl 23 minutes ago
        Media in the US is obsessed with fear mongering:

        https://flowingdata.com/2025/10/08/mortality-in-the-news-vs-...

        If they reported on heart disease, people might get healthy. But there's an instinctual understanding that people dying all over just improves journalists' odds in our society. Keep them anxious with crime stats!

        Such an unserious joke of a society.

        • api 17 minutes ago
          This is purely economic. Fear mongering gets clicks, which boosts ad revenues.

          I've read statistics to the effect that bad news (fear or rage bait) often gets as much as 10,000x the engagement of good news.

        • expedition32 18 minutes ago
          So do you dispute that this is happening? And it's all over my country too.

          Expecting tech bros to take responsibility for what they have unleashed is asking too much I suppose.

      • SecretDreams 26 minutes ago
        Also, saving me a bit of time in coding is objectively not a good trade if the same tool very easily emboldens pedophiles and other fringe groups.
    • knicholes 58 minutes ago
      Because news about scams or something vile using AI gets you to click and read.
      • falloutx 53 minutes ago
        Almost all of the good news, once you read a little more, is due to traditional ML, and it's all in the medical imaging field. Then OpenAI tries to take credit and say, "Oh look, AI is doing that too," which is not true. Go ahead and read deeper into any of those stories and you will quickly find LLMs haven't done much good.
        • knicholes 50 minutes ago
          They helped me make some damn good brownies and be a better parent in the last month. Maybe I should write a blog for all of the great things LLMs are doing for me.

          Oh yeah, and one rewrote the 7-minute-workout app for me without the porn ads before and after the workout so I can enjoy working out with one of my kids.

          • falloutx 44 minutes ago
            What makes you think you couldn't have made brownies without LLMs? Go to Google and just scroll 20cm and there it is: a recipe, the same one ChatGPT gave you. I won't comment on rewriting an app, because LLMs can definitely do that.
            • knicholes 26 minutes ago
              Because, "Why are the edges burnt and the middle is too soft? How are these supposed to actually look? I used a clear 8"x8" pan, and I'm in Utah, which is at 4,600 ft elevation"

              Oh, it's a higher elevation, I need to change the recipe and lower the temperature. Oh, after it looked at the picture, the top is supposed to be crackly and shiny. Now I know what to look for. It's okay if it's a little soft while still in the oven because it'll firm up after taking them out? Great!

              Another one: "Uh oh, I don't have Dutch-processed cocoa powder. Can I still use the normal stuff for this recipe?" Yeah, Google can answer that, but so can an LLM.

              • falloutx 1 minute ago
                You make it sound like brownie making is a scientific endeavour. I wouldn't have thought it's hard, but I guess I haven't made brownies in all conditions.
            • xmprt 24 minutes ago
              What makes you think you couldn't have made brownies without Google? Just go to your local library and grab the first baking cookbook you can find. And there it is: a better recipe than Google's, without all the SEO blog spam.

              To avoid my comment just being snarky, I agree that there's a difference between comparing Google to LLMs, and the library to Google... but still I hope you can acknowledge that LLMs can do a lot more than Google such as answering questions about recipe alterations or baking theory which a simple recipe website can't/won't.

            • pousada 18 minutes ago
              FWIW, modern recipe sites are awful - you have to scroll for literal minutes until you get to the recipe. LLMs give you the answer you want in seconds.

              I’m certainly no LLM enthusiast, but pretending they are useless won’t make the issues with them go away.

    • EGreg 37 minutes ago
      Because it has a lot of potential for abuse.

      BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized with a version of the comment:

      “This has always been the case. AI doesn’t do anything new. This is a nothingburger, move on.”

      You can probably see multiple versions in this thread or the sibling post just next to it on HN front page: https://news.ycombinator.com/item?id=46603535

      It comes up so often as to be systematic. Both downvoting Web3 and upvoting AI. Almost like there is brigading, or even automation.

      Why?

      I kept saying for years that AI has far larger downsides than Web3, because in Web3 you can only lose what you voluntarily put in, but AI can cause many, many, many people to lose their jobs, their reputations, etc., and even lives if weaponized. Web3 and blockchain can… enforce integrity?

      • falloutx 29 minutes ago
        At this point I think HN is flooded with wannabe founders who think this is "their" gold rush and any pushback against AI is against them personally, against their enterprise, against their code. This is exactly what happens on every vibe coding thread, every AI adjacent thread.
      • blibble 32 minutes ago
      • ronsor 11 minutes ago
        There are plenty of posts critical of AI on HN that reach the front page, and even more threads filled with AI criticism whether on-topic or not.

        What you're noticing is a form of selection bias:

        https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

      • ceejayoz 14 minutes ago
        > BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized…

        It took a few years for that to happen.

        Plenty of folks here were all-in on NFTs.

    • add-sub-mul-div 56 minutes ago
      The era of new technologies being used to work for us rather than net against us is something we took for granted and it's in the past. Those who'd scam or enshittify have the most power now. This new era of AI isn't unique in that, but it's a powerful force multiplier, and more for the predatory than the good.
      • blibble 7 minutes ago
        there's literally a NFT/crypto scammer occupying the oval office

        can't wait until he figures out AI

      • cindyllm 53 minutes ago
        [dead]
    • alfalfasprout 56 minutes ago
      What's worse is a significant number of folks here seem to be celebrating it. Or trivializing what makes us human. Or celebrating the death of human creativity.

      What is it, do you think, that has attracted so many misanthropes into tech over the last decade?

  • ahmetomer 43 minutes ago
    The classic "bad thing always existed but AI made it worse" case.
    • tunesmith 34 minutes ago
      cultural problem too... like even before AI, in recent years there's been more of a societal push that it's fair game to just lie to people. Not that it didn't always happen, but it's more shameless now. Like... I don't know, just to pick one: actors pretending to be romantically involved for PR of their upcoming movie. That seems way more common than I remember in the past.
      • Loughla 22 minutes ago
        While I agree with you, your example is not a great one. There are examples of fake relationships between stars dating back to the start of talkies.

        But I do agree. It is more socially acceptable to just lie, as long as you're trying to make money or win an argument or something. It's out of hand.

        • vladms 5 minutes ago
          Do you have any data to back that "it is more socially acceptable to lie"? I looked a bit and could not find anything either way.

          The impression can be a bias of growing up. Adults generally teach and insist that children tell the truth. As one grows up, one is less constrained and can tell many "white lies" (low-impact lies).

          Some people (well-known people, influencers, etc.) do have more impact than before because of network effects.

    • nonethewiser 18 minutes ago
      I'm honestly shocked at the reaction to this. I'm well aware of the culture we live in. Isn't everyone else?
    • SecretDreams 22 minutes ago
      I think it's the speed at which it can do harm. Whatever efficiency gains we get from AI for good causes will also be seized by nefarious ones. Tools need safety mechanisms to ensure they aren't symmetrically empowering good and bad actors. If we can't sufficiently minimize the latter, the benefits to the former group may not be worth it.
  • mandevil 1 hour ago
    I love the Nicolás Maduro image in particular. That feels like it's a parody of this whole genre of ad?
  • Teever 5 minutes ago
    This is a technique that will absolutely be used by those reputation management companies.

    I predict that within three years we'll be discussing a story about how a celebrity hired a company to produce pictures of them doing intimate things with people, to head off the imminent release of sexual assault allegations.

  • TacticalCoder 24 minutes ago
    When you see what z-image turbo with some added LoRA does in mere seconds on a 4090 locally, you know it's a lost fight. And that's not even the best model: just a very good one that everybody can run.

    Not only is the cat out of the bag, but this is just the beginning. For example, porn videos where people can swap the actress for their favorite celebrity in real time are imminent.

    There's no fighting this.

  • Mordisquitos 45 minutes ago
    There's something about the terminology used in this article that feels off to me.

    First of all, I'm not sure it makes sense to refer to these AI-generated characters as AI 'influencers'. Did these characters actually have followers prior to these fake videos being generated in December 2025? Do they even have followers now? I don't know, maybe they did or do, but I get the impression that they are just representing influencer-ish characteristics as part of the scheme. Don't get me wrong, the last thing I want is to gatekeep such an asinine term as 'influencer'. However, just like I would not be an influencer just by posting a video acting like one, neither do AI characters get a free pass at becoming one.

    Second, there's the way the article is subjectifying the AI-generated characters. I can forgive the headline for impact, but by consistently using 'AI influencers' throughout the article as the subject of these actions, it is not only contributing to the general confusion as to what characters in AI-generated videos actually are, but also subtly removing the very real human beings who are behind this from the equation. See for instance these two sentences from the article, UPPERCASE mine:

    1- 'One AI influencer even SHARED an image of HER in bed with Venezuela’s president Nicolás Maduro'

    2- 'Sometimes, these AI influencers STEAL directly from real adult content creators by faceswapping THEMSELVES into their existing videos.'

    No, there is no her sharing an image of herself in bed with anyone. No, there is no them stealing and faceswapping themselves onto videos of real people. The 'AI influencers' are not real. They are pure fictions, as fictional as the fictional Nicolás Maduro, Mike Tyson and Dwayne Johnson representations that appear in the videos. The sharing and the faceswapping are being done by real, dishonest individuals and organisations out there in the real world.

  • dreadsword 1 hour ago
    All the more reason to steer clear of big-brand social media, and protect spaces like this.
    • throwaway198846 1 hour ago
      This space is not protected, anyone can sign up.
      • tartuffe78 24 minutes ago
        They used to shut down sign up when Reddit was down :)
      • dreadsword 57 minutes ago
        Moderation that keeps it focused. Text only.
        • SoftTalker 32 minutes ago
          Text only, no ads, and aggressive downmodding of self-promotion.

          Edit: On the other hand, here we are looking at it and talking about it. Some number of us followed links in that article. Some number of them followed those to an OnlyFans page.

          • giancarlostoro 18 minutes ago
            How long until OnlyFans just says, screw it, we make AI content too.
          • zxcvasd 4 minutes ago
            [dead]
      • nicce 57 minutes ago
        Being less known and niche is one kind of protection. Big carrots are missing for most.
        • add-sub-mul-div 54 minutes ago
          Another kind of protection is Reddit and Twitter remaining alive as quarantines. Rather than if they collapsed and the newer better places absorbed the refugees.
    • contagiousflow 1 hour ago
      What's protecting smaller online spaces from AI?
      • jsheard 1 hour ago
        Nothing is bulletproof, but more hands-on moderation tends to be better at making pragmatic judgement calls when someone is being disruptive without breaking the letter of the law, or breaks the rules in ways that take non-trivial effort to prove. That approach can only scale so far though.
      • plastic-enjoyer 1 hour ago
        Essentially, gatekeeping. Places that are hard to access without the knowledge or special software, places that are invite-only, places that need special hardware...
        • ninthcat 1 hour ago
          Another important factor is whether the place is monetizable. Places where you can't make money are less likely to be infested with AI.
          • deathsentience 1 hour ago
            Or a place that can influence a captive audience. Bots have been known to play a part in convincing people of one thing over another via the comments section. No direct money to be made there but shifting opinions can lead to sales, eventually. Or prevent sales for your competitors.
        • Analemma_ 1 hour ago
          Or places with a terminally uncool reputation. I'm still on Tumblr, and it's actually quite nice these days, mostly because "everyone knows" that Tumblr is passé, so all the clout-chasers, spammers and angry political discoursers abandoned it. It's nice living under the radar.
      • notpachet 1 hour ago
        Not enough financial upside for it to be worth the trouble.
      • MrLeap 1 hour ago
        The fact it's text only means we only get AI text and not images, I suppose. lmao.
      • cush 1 hour ago
        Economics. Slop will only live where there's enough eyeballs and ad revenue to earn a profit from it
    • expedition32 9 minutes ago
      It would be hilarious if AI enshittified the internet so much that people gave up on it.
    • isjdiwjdiwjd 1 hour ago
      [dead]
  • cynicalsecurity 52 minutes ago
    Maybe it's time to stop going hysterical over other people's sex life? Then it won't be a "hot" topic to exploit any more.
    • nonethewiser 10 minutes ago
      I think it's more a symptom of a culture with bad values. The safeguard against this behavior is people having shame.
    • SoftTalker 28 minutes ago
      I think most people know that these aren't real. They are just for laughs or titillation and a way to get attention/followers and (ultimately) payers. Celebrity impersonations in advertising are not at all new.
  • _blk 1 hour ago
    Not excusable in any way or form, but an explanation clearly lies in the demand for trash news and the cult of Hollywood celebrity.
  • bofadeez 55 minutes ago
    "Defaming celebrities" that's a big concern lol. Let's also make sure billionaires securely obtain maximum luxury next.
  • ilhanomar 55 minutes ago
    [flagged]
  • cadamsdotcom 45 minutes ago
    The headline has no quantities, so if it happened twice the headline would still be valid.

    This is rage bait and we are above it. Flagged.

    • gavmor 30 minutes ago
      You may be surprised to learn that headlines actually lead to an even larger body of text called an "article" which contains, among other things, references to the scale of the issue named in the headline.