AI is pretty much killing social media in the long term. Even pre-AI, a good chunk of posts/comment sections on sites were bots/paid. Reddit is becoming less believable than ChatGPT. I guess there's still the Onion-verse.
As a black box, a 4k-context LLM (text in, text out) is no different from a highly effective search engine indexing all possible 4k-token bodies of text (also text in, text out).
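Purely as an illustration of that black-box framing (a sketch, not anyone's real API; both function bodies below are hypothetical stand-ins): from the caller's side, either box is just a string-to-string function.

```python
# Illustrative only: both "boxes" expose the same interface, which is the point
# being made. The function bodies are hypothetical stand-ins, not real APIs.
from typing import Callable

TextBox = Callable[[str], str]  # text in, text out

def llm_complete(prompt: str) -> str:
    # Stand-in for a call to an LLM with a 4k-token context window.
    return "<model completion for: " + prompt[:4000] + ">"

def giant_index_lookup(query: str) -> str:
    # Stand-in for a (physically impossible) index over every 4k-token text.
    return "<best-matching 4k-token document for: " + query[:4000] + ">"

# From the outside, a caller cannot tell which black box it is holding.
boxes: list[TextBox] = [llm_complete, giant_index_lookup]
for box in boxes:
    print(box("why did my sourdough not rise?"))
```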
I'm not convinced you can have an impromptu global conversation to any positive end. Humans are not well suited to this task and an unsupervised mostly anonymous forum plays to those weaknesses and provides no support to generate positive outcomes.
It was never a particularly good idea at the scale it's currently deployed at.
The demographics of who was online before the internet went mainstream matter a lot, here. It wasn't exactly a representative slice of the general population.
Forums were still going strong a decade after the Internet went mainstream. They only started to fade after smartphones took off and many forums took years to introduce mobile themes. For sports teams, however, forums never faded; there are tens of millions of users on team-specific soccer forums, for example.
That's a good point. I think a lot of forums were less vulnerable for a number of reasons. They typically don't have a large audience (not all, but most), which makes them less of a target. They're also organized around niche interests that don't intersect much with politics and cultural issues, off-topic forums aside. And they're probably more heavily moderated than social media and blog comments.
I think the general point stands when considering large-scale platforms.
Usenet was US-centric but somewhat global and certainly not local. Even dialup BBS's were sometimes nationwide despite long distance phone charges. I wasn't into the BBS thing though.
Were they global or local? I made that distinction intentionally.
Either or both, depending on the SYSOP's resources. I ran a BBS that did store-and-forward between the U.S. and Europe.
The ones with global connections could take a day to a week to forward messages, but that turned out to be a feature. We went outside in the real world instead of staying online arguing with strangers.
Absolutely not. From almost day 1 Reddit has been plagued with jokey meme-speak, which is partially why specialist forums are still thriving (audio/video stuff, XDA-developers, European soccer teams, SomethingAwful boards and up until a few years ago, Notebook-Review).
Reddit has been an absolute dumpster fire from the get-go. Its Wikipedia page has one of the largest “controversies” sections of any publicly listed company. Many of the controversies are so significant they have their own Wikipedia page.
Not wanting to particularly defend Reddit but a controversies section on a wikipedia page is hardly a good metric, in my opinion. Wikipedia is often used to malign various entities (and protect others).
I have the opposite experience. While these anonymous groups tend to be high on vulgarity and directness, there are far more peaceful examples than the other way around. Propaganda got stronger once content on social media started being restricted, whether by the companies themselves or by external actors.
This human nature shit is empirically wrong. There are quite a few scammers around. You also meet these people in real life, you just don't notice immediately.
Interestingly, you can still use `author:username` to search for posts. For my part, if something seems suspicious and the profile is private then I assume it's a bot.
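A minimal sketch of that trick, assuming Reddit's web search still honours the `author:` operator; the username and helper name below are placeholders I made up.

```python
# Sketch only: builds a search URL using the `author:` operator mentioned above.
# Assumes Reddit's web search still accepts that operator; the username is a placeholder.
from urllib.parse import urlencode

def reddit_author_search_url(username: str, keywords: str = "") -> str:
    """Return a search URL listing posts by a given account, optionally filtered by keywords."""
    query = f"author:{username} {keywords}".strip()
    return "https://www.reddit.com/search/?" + urlencode({"q": query})

print(reddit_author_search_url("some_suspicious_account"))
# https://www.reddit.com/search/?q=author%3Asome_suspicious_account
```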
Yeah, I saw some posts on there the other day that felt a bit suspect, went to look at their profile, and nothing. I'd already become an infrequent user of reddit since some earlier changes, but that makes me even less likely to go back.
I thought the same. I'm still 90% of the same mindset. But it does worry me how much people like slop. Is it because it's novel? And will people get tired of it?
> Reddit is becoming less believable than ChatGPT.
Hard disagree, and I’ll cite a simple example: Reddit isn’t one community. It’s a hub and spoke model. There are many good communities with curators and SMEs.
My canonical example that's counter to this is HN. No offense to anyone, but Reddit doesn't have a hive mind - communities do. And the HN hive mind is wrong more often than right and has been targeted by all sorts of astroturfers along the way. I personally take very few comments on here seriously, no takes seriously, and mostly show up to read comments by some actual hard-cred people (e.g. animats). Everyone else might as well be a shill bot. AI doesn't change this. I still get the cream of the crop from Reddit.
Having said that, social media isn’t dead. It’ll transform. Two things are eternal: 1) women’s need for attention, 2) men’s need to get laid.
I mean, yes and no. The default Reddit experience is absolutely overrun by fake content. Or, there are tens of thousands of real people who have nothing else to do in their life but to go to /r/news or other "front page" subreddits and post the same political talking points multiple times a day, whether the story warrants it or not. Frankly, the AI / paid-shill explanation is greatly preferable in my book.
The non-default experience is a mixed bag. Specialized communities are usually moderated pretty strictly, including rules against outgoing links, product reviews, etc. That said, you definitely see product placement disguised as questions / off-the-cuff recommendations, where some previously-unheard-of Chinese brand is all of a sudden mentioned every day.
HN has its problems, mostly in the form of people pretending to be experts and saying unhinged nonsense, but it's far less commercialized. If you want your brand to be on the front page, you sort of need to make an effort to write at least a mildly interesting blog post. Now, AI is changing that dynamic a bit because we now get daily front-page stories that are AI-generated... but it's happening more slowly than elsewhere.
So this is the reality we're living in now, where bot farms have become normalized? I always associated bot farms with authoritarian regimes like Russia and China, but are we becoming the same thing? And VC funds are actually backing this? I hope I'm not the only one who finds this completely insane. I can't even listen to the a16z podcast anymore; my mind now permanently associates them with bot farms. This is the kind of news that makes me wonder whether people ever think about moral values and ethics.
"Thou shalt not make a machine in the likeness of a human mind."
Seems like the Butlerian Jihad is arriving ahead of schedule, and the real horrors demanding the uprising aren't oppression and violence, but viral marketing and sockpuppetry.
Yeah, but also the violence already started via hacked brains. OpenAI is having to fend off multiple lawsuits because its chatbot users started taking their own lives.
I think he's old enough to be tried as an adult here. He architected the product, it was no silly accident. I think his choice of role models may be a reflection of his character...
I don’t think they are in short supply, but the vast majority of them aren’t the super-successful so we don’t see their names often.
They are the teachers, coaches, and engineers. The problem is the anti-role models are the ones who get all of the media attention:
Andrew Tate (misogynistic pyramid schemer and pimp / sex trafficker of high school girls),
Joe Rogan (his mind is so open that his brains fell out),
Jordan B Peterson (charlatan who dresses up banal self-help advice with pseudo-intellectual jargon to seem profound, drug addict who is still taking very big risks with his health, frequently argues against strawmen by misrepresenting postmodernism, Marxism, atheism, etc).
Our heuristics for who we should look up to are skewed because too many young people revere wrath and fame over ethics, morals, and values, which may hold us back from success.
Exactly. Concentration of attention onto singular figures as role models should be avoided; kind of like how we agree that it is healthier for EU citizens to have a more diverse market than concentrated monopolies.
We do have to recognize that we have societally dropped the ball by allowing media companies to brainwash the population into thinking that money and fame are unquestionable success; this has allowed the corporate mouthpieces to blow so much hot air into the bullshit they spew that turds end up floating to the top.
What is clear as day is that we live in a world where Brandolini's law is being exploited constantly: that there is a constant fight to DARVO the heck out of our perceptions is undeniable.
We need to normalize bringing receipts to back your claims...
How to teach the average person not to follow the siren's song of populism and rage baiting?? That, I have not yet figured out.
It's almost refreshing how unashamed they are. I hate it, obviously, but I kind of like it better than companies that say something dressed up in marketing speak but actually mean what this site just says outright.
Businesses literally try to track and optimise virality these days as part of their marketing.
Not just businesses. It's governments, too.
There's a public park near me that is tracked for likes and social media engagement. If it misses the city's goals for social media engagement a certain number of months in a row, it can be turned back into a parking lot.
I objected to this measure of "success" during the public meetings about it, but nobody cares about the old man in the back of the room.
It's a great reminder that while room-temperature-IQ AI pumpers like Sam Altman talk about "solving physics" or whatever, the actual value of large language models is generating spam marginally cheaper than Filipinos.
Wow, I thought this type of business was illegal, or at least a very gray area conducted on the dark web, but it looks like the VCs at this point have no morals left. Gambling? Amazing. Spam? Take my money. Ad fraud? Yes, please.
A16Z is basically funding toxic fungi growing on the face of society at this point. So much of what they do seems to be a bet that people will want to pay money to do antisocial things and avoid the consequences.
A lot of people are against the current social media tech it seems. I wouldn't be surprised if they're funding the acceleration of its collapse to see what can come next.
New generation is less social, more sober, less motivated, more doomer.
Viewbotting is a pretty big issue on all the streaming platforms. Twitch changed its technical measures recently and a bunch of big streamers' concurrent viewer counts dropped by a large amount. "Fake it until you make it" is a viable strategy with streaming. It's all about fake engagement to game the algorithm and end up in people's feeds.
Because using the CFAA as a cudgel against things you don't like, whether it's journalists exposing insecure government systems, or companies engaging in deceptive marketing practices is a bad idea? For the latter, there are already laws against it that don't involve the CFAA, e.g. https://www.ecfr.gov/current/title-16/chapter-I/subchapter-B...
>> Why isn't this company sued for computer fraud and abuse?
> Because using the CFAA as a cudgel against things you don't like, whether it's journalists exposing insecure government systems, or companies engaging in deceptive marketing practices is a bad idea?
I think you're confusing bad ethics with a bad idea. A prosecutor's job is to win, not behave ethically.
When you're a company with funding and/or a network of benefactors behind it a lot of laws stop applying. And if all else fails, I hear pardons aren't particularly expensive these days.
My god, horrific. Does not everyone know everything online is a psyop now? I will tell you, they don't. No one studies things, no one takes the time. AI, social media, it all has to be protested, boycotted.
Now that it seems war is coming from the US, that could not be more true than at this moment.
How about a few prison terms for conspiracy to defraud? And not for small fry like the "CEO" of this company either. Why not, say, 10 years for Marc Andreessen, personally? And, no, no "disrupting" it with serve-your-time-as-a-service, either.
No, we should not stop something that is inevitable. We should work with it to find ways that it fits into a productive society, such as anonymously verifying that you are a citizen so the cost of abuse is at least a felony.
Why? As long as general-purpose computation exists, this will always be possible. The cat's out of the bag. You can't regulate computation globally. You can only enforce it at the platform level.
I want to do plenty about it: I want to make the barrier to doing this shit online identity theft, so it becomes too expensive to do.
Andreessen's true colours have often flared up. I noticed it when India banned Facebook's Free Basics scheme. He had needlessly, without provocation, lashed out like a child who was denied an ice-cream cone. I will never forget his now-deleted tweet:
> Anti-colonialism has been economically catastrophic for the Indian people for decades. Why stop now?
https://www.bbc.com/news/blogs-trending-35542497
If you read "Careless People" you'll notice that Andreessen was prioritizing cash over morals for a long time, and his Facebook investment/involvement also produced highly unethical things.
> Once you have infinite money, you tend to want infinite power next
People who get to what seems like infinite money only do so because they were seeking money as a means to power, for which they have an insatiable desire in the first place. It's not that getting to (even practically) infinite money triggers the desire for unlimited power; it's that the money is a symptom of that desire.
I don't disagree, but lighting money on fire hyping NFTs and whatever other random fad strikes them as interesting doesn't seem to be the way to accomplish that.
My actual guess is that they got way too big, both in terms of headcount and fund size, to limit their investments to what is expected to be the best of the best in terms of financial return and societal impact.
This feels not very different from the recent report revealing how Nick Fuentes has a lot of artificial likes and comments on videos that push his content, due to a large following that responds to commands delivered via Telegram etc. A VC backed corporation using a large phone farm to manipulate the public is no better than Nick Fuentes.
>The organization that recently released the report alleging the contrary is the same one that released that report earlier this year claiming that if you say “Christ is King” then you’re a white supremacist.
No, Rutgers University did not publish a report that says “if you say ‘Christ is King’ then you’re a white supremacist”. You can read about it here, it’s only 20 pages and well-sourced:
https://networkcontagion.us/reports/3-13-25-thy-name-in-vain...
Even if they said that, why would it be dismissed casually? There may be good justification to associate the two. It’s clear the phrase has flooded X this year alongside a lot of supremacist stuff.
Eternal September came up in conversation today about how users don't do effort posts any longer, they just want to leave funny comments below reaction videos and then swipe to the next one.
Anyone got any good effort post oases I can lurk and help out in?
As the submitter, I want to point out that I submitted this post with the original title. The one that makes it clear a16z are behind the social media astroturfing. The mods changed the title.
Wait, is there more than one mod on HN? For some reason I've always thought that @dang guy was the only one. Is he just the top mod, with others underneath him?
Bad call. 'Major VC firm investing in slop farm' is the newsworthy aspect here. 'startup selling slop as a service' is mildly interesting but there are lots of companies like that already.
Why was the original title edited to remove the reference to a16z? Why hide investment in a socially unacceptable product? If you are going to be a scumbag weasel, own it.
Edit: Fine! I found a way to get a16z in there and keep it to 80 chars.
Both for length reasons and because it was clickbait.
The original title doesn’t even have the actual company’s name in it, only the name of the investor, which is intended to elicit just the kind of ragey reaction you’re exhibiting in this comment.
On HN, titles need to be more neutral and factual (i.e., include the name of the company the article is primarily about).
(Also, you seem to be implying some conflict of interest? Doublespeed and a16z have nothing to do with HN/YC.)
Nobody knows what Doublespeed is; everyone knows what a16z is. Doesn't putting the part that's pertinent to people in the headline better serve would-be readers?
I'd say that the change is editorializing more than the original was "linkbait".
A16z invests in _a great many_ companies. Without the company name in the title, you have to click to find out who the company is. That’s the point. The title gets readers riled up and activates them to click.
The title we’ve set is intended to give enough information to pique curiosity for those who will be curious about the topic - the company name, what the company does (AI-generated promotional content), what’s happened (hacked).
I don’t love the title but it’s the best I could come up with to fit within the 80 character limit.
Anyone is welcome to suggest a better one that is compliant with the guidelines.
Heh, fair enough. My first thought was “thousands” - which is true for YC. Then I thought “hundreds?” I have no idea and I don’t really want to spend time trying to find out, partly as it wouldn’t be a precise figure anyway (they wouldn’t disclose that publicly). So it’s “countless” for me.
a16z is incredibly important to this ecosystem, far more so than any individual company. And them investing in many companies does not exempt them from people identifying when they invest in vile companies.
I am very sorry, but that's a critical part of the story.
The title had to be changed to be compliant with the guidelines. It also has to fit under 80 characters. It’s not an easy task and you’re welcome to suggest a better one.
I’m saying the reason I couldn’t include a16z is that I can’t fit it into the title along with all the other details that seem important, notably the name of the company the article is about.
But the whole _point_ is "major VC funding spam outfit", surely? Like, who cares who the spam outfit is? There are lots of them. These phone farms are not exactly rare. The interesting bit is the involvement of a supposedly proper company.
Meta-discussion isn't in the spirit of intellectual curiosity. HN mods deprioritise and/or bury those threads. Even where the points made are otherwise correct or useful.
tomhow's acknowledged the criticism and revised his edit to reflect it. You've won your argument.
I checked out one of the accounts mentioned, mostly to check if I can discern fake accounts. The content is just still pictures. I'd dismiss those whether or not they're AI. Well, I'm not on TikTok anyway.
This reminds me of some YouTube videos from when I was researching some stuff to buy. Those videos are just still images plus text-to-speech narration, usually with annoying background music.
You are making yourself easier to fool: You don't know which fake accounts you overlooked, and by increasing your confidence you make yourself more vulnerable to them in the future.
Your scenario is something like: a guy sees bad AI, good AI, and genuine content in his feed. The bad AI gives him confidence in his ability to detect slop, but he thinks the good AI is genuine content. Here, the higher the quality of the slop, the harder it is to detect.
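A toy simulation of that argument, with made-up detection rates purely for illustration: if the reader only grades himself on the fakes he notices, the good slop that slips through never dents his confidence.

```python
# Toy numbers, purely illustrative of the argument above (not real data).
import random

random.seed(0)

P_CATCH_BAD_SLOP = 0.95   # obvious still-image spam is easy to spot
P_CATCH_GOOD_SLOP = 0.10  # polished AI content mostly passes as genuine

feed = ["bad_slop"] * 200 + ["good_slop"] * 200 + ["genuine"] * 600
caught = missed = 0
for post in feed:
    if post == "bad_slop" and random.random() < P_CATCH_BAD_SLOP:
        caught += 1
    elif post == "good_slop" and random.random() < P_CATCH_GOOD_SLOP:
        caught += 1
    elif post != "genuine":
        missed += 1  # fake content taken at face value

print("fakes caught:", caught, "| fakes missed:", missed)
print("felt accuracy: 100% (every fake he *noticed* really was fake)")
print(f"actual catch rate: {caught / (caught + missed):.0%}")
```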
In this case some slop is hacked and exposed. I check them out to see if they're good yet. The quality of the slop is unrelated to whether they'd get hacked.
https://www.tiktok.com/@chloedav1s_ is the first account mentioned in the article. (Unless TikTok's UI is horrendously bad and I misunderstood) They are literally still images.
Am I to mourn the loss of what I personally consider one of the worst manipulative toxins to ever exist?
Thanks AI.
So it's not unreasonable to say that when the demographics of forums changed, the economic incentive appeared? So it actually depends on demographics?
The bots have gotten a lot smarter about making their ads look organic too. Even easier now with the ability to hide post history
Okay, is this just an ad then?
If you want more photos of his phone farm... it's all on his twitter page: https://x.com/rareZuhair/status/1961160231322517997
"Accelerating the dead Internet"? Why are we, as a community, encouraging the acceleration of enshitification of our common spaces? So weird to me...
If we never do things that later make us cringe and want to correct, we're not reflective and self-critical enough.
>"Take proven content and spawn variation."
It's obviously marketing. But their marketing strategy appears to be being unashamed about ripping off content and creating bot farms.
What are you suggesting they are lying about? They're actually doing it for the good of the world and just pretending they're a bot farm for hire?
I can only assume his VC funders have a bomb collar on him or something, otherwise I don't see why anyone would trust him with a penny.
Yes but they also stand to make money offering services to counteract the services they offer.
And then there's everyone else.
According to Cambridge's data, $100 gets you around 2k fake but verified tiktok accounts: https://cotsi.org/platforms?view=map&platform=lf
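Back-of-envelope arithmetic on that quoted figure; the campaign sizes in the sketch are hypothetical, just to show the scale that pricing implies.

```python
# Unit economics implied by the quoted figure; campaign numbers below are made up.
PRICE_USD = 100
ACCOUNTS_PER_BATCH = 2_000

cost_per_account = PRICE_USD / ACCOUNTS_PER_BATCH
print(f"cost per fake verified account: ${cost_per_account:.2f}")  # ~$0.05

# Hypothetical campaign: one comment per account, 20 fake comments in each of
# 500 threads, so 10,000 accounts are needed.
accounts_needed = 20 * 500
print(f"budget to astroturf 500 threads: ${accounts_needed * cost_per_account:.0f}")  # $500
```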
Probably moves like affiliate/referral linking, client-paid campaigns, CPA lead-generation arbitrage at scale, product seeding.
It's easier to count billionaires who aren't supervillains.
p.s. this is not a great photo of Marc on his Wikipedia page: https://en.wikipedia.org/wiki/File:Marc_Andreessen-9_(croppe...
The call is coming from inside the house.
They used to be at the pinnacle of the VC sector, and now they seem to actively seek out the most toxic portcos possible.
Furthermore: reddit is a platform; Fuentes is content. That's a meaningful difference.
Amongst more general discussion platforms, HN, Metafilter, possibly Tildes.net.
Anything large is by definition popular and common, both terms with freighted meanings. The more so if they're advertising-driven.
Edit: The community has spoken and I've come up with a way to include a16z in the title whilst keeping it under 80 chars.
There is an 80-character limit on titles
This title is 75 characters
A mod changed the title to something other than the article's original title as submitted, to protect a major VC.
Not cool.
And this would be an entirely clickbait-free, fact-based summary of what they are doing.
It's not far off fraud as a service. This activity could get people prosecuted in EU countries and the UK.
(Edit: s/ countless / a great many /)
Best to drop the counterfactual hyperbole... unless A16z's accountants really have dropped the ball and can no longer enumerate their investments.
For some of us these exaggerated claims of greater than aleph-null investments send our eyebrows literally to the stratosphere (/s).
The original title is 75 characters. Your title is 74 characters. If it was edited for length reasons, I'm not sure saving 1 character is worth it.
In this case you aren't backing a phone farm creating ad fraud, but rather an "organic paid media initiative backed by a16z".
Guess they wanted to hide the a16z connection on frontpage, huh?
https://news.ycombinator.com/item?id=46307121
Awesome, cool job. What petulance we have this morning, Tom.
<https://news.ycombinator.com/item?id=29788452>
Lmao. Nice.
I'm not on tiktok and the videos often won't play for me because of that.
Are they still images as videos with tts or literally just still images?