Doesn't matter. We must keep building more and more technology no matter the cost. Have an idea for a business? Build it. Does your business make the lives of people worse? Doesn't matter, keep pushing. Could some new technology ruin the lives and relationships that people have? Doesn't matter, just build it. We always need more, need to do more. Every experiment is valid, every impulse must be followed. More complexity, more control, more distraction, more outrage, more engagement. Just keep building forever no matter the cost.
> You better maximize engagement or you will lose engagement this is a red queen’s race we can’t afford to lose! Burn all the social capital, burn all your values, FEED IT ALL TO MOLOCH!
That's an uncharitable take. Those VCs care very deeply about society, that's why they're funding so much research into the Torment Singularity (and giving so many talks about it) and making sure that the Right People get the Torment Nexus first so "we" can decide how it gets used.
I look at it more like "the fact that we can't align humans/human institutions strongly suggests we won't be able to align something as alien as AGI is likely to be"
Eric Weinstein refers to this as an Embedded Growth Obligation (EGO), whereby organizations and economies at large assume perpetual growth, and things really start to unravel when that growth inevitably slows. It is pretty mindblowing how we have basically accepted growth as the default state; it is not at all a given that things always grow and get better.
> It is pretty mindblowing how we have basically accepted growth as the default state
It is completely to be expected, exactly because it is not new.
It's been scarcely a generation since the peak in net change of the global human population, and will likely be at least another two generations before that population reaches its maximum value. It rose faster than exponentially for a few centuries before that (https://en.wikipedia.org/wiki/World_population#/media/File:P...). And across that time, for all our modern complaints, quality of life has improved immensely.
Of all the different experiences of various cultures worldwide and across recent history, "growth" has been quite probably the most stable.
Culture matters. People's actions are informed by how they are socialized, not just by what they can observe in the moment.
We will achieve essentially zero-cost infinite exponential scalability! The cloud has no limits! InfiniDum enterprises will operate in billions of markets across time space and dimensions of probability!
People want dopamine hits, gamification, addictive distractions, and a culture of competitive perma-hustle.
If they didn't, we wouldn't be having these problems.
The problem isn't AI, it's how marketing has eaten everything.
So everyone is always pitching, looking for a competitive advantage, "telling their story", and "building their brand."
You can't "build trust" if your primary motivation is to sell stuff to your contacts.
The SNR was already terrible long before AI arrived. All AI has done is automate an already terrible process, which has - ironically - broken it so badly that it no longer works.
> You can't "build trust" if your primary motivation is to sell stuff to your contacts
That is false. You build a different type of trust: people need to trust that when they buy something from you, it is a good product that will do what they want. Maybe someone else is better, but not enough better to be worth the time they would need to spend evaluating that. Maybe someone else is cheaper, but you are still reasonably priced for the features you offer. They won't get fired for buying from you, because you have so often been worthy of the trust they give you that in the rare case you do something wrong, it reads as "nobody is perfect" rather than "you are no longer trustworthy" (you can only pull that off a few times before you really do become untrustworthy).
The above is very hard to achieve, and even when you have it, very easy to lose. If you are not yet there for someone, you still need to act like you are and don't want to lose it, even though they may never buy from you often enough to realize you are worth it.
It boggles my mind when, despite my general avoidance of advertising online, I see the language being used. Call me old fashioned, but "viral" is a bad thing to me. "Addictive" is a bad thing. "Tricks" are bad! But this is the language being used to attract customers, and I suppose it works well enough.
> All AI has done is automate an already terrible process, which has - ironically - broken it so badly that it no longer works.
Evil contains within itself the seed of its own destruction ;)
Sure, sometimes you should fight the decline. But sometimes... just shrug and let it happen. Let's just take the safety labels off some things and let problems solve themselves. Let everybody run around and do AI and SEO. Good ideas will prevail eventually, focus on those. We have no influence on the "when", but it's a matter of having stamina and hanging in there, I guess
> If they didn't, we wouldn't be having these problems.
That assumes people have the ability to choose not to do these things, and that they can't be manipulated or coerced into doing them against their will.
If you believe that advertising, especially data-driven personalised and targeted advertising, is essentially a way of hacking someone's mind into doing things it doesn't actually want to do, then it becomes fairly obvious that it's not entirely the individual's fault.
If adverts are 'Buy Acme widgets!' they're relatively easy to resist. When the advert is 'onion2k, as a man in his 40s who writes code and enjoys video games, maybe you spend too much time on HN, and you're a bit overweight, so you should buy Acme widgets!' it calls for people to be constantly vigilant, and that's too much to expect. When people get trapped by an advert that's been designed to push all their buttons, the reasonable position is that the advertiser should take some of the responsibility for that.
That’s true…but I do think people need to learn more that avoidance is a strategy too. The odds are too stacked against the average person to engage properly so just don’t. I don’t know. Sure there are certain unavoidable things but for a large part I think you can just choose to zone out of a lot of the consumerist world now
> That assumes people have the ability to choose not to do these things, and that they can't be manipulated or coerced into doing them against their will.
Within the last year I opened an Instagram account just so I could get updates from a few small businesses I like. I have almost no experience with social media. This drove home for me just how much the "this is where their attention goes, so that's revealed preference" thing is bullshit.
You know what I want? The ability to get these updates from the handful of accounts I care about without ever seeing Instagram's algo "feed". Actually, even better would be if I could just have an RSS feed. None of that is an option. Do I sometimes pause and read one of the items in the algo feed that I have to see before I can switch over to the "following" tab? I do, of course, they're tuned to make that happen. Does that mean I want them? NO. I would turn them off if I could. My actual fucking preference is to turn them off and never see them again, no matter that they do sometimes succeed in distracting me.
Like, if you fill my house with junk food I'll get fatter from eating more junk food, but that doesn't mean I want junk food. If I did, I'd fill my house with it myself. But that's often the claim with social media, "oh, it's just showing people more of what they actually want, and it turns out that's outrage-bait crap". But that's a fucking lie bolstered by a system that removes people's ability to avoid even being presented with shit while still getting what they want.
I do think that in general people are just conditioned by advertising in a general sense. I have family (by marriage) where most conversations just boil down to "I bought [product] and it was _so_ good." or "I encountered a minor problem, and solved it by buying [product]." It's pretty unbearable.
There are times I need a widget but I don't know it exists and so someone needs to inform me. Other times I know I need a widget, but I don't know about Acme and I will want to check them out too before buying.
Most ads are just manipulating me, but there are times I need the thing advertised if only I knew it was an option.
The core of this issue is a power imbalance. Advertisers have the full power of American capital at their disposal, and as many PhDs who know exactly how to exploit human psychology as they need. Asking people to "vote with their wallet", or talking about "revealed preferences", or expecting people to be able to cope with this system is nonsense in the face of the amount of power available to the marketers.
It's fundamentally exploitation on a population scale, and I believe it's immoral. But because it's also massively lucrative, capitalism allows us to ignore all moral questions and place the blame on the victims, who again, are on the wrong side of a massive power imbalance.
Who else can and will stop the infernal machine other than the people? Can't see anyone. I hope you're wrong and expecting people to cope is not nonsense, because expecting the FDA or UN or Trump or Xi to do it is even more nonsense.
What authority are you going to complain to to "correct the massive power imbalance"? Other than God or Martians I can't see anything working, and those do not exist.
Fixed it for you: People are most easily manipulated into dopamine hits, gamification, addictive distractions, and a culture of competitive perma-hustle.
That is only true as long as people are the only entities who can spend money. As soon as people give AI the power to spend money, we will see companies designing products to appeal to AIs. A new form of SEO spam, if you will.
This is ignoring the Marketing to Engineering ratio. For most of recent history, technology companies have had to spend at least as much on marketing as on engineering in order to survive, and spending two to ten times as much on marketing as on engineering is common among successful companies. Who is going to buy the thing is the most important question, and without solid answers there is nothing, no matter how much technology was engineered.
Now this formula has been complicated by technological engineering taking over aspects of marketing. This may seem to be simplifying and solving problems, but in ways it actually makes everything more difficult. Traditional marketing that focused on convincing people of solutions to problems is being reduced in importance. What is becoming most critical now is convincing people they can trust providers with potential solutions, and this trust is a more slippery fish than belief in the solutions themselves. That is partly because the breakdown of trust in communication channels means discussion of solutions is likely to never be heard.
One of the potential upsides to this is that people just might start taking time to engage in a bit of critical thinking before reacting. Is this real? How likely is this AI nonsense? What is the source? Is this the full picture? etc.
Neil deGrasse Tyson has a quote expressing concern about the future impact of AI on information credibility.
The exact quote is:
"I foresee the day where AI become so good at making a deep fake that the people who believed fake news as true will no longer think their fake news is true because they'll think their fake news was faked by AI."
I wish people who believed that kind of fake news had this piece of critical thinking. I don't think they do though, they'll take whatever confirms their views and reject everything else as faked by AI with no logic or proof whatsoever.
Almost everyone believes they're thinking critically; that's just how it feels to think at all. As an aside, I wonder about the average person who extols critical thinking, and how proficient they actually are at it; in my experience they're often conformist and susceptible to uncritically accepting consensus positions.
The truth is, for those of us with lower IQ, it doesn't matter how critically we think, we lack the knowledge and mental dexterity to reliably arrive at a nuanced and deep understanding of the world.
You have to stop dreaming of a world where everyone can sort everything out for themselves, and instead build a world where people can reliably trust expert opinion. It's about having a high-trust society. That requires people in privileged positions to not abuse their advantage in a short term way, at the cost of alienating, and losing the trust of the unwashed masses.
Because that's what has happened, the experts have been exploited as a social cudgel, by the psychopathic and malignant managerial class, in such an obvious and blunt way, that even those of us who are self-aware of our limitations, figure we're as likely to get it right ourselves, as to get honest and correct information from our social institutions.
There's always someone willing to outthink you. That's the whole premise of a magic show: there are people willing to dedicate irrational amounts of time to trick you in to believing something that isn't true.
But if you accept my premise, it suggests a different course of action than most people are focused on today. That is, if you're a good person, and want to build a healthier society, then rather than focusing on the stupidity of the masses, and trying to suppress every errant idea that emerges from them, you should instead create an incentive structure that engenders their trust. You would focus on stern, even corporal, punishments for those at the pinnacle of society, more than those at the bottom. Politicians should not emerge from their time in government with hundreds of millions of dollars in ill-gotten gains; which is a non-partisan problem today. And any "scientist" that fakes research data, should be treated very harshly, as a criminal. Undermining public trust in expert opinion causes more death and hardship than a typical street-thug murderer.
There is zero chance of making everyone smart enough to navigate the world adroitly. But there is a slightly better than zero chance we could organize our society to earn their trust.
> Will you still be here in 12 months when I’ve integrated your tool into my workflow?
This is the biggie; especially with B2B. It's really 3 months, these days. Many companies have the lifespan of a mayfly.
AI isn't the new reason for this. It's been getting worse and worse, in the last few years, as people have been selling companies; not products, but AI will accelerate the race to the bottom. One of the things that AI has afforded, is that the lowest-tier, bottom-feeding scammer, can now look every bit as polished and professional as a Fortune 50 company (often, even more).
So that means that not only is the SNR dropping, the "noise" is now a lot riskier and uglier.
> One of the things that AI has afforded, is that the lowest-tier, bottom-feeding scammer, can now look every bit as polished and professional as a Fortune 50 company (often, even more).
Yuval Noah Harari, of Sapiens fame [0], has a great quote (paraphrasing):
Interviewer: How will humans deal with the avalanche of fake information that AI could bring?
YNH: The way humans have always dealt with fake information: by building institutions we trust to provide accurate information. This is not a new phenomenon btw.
In democracies, this is often either the government (e.g. the Bureau of Labor Statistics) or newspapers (e.g. the New York Times) or even individuals (e.g. Walter Cronkite).
In other forms of government, it becomes trust networks built on familial ties e.g. "Uncle/Aunt is the source for any good info on what's happening in the company" etc
The problem is that too many people just don't know how to weigh different probabilities of correctness against each other. The NYT is wrong 5% of the time - I'll believe this random person I just saw on TikTok because I've never heard of them ever being wrong; I've heard many stories about doctors being wrong - I'll listen to RFK; scientific models could be wrong, so I'll bet on climate change being not real etc.
Trust is much more nuanced than N% wrong. You have to consider circumstantial factors as well: who runs the NY Times, who gives them money, what was the reason they were wrong, and even when they're not wrong, what information are they leaving out. The list goes on. No single metric can capture this effectively.
Moreover, the more political a topic, the more likely the author is trying to influence your thoughts (but not me, I promise!). I forget who, but a historian was asked why they wouldn't cover civil war history, and responded with something to the effect of "there's no way to do serious work there because it's too political right now".
It’s also why things like calling your opponents dumb, etc is so harmful. Nobody can fully evaluate the truthfulness of your claims (due to time, intellect, etc) but if you signal “I don’t like you” they’re rightfully going to ignore you because you’re signaling you’re unlikely to be trustworthy.
> You have to consider circumstantial factors as well
This, too, goes into the probability of something being right or wrong. But the problem I'm pointing out is an inconsistent epistemology. The same kind of test should be applied to any claim, and then they have to be compared. When people trust a random TikToker over the NYT, they're not applying the same test to both sides.
> It’s also why things like calling your opponents dumb, etc is so harmful.
People who don't try to have any remotely consistent mechanism for weighing the likelihood of one claim against a contradicting one are, by my definition, stupid. Whether it's helpful or harmful to call them stupid is a whole other question.
My experience has been that people who trust some form of alternative news over the NYT are not preferring "some random TikToker".
And a lot of the time, that trust is specific to a topic, one which matters to them personally. If they cannot directly verify claims, they can at least observe ways in which their source resonates with personal experience.
Yes, but their choice of whom to trust is wildly inconsistent. There is no consistent test by which they judge some claim as more or less trustworthy than an opposing claim. Of course, none of us are fully consistent, but some people are extremely inconsistent.
Ok, so what is a consistent epistemology that would lead someone (probably an American) to believe the following things: planes are safe, atoms and viruses are real, the world is a globe, vaccines cause autism, Tylenol causes autism, vitamins are helpful, mobile phones do not cause cancer, weather forecasts are usually more-or-less right, man-made climate change is not real, the government can control the weather, GPS is reliable, stimulus causes inflation but tariffs do not, immigration harms my personal economic opportunities but natural population growth does not, the Roman Empire was real but descriptions of its ethnic makeup or the reasons for its collapse are not, etc. etc.? (the content of each individual belief is less important than the composite whole, where the scholarship of strangers is sometimes accepted and sometimes rejected in a way that isn't explained by, say, reputation or replication)
The only thing I can come up with is that they do believe rigorous scholarship can arrive at answers, but sometimes those who do have the "real answers", lie to us for nefarious reasons. The problem with that is that this just moves the question elsewhere: how do you decide, in a non-arbitrary way, whether what you're being told is an intentional lie? (Never mind how you explain the mechanism of lying on a massive scale.) For example, an epistemology could say that if you can think of some motivation for a lie then it's probably a lie, except that this, too, is not applied consistently. Why would doctors lie to us more than mechanics or pilots?
Another option could be, "I believe things I'm told by people who care about me." I can understand why someone who cares about me may not want to lie to me, but what is the mechanism by which caring about someone makes you know the truth? I'm sure that everyone has had the personal experience of caring about someone else and still advising them incorrectly, so this, too, quickly runs into contradictions.
I think the president of the United States believes all or nearly all of these things (or claims to).
And I did ask such people such questions - for example, people who fly a lot yet believe "chemtrails" are poisoning us - but their answers always ended up with some arbitrary choice that isn't applied consistently. Pretty much, when forced to choose between claims A and B, they go by which of them they wish to be true, even if they would, in other situations, judge the process of arriving at one of the conclusions to be much stronger than the other. They're more than happy to explain to you that they trust vitamins because of modern scientific research, which they describe as fraudulent when it comes to vaccines.
Their epistemology is so flagrantly inconsistent that my only conclusion was that they're stupid. I'm not saying that's an innate character trait, and I think this could well be the result of a poor education.
5% wrong is an extremely charitable take on the NYT.
I once went to a school that had complimentary subscriptions. The first time I sat down to read one, there was an article excoriating President Bush about hurricane Katrina. The entire article was a glib expansion of the opinion of an "expert" who was just some history teacher who said that it was "worse than the battle of Antietam" for America. No expertise in climate. No expertise in disaster response. No discussion of facts. "Area man says Bush sucks!" would have been just as intellectually rigorous. I put the paper back on the shelf and have never looked at one since.
That sounds like something from the opinion page rather than the news. That is ok, as long as it's clearly labeled. It doesn't sound particularly high quality; perhaps they were a local giving their view from the community.
Regardless, clearly labeled opinions are standard practice in journalism. They're just not on the front page. If you saw that on the front page, then I'd need more context, because that is not common practice at NYT.
It was on the front page, and, no, it wasn’t a labeled editorial. If you feel the need to research this to defend their honor, it would have been around fall 2005. I don’t assume their journalism has improved in the past 20 years, and I’m OK with not knowing.
So since incorporating in 1851, let's say they put out 60,000 issues. 1 issue would represent about 0.002% of their output. How do you get to over 5% wrong?
It's a spot check. They checked one article from one of those issues and spotted an error, so in their view the odds of >5% wrongness are high. (They'd need a larger sample size and some statistics to make such a claim, but your numbers are certainly way off too, just in the other direction.)
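To put numbers on the sample-size point, here's a minimal sketch in Python (using the standard Wilson score interval; the spot-check counts are invented for illustration). One checked article tells you almost nothing about the true error rate.

```python
import math

def wilson_interval(errors, n, z=1.96):
    """95% confidence interval for an error rate, given `errors`
    bad articles out of `n` checked (Wilson score interval)."""
    p = errors / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# One bad article out of one checked is compatible with almost any rate:
print(wilson_interval(1, 1))    # roughly (0.21, 1.0)
# A hundred spot checks start to pin the rate down:
print(wilson_interval(8, 100))  # roughly (0.04, 0.15)
```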
... and since we now know the world is more complex than what we used to think, say, 1000 years ago, this kind of "second-order thinking" is required more and more.
COVID ended my trust in media. I went from healthy skepticism to assuming everything is wrong/a lie. There was no accountability for this so this will never change for me. I am like the people who lived through the Great Depression not trusting banks 60 years later and keeping their money under the mattress.
I've seen this take a few times recently, including from a relatively famous person who seemed to be on my wavelength generally but I don't quite understand what is meant by it.
Could you quickly summarize how and why you felt let down by the media in regards to COVID?
Seconding this, I somehow managed to avoid encountering the coverage of COVID that people say shook their faith in institutions, despite following the news pretty closely. Like to the point that if not for others' reactions it'd never have occurred to me to regard the coverage as notably bad (unlike, say, the lead-up to the war in Iraq). I'd love to know what people are talking about when they bring this up, because I truly have no idea.
So the position of a sceptic is epistemologically valid: you distrust any claim that is under, say, 95% certainty. But this bar should be applied consistently, and sometimes you have to bet. For example, in the question of getting a vaccine or not, you must choose, and you should choose whatever claim is more likely to get a better result than the other.
The key is that distrusting one side or source does not logically entail trusting another source more. If you think that the media or medical establishment is wrong, say, 45% of the time, you still have to find a source of information that is only wrong 40% of the time to prefer it.
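A toy sketch of that last point in Python (all percentages invented for illustration): losing trust in one source doesn't change the fact that you still have to bet on whichever source is wrong least often.

```python
# You distrust the establishment (wrong 45% of the time, say). That alone
# doesn't justify switching: the alternative must be wrong *less* often,
# and here it isn't.
p_wrong = {"establishment": 0.45, "alternative_feed": 0.60}

def better_bet(sources):
    # bet on the source least likely to be wrong,
    # even if you wouldn't call either of them "trusted"
    return min(sources, key=sources.get)

print(better_bet(p_wrong))  # -> "establishment"
```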
For sure. He then goes on to mention "in democracies", but a lot of democracies are now failing, in part because the institutions like the free press are being directed by their billionaire owners, or suppressed. And the family ties are being impacted heavily by mass misinformation and propaganda campaigns online, where their publishers are actively pushing it themselves (major worldwide social networks are now state influenced and / or their top brass has subjugated themselves to the reigning parties and / or any countermeasures have been removed).
This was done intentionally, over decades, to try to push ‘trust’ closer to where it can be controlled. Religion, family ties (through propaganda), etc.
Trusting institutions is fine, but you have to trust people or institutions for the right things; blind trust is harmful.
I'll trust my doctor to give me sound medical advice and my lawyer for better insights into law. I won't trust my doctor's inputs on the matters of law or at least be skeptical and verify thoroughly if they are interested in giving that advice.
Newspapers are a special case. They like to act as the authoritative source on all matters under the sun, but they aren't. Their advice is only as good as the sources they choose, and those sources tend to vary wildly for reasons ranging from incompetence all the way to malice, on both sides.
I trust BBC to be accurate on reporting news related to UK, and NYT on news about US. I wouldn't place much trust on BBC's opinion about matters related to the US or happenings in Africa or any other international subjects.
Transferring or extending trust earned in one area to another unrelated area is a dangerous but common mistake.
The thing is, building such institutions and maintaining trust is expensive. Exploiting trust is lucrative (fraud, etc.) It's also expensive to not trust - all sorts of opportunities don't happen in that scenario if, say, you can't get a friend or relative in the right place.
There are many equilibrium points possible as a result. Some have more trust than others. The "west" has benefited hugely from being a high trust society. The sort of place where, in the Prisoner's Dilemma matrix, both parties can get the "cooperate" payoff. It's just that right now that is changing as people exploit that trust to win by playing "defect", over and over again without consequence.
How very inconvenient it is, then, that at the same time intentional efforts to spread uncertainty and to erode trust in traditional institutions are at an all-time high! Must be a coincidence.
It's a feedback loop; you need things like freedom of speech and press to get a functional and free democracy, but you need a functional and free democracy to have freedom of speech / press. Infringe on one and you take down the other. But you need to strip down the judicial branch of a free democracy first, because democracy and freedom of speech/press are protected by a constitution in most cases.
The odd thing is that in the US we deliberately do not differentiate free speech from other things such as dollars, lies, propaganda and outright manipulation. This is a relatively new thing, forced upon us in the last few decades, and it is causing a spectacular crash in trust towards all our institutions.
But billionaires are making and keeping ever more money than before, so it isn't a problem.
Our familial ties have been corrupted, supposing they were ever anything a sane person should've relied upon. And if humans can build institutions they trust, what happens when AI can build fake, simulated institutions that hit all the right buttons for humans to trust just as if they were of the human-created variety? Do those AIs lock in those pseudo-institution followers forever? Walter Crondeepfake can't not be trusted, just listen to his gravitas!
That's not how they did it though. Trusted institutions are only really needed in a trustless society and reliance on them as a source of truth is a really new trend. Society used to be trustful.
Unfortunately that's not what happens.
BBC, Al-Jazeera, RT, CBC are all propaganda sources and are not sources of information.
The other family members will get their information from those sources, so family can't be trusted either.
And for the sources I consider trustworthy, my opinion of them is most likely skewed by my own bias, and others will consider them propaganda as well.
CBC and BBC aren't perfect, but I trust them leagues over any billionaire-owned media, like anything from Post Media, the Murdochs, or Bezos. Really, any for-profit news isn't to be trusted.
I needed to get some builder quotes for my home. It did not enter my mind to go online to search for any.
I just reached out to my family for any trustworthy builders they've had, and struck up conversations with some of my fancier neighbors for any recommendations.
(I came to the conclusion that all builders are cowboys, and I might as well just try doing some of this myself via youtube videos)
Using the internet to buy products is not a problem for me; I know roughly the quality of what I expect to get and can return anything not up to standard. Using the internet to buy services, though? Not a chance. How can you refund a service?
When we needed some work done, we asked family and friends too, and ended up with a cowboy. When the work needed to be re-done, we looked up local reviews for contractors, and ended up with someone who was more expensive but also much more competent, and the work was done to a higher standard.
> ended up with someone who was more expensive but also much more competent, and the work was done to a higher standard.
How do you know that? Or is it just that your bias is that cowboys are bad, and so you assume someone who dresses and acts better is better?
Now step back; I'm not asking you personally, but the general person. It is possible that you have the knowledge and skills to do the job, and so you know how to inspect it to ensure it was done right. However, the average person doesn't have those skills, and so can't tell the well-dressed person who does a bad job that looks good apart from the poorly dressed person who does a good job that doesn't look as good.
Perhaps I wasn't clear - I don't know enough to say if the job was good or bad just by inspecting it, I know the first job was bad because it didn't solve the problem, and then a more expensive contractor explained why, and did solve the problem.
Our issue was water intrusion along a side wall that was flowing under our hardwoods, warping them and causing them to smell. The first contractor replaced the floor and added in an outside drain.
The drain didn't work, and the water kept intruding and the floor started to warp again.
When we got multiple highly rated contractors out, all of them explained that the drain wasn't installed correctly, that a passive drain couldn't prevent the problem at that location, and that the solution was to either add an actively pumped drain or replace the lower part of the wall with something waterproof. We ended up replacing that part of the wall, and that has fixed the issue along that wall. (We now have water intrusion somewhere else, sigh).
If anything, I was originally biased for the cowboy, as they came recommended, he and his workers were nice, and the other options seemed too expensive & drastic. Now I've learned my lesson, at least about these types of trickier housing issues.
Also, no one mentioned evaluating someone by how they're dressed - the issue was family/friend recommendations vs online reviews, and while I do take recommendations from friends and family into account, I've actually had better luck trusting online (local) reviews.
Every house is different and so every job is custom. Whatever standards you think the builder is enacting to get the job done to an agreeable price is likely an ad-hoc solution that you yourself could have done as an amateur if you had the tools.
For every standard to be met, you compromise either on cash or time.
I wish we were talking about what's next versus what's increasingly here.
How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems? Short term, sure. But infinite (also) implies long term.
I wish I had a really smart game theorist friend who could help me project forward into time if for nothing other than just fun.
Don't get me wrong, I'm not trying to reduce the value of "ouch, it hurts right now" stories and responses.
But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.
What's next after trust collapses? All of us just give up? What if that collapse is sooner than we thought; can we think about the fun problem now?
From a game-theory perspective, if players rush the field with AI-generated content because it's where all the advantages are this year, then there's going to be room on the margins for trust-signaling players to advance themselves with more obviously handspun stuff. Basically, a firm handshake and an office right down the street. Lunches and golf.
The real question to ask in this gold rush might be what kind of shovels we can sell to this corner of hand shakers and lunchers. A human-verifiable reputation market? Like Yelp but for "these are real people and I was able to talk to an actual human." Or diners and golf carts, if you're not into abstractions.
That gets my brain moving, thanks. What do you think those who are poor/rich in a trust economy look like? How much of a transformation to trust economy do you think we make?
Human biological limits prevent the realization of stable equilibrium at the scale of coordination necessary for larger emergent superstructures.
Humans need to figure out how to become a eusocial superorganism, because we're past the point where individual groups don't produce externalities that are existential to other groups/individuals.
I don't think that's possible, so I'm just building the machine version.
> How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems?
You're assuming they can be fixed.
> But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.
I'm sure the peasants during the Holodomor also thought: "wow, what an interesting problem to solve".
I don't have the time to read all four stories that ChatGPT turned up right this minute, but I now have cause to believe that at least some minority of those peasants you refer to did find fun in solving their problems.
I'm with that group of people. What was your point in bringing this up?
AI-esque blog post about how infinite AI content is awful, from "a co-founder at Paid, which is the first and only monetization and billing system for AI Agents".
I'm not saying it's actually written with AI (and indeed, I don't think that's the case; hence my calling it "AI-esque" rather than actually AI generated). It's just that it's a particular style of businessy blog writing that, though originated by humans, AI is now often used to crank out. Lots of bullet points, sudden emphases, etc.
It's just funny, even by hand, to be writing in the infinite AI content style while lamenting the awfulness of infinite AI content while co-founding a monetization and billing system for AI agents.
I think this basically proves your point. There were things about it that made me think it may have been at least "AI-assisted", until I saw your "guaranteed bot-free" thing at the bottom. Anyone doing entirely hand-written things from now on are going to be facing a headwind of skepticism.
this is a funny phenomenon that I keep seeing. I think people are going through the reactionary "YoU mUsT hAvE wRiTtEn ThIs oN a CuRsEd TyPeWrItEr instead of handwriting your letter!1!!" phase.
hopefully soon we move on to judging content by its quality, not whether AI was used. banning digital advertisement would also help align incentives against mass-producing slop (which had been happening long before ChatGPT was released)
I don't have the time or energy to judge content by its quality. There are too many opportunities for subtle errors, whether made maliciously or casually. We have to use some non-content filter or the avalanche of [mis]information will bury us. We used to be able to filter out things with misspellings and rambling walls of text, and presumably most of the rest was at least written by a human you could harangue if it came to that. Now we're trying to filter out content based on em-dashes and emoji bullet lists. Unfortunately that won't be effective for very long, but we have to use what we've got, because the alternative is to filter out everything.
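For what it's worth, here is roughly what that kind of non-content filtering amounts to, as a deliberately naive Python sketch (the signals and weights are my own stand-ins, and as noted above they won't stay effective for long):

```python
import re

# Crude surface signals of "AI-ish" text: em-dashes, emoji bullet lists,
# random bolding. This is a heuristic stopgap, not a classifier, and it
# will misfire in both directions.
SIGNALS = [
    (re.compile("\u2014"), 1.0),                              # em-dashes
    (re.compile(r"^\s*[\U0001F300-\U0001FAFF]", re.M), 1.5),  # emoji bullets
    (re.compile(r"\*\*[^*\n]+\*\*"), 0.5),                    # random bolding
]

def slop_score(text):
    return sum(weight * len(pattern.findall(text))
               for pattern, weight in SIGNALS)

print(slop_score("Key takeaways \u2014 **unlock synergies**"))  # 1.5
```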
This didn’t seem AI-generated to me, although it follows the LinkedIn pattern of “single punchy sentence per paragraph”. LinkedIn people wrote like this long before LLMs.
I do love the irony of someone building a tool for AI sales bots complaining that their inbox is full of AI sales slop. But I actually agree with the article’s main idea, and I think if they followed it to its logical conclusion they might decide to do something else with their time. Seems like a great time to do something that doesn’t require me to ever buy or sell SaaS products, honestly.
I don't think this shows that you can't trust things. I think it means trust should be earned.
We might be transitioning to a world where trust has value and is earned and stored in your reputation. Clickbait is a symptom of people valuing attention over trust. Clickbait spends a percentage of their reputation by trading it for attention.
In a world of many providers, most people have not heard of any particular individual provider. This means they have no reputation to lose, so their choice to act in a reputation losing manner is easy.
Beyond a certain scale, when everyone can play that game, we end up with the problem that this article describes. The content is easy but vacuous. There are far more people vying for the same number of eyeballs now.
The solution is, I believe, earned trust. Curators select items from sources they trust. The ones that do a good job become trusted curators. In a sense HackerNews is a trusted curator. Reddit is one that is losing, or has lost, trust.
AI could probably take on some of the role of that curation. In the future perhaps more so. An AI can scan the sources of an article to see if the sources make the claims that the article says it makes. I doubt it can do so with sufficient accuracy to be useful right now, but I don't think that is too far off.
Perhaps the various fediverse reddit clones had the wrong idea. Maybe they should work in a distributed fashion, where each node is a subreddit analogue operated with its own way of curation; then an upper level of curation can make a site out of the groups it trusts.
This makes a multi level trust mechanism. At each level there are no rules governing behaviour. If you violate the values of a higher layer, they lose trust in you. AI could run its own curation nodes. It might be good at it or it might be terrible, it doesn't really matter. If it is consistently good, it earns trust.
I don't mind there being lots of stuff, if I can still find the good stuff.
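To make the earned-trust mechanism above concrete, a minimal sketch in Python (the class, the Laplace-style scoring, and the threshold are my own assumptions, not how HN or Reddit actually work):

```python
class Curator:
    """A curation node whose trust is earned from its track record."""
    def __init__(self, name):
        self.name = name
        self.good = 1  # endorsements that panned out (Laplace prior)
        self.bad = 1   # endorsements that didn't

    @property
    def trust(self):
        return self.good / (self.good + self.bad)

    def record(self, panned_out):
        # violating the values of a higher layer costs trust here
        if panned_out:
            self.good += 1
        else:
            self.bad += 1

def surface(picks, curators, threshold=0.7):
    """Upper-level curation: only show items endorsed by curators that
    have earned enough trust. An AI-run node plays by the same rules."""
    return [item for name, items in picks.items()
            if curators[name].trust >= threshold
            for item in items]

curators = {"hn": Curator("hn"), "ai_node": Curator("ai_node")}
for _ in range(8):
    curators["hn"].record(True)     # a long run of good picks
curators["ai_node"].record(False)   # one bad pick early on

print(surface({"hn": ["article A"], "ai_node": ["article B"]}, curators))
# -> ['article A']  (hn trust ~0.9, ai_node trust ~0.33)
```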
You can't build trust in your OS (operating system) when your OS spies on the entire customer base and you pass it off as 'telemetry'. Or when you remotely target the OS to implement a radical change and force it to be installed as an 'update'.
I stopped accepting telephone calls before 2010. They still ring the phone.
What I get from the article is that, proving that a company will stick around for a while after you’ve subscribed is hard now, because anybody can AI generate the general vibe of the marketing department of a big established player. This seems like it’ll be devastating for companies whose business model requires signing new users up for ongoing subscriptions.
Maybe it could lead to a resurgence of the business model where you buy a program and don’t have to get married to the company that supports it, though?
I’d love it if the business model of “buy our buggy product now, we’ll maybe patch it later” died.
you need to prove beyond a doubt that YOU are the right one to buy from, because it's so easy for 3 Stanford dropouts in a trenchcoat to make a seemingly successful business in just a few days of vibecoding.
I think the point is that nobody will give companies money unless the product already works. No more "but in a month this'll get a really cool update that'll add all these features". If you can't trust that a company will continue to exist, you have to be confident that what you're buying is acceptable in its current state.
The modern software market actually seems like a total inversion of normal human bartering and trade relationships…
In Ye Olden Days, you go to the blacksmith, and buy some horseshoes. You expect the things to work, they are simple enough that you can do a cursory check and at least see if they are plausibly shaped, and then you put them on your horse and they either work or they don’t. Later you sell him some carrots, buy a pot: you have an ongoing relationship checkpointed by ongoing completed tasks. There were shitty blacksmiths and scummy farmers, but at some point you get a model of how shitty the blacksmith is and adjust your expectations appropriately (and maybe try to find somebody better when you need nails).
Ongoing contracts were the domain of specialists and somewhat fraught with risk. Big trust (and associated mechanics, reputation and prestige). Now we're negotiating ongoing contracts for our everyday tools; it is totally bizarre.
> In Ye Olden Days, you go to the blacksmith, and buy some horseshoes. You expect the things to work, they are simple enough that you can do a cursory check and at least see if they are plausibly shaped, and then you put them on your horse and they either work or they don’t
Nit: that is not how it worked. You took your horse to the blacksmith and he (almost always he - blacksmiths benefit from testosterone even if we ignore the rampant sexism) made shoes to fit. You knew it was good because the horse could still walk (if the blacksmith messes up, that puts a nail in the flesh instead of the hoof, and the horse won't walk for a few days while it heals). In 1600 he made the shoes right there for the horse; in 1800 he bought factory-made horseshoes and adjusted them. Either way you never saw the horseshoes until they were on the horse, and your check was only that the horse could still walk.
The annoying thing is, there was a voice in the back of my head saying “I’m pretty sure the blacksmith was more involved in the horse-shoeing process” as I wrote the post, but I’d already written enough of the post that I didn’t want to bother checking.
Well, no worries. If you subscribe to the post+ service I’ll fix it in a couple years, promise.
What you’re describing is basically the Drift Principle. Once a system optimizes faster than it can preserve context, fidelity is the first thing to go. AI made the cost of content and the cost of looking credible basically zero, so everything converges into the same synthetic pattern.
That’s why we’re seeing so much semantic drift too. The forms of credibility survive, but the intent behind them doesn’t. The system works, but the texture that signals real humans evaporates. That’s the trust collapse. Over optimized sameness drowning out the few cues we used to rely on.
I think this "drift principle" you're pushing is just called bias or overfitting. We've overfit to engagement in social media and missed the bigger picture, we've overfit to plausible language in LLMs and missed a lot.
We need a PageRank-like algorithm for "Trust / Human Content" to be applied directly to the source of such content. E.g. all three of the following channels are AI made, but their content can be likened to an advanced AI version of audio-based videos of Wikipedia articles. If a video is providing just a summary based on established historical facts, even though it is AI based, how is it different from referring to a thesaurus or dictionary? Aren't such videos making "knowledge" accessible?
FINAL Financial hours of U.S.A. just before the 1929 crash
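The classic PageRank computation adapts to this idea almost directly. A minimal sketch in Python, over a hypothetical "vouches for" graph (the node names and edges are invented for illustration):

```python
def trust_rank(nodes, edges, damping=0.85, iters=50):
    """PageRank over a 'vouches for' graph: an edge u -> v means
    source u cites / vouches for source v."""
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [v for u, v in edges if u == n] for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for u in nodes:
            targets = out[u] or nodes  # dangling node: spread rank evenly
            share = damping * rank[u] / len(targets)
            for v in targets:
                new[v] += share
        rank = new
    return rank

nodes = ["wiki_reader_ai", "history_channel", "slop_farm", "wikipedia"]
edges = [("wiki_reader_ai", "wikipedia"),   # AI channel citing its source
         ("history_channel", "wikipedia")]  # nobody vouches for slop_farm
print(trust_rank(nodes, edges))  # wikipedia, the cited source, ranks highest
```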
One small one I do not agree with is "Are you burning VC cash on unsustainable unit economics?". I think it's safe to conclude by now that unsustainable businesses can be kept alive for years as long as the investors want it.
AI didn't help documentation at all. I had to work with auth0, for example, and their documentation is such bloat that I am already prototyping with better-auth.
No structure, outdated stuff marked as "preview" from 2023/2024, Wikipedia-like in-depth articles about everything, but nothing for simple questions like: how to implement a backend for frontend.
You find fragments and pieces of information here and there - but no guidance at all. Settings hidden behind tabs etc.
A nightmare.
No sane developer would have made such a mess; time constraints alone would have forced something leaner than this bloat. You see and experience first hand that the few gems are from the trenches, spelling mistakes and all.
This isn't limited to sales. The trust collapse is also coming for the public debate, interpersonal relationships and probably more stuff than I can imagine right now.
I predict a renaissance of meeting people in person.
So I went on X after a long break from social media, and my feed is full of tips like this one:
Growing on X is so simple I’m shocked it works.
100x comments a day
10x posts a day
15x DM’s a day
1x thread a day
1x email a day
This is how you grow your presence on X.
Even if having a presence matters, how can you actually say something meaningful if you post 10 times a day - there's no way (unless you just repeat yourself). Hopefully my algorithm's just gone weird but sadly the people I used to follow stopped posting.
My screening inbox is full of the same exact form of engagement, almost identical to the one mentioned in the article. "I'm curious..." and then some interval later a follow up and then a "I don't seem to be reaching you" e-mail and by that point I have noticed and blocked them. It is fine for me, I have a system to handle it, but my receptionists often forward me these things from their inbox which bypasses my controls.
It's not just that it is 0 effort, it also sucks, and it is increasingly not relevant because their agents are just scooping up stuff to reach out about and they aren't even selling something that you would need to buy.
I just wish that we could go back to the old way. There should be a cost to attempt to get a sales lead.
What if this is the plan all along? People losing trust in media, so the rich and powerful can continue doing shit without getting exposed any more, because now they always can say it's just AI, and didn't really do this or that?
I think you're conflating the cost to consumers to use AI and the cost to run AI. Sure, it's a house of cards, but the cost to consumers still rounds to zero.
From reading some of the reactions to this post, this quote comes to mind by Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
There are a lot of people here who have spent a lot of time, money, and effort building AI products. And they may be good and worthwhile! I'm literally one of them. But you still see some people totally underestimate this public-facing trust collapse and the growing anti-AI sentiment in general.
AI slop has tarnished the one remaining pure thing I do on the internet, which is watch videos of cute animals. I've spotted a good amount in my Instagram feed lately where I only noticed it was fake at the last second. I'm sure some of the ones I've watched were indeed AI. It feels like a betrayal to get tricked like that.
We see business go through this cycle a lot. Some new "better cheaper" thing comes along. Everyone implements it to keep up with the Joneses. Suddenly there's no differentiation because everyone has it and everyone thinks it sucks. Suddenly going back to some reworked version of the old thing is the new black.
One such example was call centers. In the 2000s implementing a call center in India was all the rage on cost cutting. The customer experience was terrible and suddenly having a US-based call center (the thing companies just abandoned) was now a feature.
I think we’ll see similar things with AI. Everyone will get flooded with AI slop. Folks will get annoyed and suddenly interacting with a real human or a real human writing original content will be a “feature” that folks flock to.
Problem is, how do you find real humans in the first place if you don't know them? It's easy enough to walk/drive around my city and talk to people. However, there are a lot of topics where the experts live elsewhere, so that won't work.
Anyone else see AI as nothing more than a logging system for citing real world experience and expertise?
In order for any AI/ML content to have value, it must cite where the accumulated information came from. By not doing so, it is nothing more than a custom Wikipedia-esque source with the motto _Trust Me Bro_.
Citations, and the lack thereof, should be a simple key factor in evaluating trust. What are your sources for this idea / answer?
I'm already seeing this. I very much fall into the category of 'delete all email offers' as I'm a small youtuber, big enough to be targeted by AI sponsor deals, so I'm just buried with it.
The last five times I've looked at something in case it was a legitimate user email, it was AI promotion from someone just like in the article.
Their only way to escalate, apart from pure volume, is to take pains to intentionally emulate the signals of someone who's a legitimate user needing help or having a complaint. Logically, if you want to pursue the adversarial nature of this farther, the AIs will have to be trained to study up and mimic the dialogue trees of legitimate users needing support, only to introduce their promotion after I've done several exchanges of seemingly legitimate support work, in the guise of a friend and happy customer. All pretend, to get to the pitch. AI's already capable of this if directed adeptly enough. You could write a script for it by asking AI for a script to do exactly this social exploit.
By then I'll be locked in a room that's also a Faraday cage, poking products through a slot in the door—and mocking my captors with the em-dashes I used back when I was one of the people THEY learned em-dashes from.
One thing about it, it's a very modern sort of dystopia!
YouTubers and other social media influencers are a sort of royalty now, getting to decide by fiat which companies live or die.
But you can’t really even make the case to them anymore because like you said they can’t/won’t even read your email.
What mostly happens is they constantly provide free publicity to existing big players whose products they will cover for free and/or will do sponsored videos with.
The only real chance you have to be covered as a small player is to hope your users aggregate to the scale where they make a request often enough that it gets noticed and you get the magical blessing from above.
Not sure what my point is other than it kinda sucks. But it is what it is.
Or know people personally. There's two people I promote because they actively helped me do my own work, pitched in open source code and did development to support my project. There's another guy who lives in my town, so he gets some mention just because of that (his work's good, but that's my angle for mentioning it). And a microphone company got a shout out not because they give me microphones, but because the guy running the company noticed and liked my ethos and did a repair for free for me. That counts as a form of sponsoring so I talked him up, am already a fan of his work.
Make friends and work with people where possible. I get that some of this only works for us open source types, but the microphone guy isn't, he just did good work. I initially heard of his company through a pro sound engineer website, and ran with it when the advice turned out to be good.
Yeah, and I don’t mean to be complaining. I made the choice to move to the middle of nowhere and change industries. None of my contacts have relevance in this one and there is no presence in my area for networking.
In any case, I can’t complain anyway because I have received my share of favorable coverage. It is just less frequent when you don’t have the personal connections.
I'm honestly all for it. As AI keeps poisoning all aspects of online content and online interactions, people who care about that sort of thing will have to move back to in person interaction more.
The person you're physically interacting with might be using AI in their workflow; that's fine. I use AI too. I just don't want to "build a relationship" with AI. I don't care for AI "content". Art, blogs, articles, advertisements, even detestable things like sales and marketing are all forms of human relationships to me. It's fine if you wanna autogenerate it. I'm even "in the market" for autogenerated stuff, as I use AI bots too, but you can't try to sell me a 100% automated burger when I have the Fabricator 3000 too.
If I'm hungry and just want a burger, I'll get my Fabricator 3000 to generate one for me. If I'm in the mood for a human touch on food and a dining experience, I'll cook, go to a (reputable) restaurant or a friend's place who likes to cook. Maybe there is a market for you to run your Fabricator 3000 to generate a burger for me. maybe. I don't know why I'd buy it though when I can just get your prompt and feed it into my own Fabricator 3000...
I think the claim is wrong, depends on many things. In most cases AI content is better than web content, especially if you use Deep Research to ground it in multiple sources. It is a problem when content is generated to capture traffic (SEO, content feeds), but in those venues we were already neck deep in slop before 2022. If anything AI slop might improve the quality of old SEO and feed bait.
When I need to be sure about something, I check it manually in trusted sources. If I am not sure, I check all the sources I can. More recently I run deep research on 3 agents (Claude, ChatGPT and Gemini) and then compare between the reports.
As Hannah Arendt said : "The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth, and truth be defamed as lie, but that the sense by which we take our bearings in the real world - and the category of truth versus falsehood is among the mental means to this end - is being destroyed."
Trust collapse is real, I don't trust anything anymore. Take this article for instance, I don't trust it because of the random bolding. Does that mean it's AI generated? I don't know but I've seen lots of AI generated content and it has random bolding, so when I see it, I immediately don't trust it. And I don't have the time to verify anything, so whether or not this article was written by the author or AI, it's gone on the "not credible" heap for me, just because of the bolding. It's not a strong signal but it's a signal, and due to the volume of slop, I must filter on whatever signals I have to maintain any chance of finding genuine human work product. Maybe I miss something genuine and important by filtering this way, but it's the best I can do.
We're finally getting the "pure" capitalism that economists dream about. An infinite number of sellers with perfectly commoditized products, driving margins asymptotically to zero.
If you want to get ahead, you'll need to find the 1% edge and exploit it for 15 minutes until a competitor erases your lead.
It is becoming unbearable. YouTube now has "AI" slop ads for Freenow (Lyft brand in the EU) with fake cars that move without the wheels turning and "AI" "actors" that look like plastic.
This of course means that Freenow is now on the personal blacklist. People should not engage with companies who advertise with "AI" slop.
Not just ads - there is TONS of AI-generated YouTube shorts now, and the quality is so good that it's tough to always tell whether it's real as long as it's plausible (which not all of it is).
It's annoying because the whole point of a lot of this stuff is that it's real, and one can be informed, entertained or have an emotional response to it. When you distrust everything because it's maybe fake, then the fun of the internet as a window into human nature and the rest of the world just disappears.
Which suggests the next human dirty trick will be to put out AI slop 'supporting' a company and its products just to make 'em deny it was them making it :)
Wow. A new profile text for my Tinder account!
If you are not building the next paperclip optimizer, the competition already is!
nevermind if the things are people or their lives!!
That kind of trust is very hard to achieve, and even once you have it, it is very easy to lose. If you are not yet there for someone, you still need to act like you are and not squander the chance, even though they may never buy from you often enough to realize you are worth it.
Evil contains within itself the seed of its own destruction ;)
Sure, sometimes you should fight the decline. But sometimes... just shrug and let it happen. Let's just take the safety labels off some things and let problems solve themselves. Let everybody run around and do AI and SEO. Good ideas will prevail eventually; focus on those. We have no influence on the "when", but it's a matter of having stamina and hanging in there, I guess.
That assumes people have the ability to choose not to do these things, and that they can't be manipulated or coerced into doing them against their will.
If you believe that advertising, especially data-driven personalised and targeted advertising, is essentially a way of hacking someone's mind into doing things it doesn't actually want to do, then it becomes fairly obvious that it's not entirely the individual's fault.
If adverts are 'Buy Acme widgets!' they're relatively easy to resist. When the advert is 'onion2k, as a man in his 40s who writes code and enjoys video games, maybe you spend too much time on HN, and you're a bit overweight, so you should buy Acme widgets!' it calls for people to be constantly vigilant, and that's too much to expect. When people get trapped by an advert that's been designed to push all their buttons, the reasonable position is that the advertiser should take some of the responsibility for that.
Within the last year I opened an Instagram account just so I could get updates from a few small businesses I like. I have almost no experience with social media. This drove home for me just how much the "this is where their attention goes, so that's revealed preference" thing is bullshit.
You know what I want? The ability to get these updates from the handful of accounts I care about without ever seeing Instagram's algo "feed". Actually, even better would be if I could just have an RSS feed. None of that is an option. Do I sometimes pause and read one of the items in the algo feed that I have to see before I can switch over to the "following" tab? I do, of course, they're tuned to make that happen. Does that mean I want them? NO. I would turn them off if I could. My actual fucking preference is to turn them off and never see them again, no matter that they do sometimes succeed in distracting me.
Like, if you fill my house with junk food I'll get fatter from eating more junk food, but that doesn't mean I want junk food. If I did, I'd fill my house with it myself. But that's often the claim with social media, "oh, it's just showing people more of what they actually want, and it turns out that's outrage-bait crap". But that's a fucking lie bolstered by a system that removes people's ability to avoid even being presented with shit while still getting what they want.
Most ads are just manipulating me, but there are times I need the thing advertised, if only I knew it was an option.
It's fundamentally exploitation on a population scale, and I believe it's immoral. But because it's also massively lucrative, capitalism allows us to ignore all moral questions and place the blame on the victims, who again, are on the wrong side of a massive power imbalance.
What authority are you going to complain to to "correct the massive power imbalance"? Other than God or Martians I can't see anything working, and those do not exist.
The people yearn for the casino. Gambling economy NOW! Vote kitku for president :)
PS. Please don't look at the stock market.
Wired: "Build things society needs"
> nevermind if the things are people or their lives!!
Breaking things is ok. If people are things then it's ok to break them, right? Got it. Gotta get back to my startup to apply that insight.
Larry Fink and The Money Owners.
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
Now this formula has been complicated by technological engineering taking over aspects of marketing. This may seem to be simplifying and solving problems, but in some ways it actually makes everything more difficult. Traditional marketing that focused on convincing people of solutions to problems is declining in importance. What is becoming most critical now is convincing people they can trust providers with potential solutions, and this trust is a more slippery fish than belief in the solutions themselves. That is partly because the breakdown of trust in communication channels means discussion of solutions is likely never to be heard.
Perhaps I am too optimistic...
The exact quote is: "I foresee the day where AI become so good at making a deep fake that the people who believed fake news as true will no longer think their fake news is true because they'll think their fake news was faked by AI."
The truth is, for those of us with lower IQ, it doesn't matter how critically we think, we lack the knowledge and mental dexterity to reliably arrive at a nuanced and deep understanding of the world.
You have to stop dreaming of a world where everyone can sort everything out for themselves, and instead build a world where people can reliably trust expert opinion. It's about having a high-trust society. That requires people in privileged positions to not abuse their advantage for short-term gain at the cost of alienating, and losing the trust of, the unwashed masses.
Because that's what has happened, the experts have been exploited as a social cudgel, by the psychopathic and malignant managerial class, in such an obvious and blunt way, that even those of us who are self-aware of our limitations, figure we're as likely to get it right ourselves, as to get honest and correct information from our social institutions.
There is zero chance of making everyone smart enough to navigate the world adroitly. But there is a slightly better than zero chance we could organize our society to earn their trust.
This is the biggie; especially with B2B. It's really 3 months, these days. Many companies have the lifespan of a mayfly.
AI isn't the new reason for this. It's been getting worse and worse, in the last few years, as people have been selling companies, not products, but AI will accelerate the race to the bottom. One of the things that AI has afforded is that the lowest-tier, bottom-feeding scammer can now look every bit as polished and professional as a Fortune 50 company (often, even more).
So that means that not only is the SNR dropping, the "noise" is now a lot riskier and uglier.
Made my day. So true.
Interviewer: How will humans deal with the avalanche of fake information that AI could bring?
YNH: The way humans have always dealt with fake information: by building institutions we trust to provide accurate information. This is not a new phenomenon btw.
In democracies, this is often either the government (e.g. the Bureau of Labor Statistics) or newspapers (e.g. the New York Times) or even individuals (e.g. Walter Cronkite).
In other forms of government, it becomes trust networks built on familial ties e.g. "Uncle/Aunt is the source for any good info on what's happening in the company" etc
0 - https://amzn.to/4nFuG7C
Moreover, the more political a topic, the more likely the author is trying to influence your thoughts (but not me, I promise!). I forgot who, but a historian was asked why they wouldn’t cover civil war history, and responded with something to the effect of “there’s no way to do serious work there because it’s too political right now”.
It’s also why things like calling your opponents dumb, etc is so harmful. Nobody can fully evaluate the truthfulness of your claims (due to time, intellect, etc) but if you signal “I don’t like you” they’re rightfully going to ignore you because you’re signaling you’re unlikely to be trustworthy.
Trust is hard earned and easily lost.
This, too, goes into the probability of something being right or wrong. But the problem I'm pointing out is an inconsistent epistemology. The same kind of test should be applied to any claim, and then they have to be compared. When people trust a random TikToker over the NYT, they're not applying the same test to both sides.
> It’s also why things like calling your opponents dumb, etc is so harmful.
People who don't try to have any remotely consistent mechanism for weighing the likelihood of one claim against a contradicting one are, by my definition, stupid. Whether it's helpful or harmful to call them stupid is a whole other question.
And a lot of the time, that trust is specific to a topic, one which matters to them personally. If they cannot directly verify claims, they can at least observe ways in which their source resonates with personal experience.
Call me naive, but I think education can help.
From my experience, there absolutely is. It just isn't legible to you.
The only thing I can come up with is that they do believe rigorous scholarship can arrive at answers, but sometimes those who do have the "real answers" lie to us for nefarious reasons. The problem with that is that this just moves the question elsewhere: how do you decide, in a non-arbitrary way, whether what you're being told is an intentional lie? (Never mind how you explain the mechanism of lying on a massive scale.) For example, an epistemology could say that if you can think of some motivation for a lie then it's probably a lie, except that this, too, is not applied consistently. Why would doctors lie to us more than mechanics or pilots?
Another option could be, "I believe things I'm told by people who care about me." I can understand why someone who cares about me may not want to lie to me, but what is the mechanism by which caring about someone makes you know the truth? I'm sure that everyone has had the personal experience of caring about someone else, and still advising them incorrectly, so this, too, quickly runs into contradictions.
First, show me a person who believes all of them.
Then, try asking that person.
You are trying to ask me to justify entire worldviews. That is far beyond the scope of a single HN post, and also blatantly off topic.
And I did ask such people such questions - for example, people who fly a lot yet believe "chemtrails" are poisoning us - but their answers always ended up with some arbitrary choice that isn't applied consistently. Pretty much, when forced to choose between claims A and B, they go by which of them they wish to be true, even if they would, in other situations, judge the process of arriving at one of the conclusions to be much stronger than the other. They're more than happy to explain to you that they trust vitamins because of modern scientific research, which they describe as fraudulent when it comes to vaccines.
Their epistemology is so flagrantly inconsistent that my only conclusion was that they're stupid. I'm not saying that's an innate character trait, and I think this could well be the result of a poor education.
I once went to a school that had complimentary subscriptions. The first time I sat down to read one, there was an article excoriating President Bush about Hurricane Katrina. The entire article was a glib expansion of the opinion of an “expert” who was just some history teacher who said that it was “worse than the Battle of Antietam” for America. No expertise in climate. No expertise in disaster response. No discussion of facts. “Area man says Bush sucks!” would have been just as intellectually rigorous. I put the paper back on the shelf and have never looked at one since.
Don’t get emotionally attached to content farms.
Regardless, clearly labeled opinions are standard practice in journalism. They're just not on the front page. If you saw that on the front page, then I'd need more context, because that is not common practice at NYT.
It’s simply reality, or else propaganda wouldn’t work so well.
Could you quickly summarize how and why you felt let down by the media in regards to COVID?
The key is that distrusting one side or source does not logically entail trusting another source more. If you think that the media or medical establishment is wrong, say, 45% of the time, you still have to find a source of information that is only wrong 40% of the time to prefer it.
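The arithmetic behind that is worth spelling out. A quick sketch using the parent's hypothetical numbers, and assuming (generously) that the two sources err independently:

    # Source A wrong 45% of the time, source B wrong 40% of the time.
    p_a_wrong, p_b_wrong = 0.45, 0.40

    # When they disagree on a yes/no claim, exactly one of them is right.
    a_right = (1 - p_a_wrong) * p_b_wrong  # A right, B wrong: 0.55 * 0.40 = 0.22
    b_right = p_a_wrong * (1 - p_b_wrong)  # A wrong, B right: 0.45 * 0.60 = 0.27

    print(b_right / (a_right + b_right))   # ~0.55: favor B, but only barely

Five points of accuracy buys you surprisingly little confidence in any single disagreement, which is the point: defecting to a barely-better (or worse) source gains you almost nothing.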
Except those institutions have long lost all credibility themselves.
I'll trust my doctor to give me sound medical advice and my lawyer for better insights into law. I won't trust my doctor's input on matters of law, or at least I'll be skeptical and verify thoroughly if they insist on giving that advice.
Newspapers are a special case. They like to act as the authoritative source on all matters under the sun, but they aren't. Their reporting is only as good as the sources they choose, and those sources vary wildly in quality for reasons ranging from incompetence all the way to malice, on both sides.
I trust BBC to be accurate on reporting news related to UK, and NYT on news about US. I wouldn't place much trust on BBC's opinion about matters related to the US or happenings in Africa or any other international subjects.
Transferring or extending trust earned in one area to another unrelated area is a dangerous but common mistake.
There are many equilibrium points possible as a result. Some have more trust than others. The "west" has benefited hugely from being a high trust society. The sort of place where, in the Prisoner's Dilemma matrix, both parties can get the "cooperate" payoff. It's just that right now that is changing as people exploit that trust to win by playing "defect", over and over again without consequence.
https://en.wikipedia.org/wiki/High-trust_and_low-trust_socie...
But billionaires are making and keeping ever more money than before, so it isn't a problem.
Wall Street- and financier-centric, and biased in general. Very pro-oligarchy.
The worst was their cheerleading for the Iraq war, and swallowing obvious misinformation from Colin Powell at face value.
I just reached out to my family for any trustworthy builders they've had, and struck up conversations with some of my fancier neighbors for any recommendations.
(I came to the conclusion that all builders are cowboys, and I might as well just try doing some of this myself via youtube videos)
Using the internet to buy products is not a problem for me, I know roughly the quality of what I expect to get and can return anything not up to standard. Using the internet to buy services though? Not a chance. How can you refund a service?
How do you know that? Or is it just that your bias is that cowboys are bad, and so you assume someone who dresses and acts better is better?
Now step back; I'm not asking you personally, but the general person. It is possible that you have the knowledge and skills to do the job, and so you know how to inspect it to ensure it was done right. However, the average person doesn't have those skills, and so can't tell the well-dressed person who does a bad job that looks good from the poorly dressed person who does a good job that doesn't look as good.
Our issue was water intrusion along a side wall that was flowing under our hardwoods, warping them and causing them to smell. The first contractor replaced the floor and added in an outside drain.
The drain didn't work, and the water kept intruding and the floor started to warp again.
When we got multiple highly rated contractors out, all of them explained that the drain wasn't installed correctly, that a passive drain couldn't prevent the problem at that location, and that the solution was to either add an actively pumped drain or replace the lower part of the wall with something waterproof. We ended up replacing that part of the wall, and that has fixed the issue along that wall. (We now have water intrusion somewhere else, sigh).
If anything, I was originally biased for the cowboy, as they came recommended, he and his workers were nice, and the other options seemed too expensive & drastic. Now I've learned my lesson, at least about these types of trickier housing issues.
Also, no one mentioned evaluating someone by how they're dressed - the issue was family/friend recommendations vs online reviews, and while I do take recommendations from friends and family into account, I've actually had better luck trusting online (local) reviews.
LOL
because you know the brands and trust them, to a degree
you have prior experience with them
For every standard to be met, you compromise either on cash or time.
How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems? Short term, sure. But infinite (also) implies long term.
I wish I had a really smart game theorist friend who could help me project forward into time if for nothing other than just fun.
Don't get me wrong, I'm not trying to reduce the value of "ouch, it hurts right now" stories and responses.
But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.
What's next after trust collapses? All of us just give up? What if that collapse is sooner than we thought; can we think about the fun problem now?
The real question to ask in this gold rush might be what kind of shovels we can sell to this corner of hand shakers and lunchers. A human-verifiable reputation market? Like Yelp but for "these are real people and I was able to talk to an actual human." Or diners and golf carts, if you're not into abstractions.
Would this truly be a move back? I've met people outside my social class and disposition who seem to rely quite heavily on networking this way.
Human biological limits prevent the realization of stable equilibrium at the scale of coordination necessary for larger emergent superstructures.
Humans need to figure out how to become a eusocial superorganism, because we’re past the point where individual groups don’t produce externalities that are existential to other groups/individuals.
I don’t think that’s possible, so I’m just building the machine version
I'd love to see the machine version or hear more of your thoughts about what goes into it.
https://kemendo.com/GTC.pdf
If that resonates further let me know at my un on icloud domain
You can't regress back to a being a kid just because the problems you face as an adult are too much to handle.
However this is resolved, it will not be anything like "before". Accept that fact up front.
If you try to “go back” you’ll just end up recreating the same structure but with different people in charge
Meet the new boss, same as the old boss - biological humans cannot escape this state because it’s a limit of the species.
You're assuming they can be fixed.
> But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.
I'm sure the peasants during the Holodomor also thought: "wow, what an interesting problem to solve".
I'm with that group of people. What was your point in bringing this up?
Wait, was I just trolled? If so, lol. Got me!
It's just funny, even by hand, to be writing in the infinite AI content style while lamenting the awfulness of infinite AI content while co-founding a monetization and billing system for AI agents.
Also, this is entirely hand-written ;)
Hopefully we soon move on to judging content by its quality, not by whether AI was used. Banning digital advertising would also help align incentives against mass-producing slop (which had been happening long before ChatGPT was released).
I do love the irony of someone building a tool for AI sales bots complaining that their inbox is full of AI sales slop. But I actually agree with the article’s main idea, and I think if they followed it to its logical conclusion they might decide to do something else with their time. Seems like a great time to do something that doesn’t require me to ever buy or sell SaaS products, honestly.
This is just how I write in the last few years
We might be transitioning to a world where trust has value and is earned and stored in your reputation. Clickbait is a symptom of people valuing attention over trust. Clickbait spends a percentage of their reputation by trading it for attention.
In a world of many providers, most people have not heard of any particular individual provider. This means they have no reputation to lose, so their choice to act in a reputation losing manner is easy.
Beyond a certain scale, when everyone can play that game, we end up with the problem this article describes. The content is easy but vacuous. There are far more people vying for the same number of eyeballs now.
The solution is, I believe, earned trust. Curators select items from sources they trust. The ones that do a good job become trusted curators. In a sense HackerNews is a trusted curator. Reddit is one that is losing, or has lost, trust.
AI could probably take on some of the role of that curation. In the future perhaps more so. An AI can scan the sources of an article to see if the sources make the claims that the article says they make. I doubt it can do so with sufficient accuracy to be useful right now, but I don't think that is too far off.
Perhaps the various fediverse Reddit clones had the wrong idea. Maybe they should work in a distributed fashion, where each node is a subreddit analogue operating with its own way of curation; then an upper level of curation can make a site of the groups it trusts.
This makes a multi level trust mechanism. At each level there are no rules governing behaviour. If you violate the values of a higher layer, they lose trust in you. AI could run its own curation nodes. It might be good at it or it might be terrible, it doesn't really matter. If it is consistently good, it earns trust.
I don't mind there being lots of stuff, if I can still find the good stuff.
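A minimal sketch of that multi-level trust mechanism, with made-up names and an arbitrary update rule, just to show how little machinery it needs:

    from collections import defaultdict

    class Curator:
        def __init__(self, name: str):
            self.name = name
            self.picks = []        # items this curator vouches for

        def endorse(self, item: str):
            self.picks.append(item)

    class Reader:
        def __init__(self):
            self.trust = defaultdict(lambda: 1.0)   # curator name -> weight

        def feed(self, curators):
            # Rank items by the total trust of the curators endorsing them.
            scores = defaultdict(float)
            for c in curators:
                for item in c.picks:
                    scores[item] += self.trust[c.name]
            return sorted(scores, key=scores.get, reverse=True)

        def judge(self, curator: Curator, was_good: bool):
            # Asymmetric update: trust is earned slowly and lost quickly.
            self.trust[curator.name] *= 1.1 if was_good else 0.5

A higher layer is then just a Reader whose feed is itself published as a Curator's picks, and the mechanism doesn't care whether a given curation node is a human or an AI, exactly as described above.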
I stopped accepting telephone calls before 2010. They still ring the phone.
Maybe it could lead to a resurgence of the business model where you buy a program and don’t have to get married to the company that supports it, though?
I’d love it if the business model of “buy our buggy product now, we’ll maybe patch it later” died.
you need to prove beyond a doubt that YOU are the right one to buy from, because it's so easy for 3 Stanford dropouts in a trenchcoat to make a seemingly successful business in just a few days of vibecoding.
I'm using this
The modern software market seems like a total inversion of normal human bartering and trade relationships, actually…
In Ye Olden Days, you go to the blacksmith, and buy some horseshoes. You expect the things to work, they are simple enough that you can do a cursory check and at least see if they are plausibly shaped, and then you put them on your horse and they either work or they don’t. Later you sell him some carrots, buy a pot: you have an ongoing relationship checkpointed by ongoing completed tasks. There were shitty blacksmiths and scummy farmers, but at some point you get a model of how shitty the blacksmith is and adjust your expectations appropriately (and maybe try to find somebody better when you need nails).
Ongoing contracts were the domain of specialists and somewhat fraught with risk. Big trust (and associated mechanics, reputation and prestige). Now we’re negotiating ongoing contracts for our everyday tools; it is totally bizarre.
Nit: that is not how it worked. You took your horse to the blacksmith and he (almost always he - blacksmiths benefit from testosterone even if we ignore the rampant sexism) made shoes to fit. You knew it was good because the horse could still walk (if the blacksmith messed up and put a nail into the flesh instead of the hoof, the horse wouldn't walk for a few days while it healed). In 1600 he made the shoes right there for the horse; in 1800 he bought factory-made horseshoes and adjusted them. Either way you never saw the horseshoes until they were on the horse, and your check was only that the horse could still walk.
Well, no worries. If you subscribe to the post+ service I’ll fix it in a couple years, promise.
That’s why we’re seeing so much semantic drift too. The forms of credibility survive, but the intent behind them doesn’t. The system works, but the texture that signals real humans evaporates. That’s the trust collapse: over-optimized sameness drowning out the few cues we used to rely on.
FINAL Financial hours of U.S.A. just before the 1929 crash
https://www.youtube.com/watch?v=dxiSOlvKUlA&t=1008s
The Volcker Shock: When the Fed Broke the Economy to Save the Dollar (1980)
https://www.youtube.com/watch?v=cTvgL2XtHsw
How Inflation Makes the Rich Richer
https://www.youtube.com/watch?v=WDnlYQsbQ_c
One small point I do not agree with is "Are you burning VC cash on unsustainable unit economics?". I think it's safe to conclude by now that unsustainable businesses can be kept alive for years, as long as the investors want it.
No structure, outdated stuff marked as "preview" from 2023/2024, Wikipedia-like in-depth articles about everything, but nothing for simple questions like: how do I implement a backend-for-frontend?
You find fragments and pieces of information here and there - but no guidance at all. Settings hidden behind tabs etc.
A nightmare.
No sane developer would have made such a mess, because of time constraints and bloat. You see and experience first-hand that the few gems are from the trenches, spelling mistakes and all.
Bloat for SEO, the mess for devs.
I predict a renaissance of meeting people in person.
I hope that will come to fruition.
Growing on X is so simple I’m shocked it works.
100x comments a day
10x posts a day
15x DM’s a day
1x thread a day
1x email a day
This is how you grow your presence on X.
Even if having a presence matters, how can you actually say something meaningful if you post 10 times a day? There's no way (unless you just repeat yourself). Hopefully my algorithm's just gone weird, but sadly the people I used to follow stopped posting.
It's not just that it is 0 effort, it also sucks, and it is increasingly not relevant because their agents are just scooping up stuff to reach out about and they aren't even selling something that you would need to buy.
I just wish that we could go back to the old way. There should be a cost to attempt to get a sales lead.
If it's not? Oh well, suffer. It's still better than the "average western male on a dating site" experience.
Note: I really like the metaphor. My apologies if I abused it or stretched it too far or in the wrong direction.
Just kidding, that just goes into my RL trash can.
There are a lot of people here who have spent a lot of time, money, and effort building AI products. And they may be good and worthwhile! I'm literally one of them. But you still see some people totally underestimate this public-facing trust collapse and the growing anti-AI sentiment in general.
I follow even the AI slop via Reddit RSS.
I control, however, what comes in.
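This works because Reddit still exposes RSS on most listing URLs (append .rss). A minimal sketch with the feedparser library; the subreddit names are placeholders, and the custom agent string is there because Reddit may throttle default user agents:

    import feedparser  # pip install feedparser

    FEEDS = [
        "https://www.reddit.com/r/programming/.rss",
        "https://www.reddit.com/r/selfhosted/.rss",
    ]

    for url in FEEDS:
        feed = feedparser.parse(url, agent="my-rss-reader/0.1")
        for entry in feed.entries[:5]:
            print(f"{entry.title}\n  {entry.link}")

The point is the direction of control: the reader pulls from an explicit list instead of being pushed an algorithmic feed.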
One such example was call centers. In the 2000s, moving call centers to India was all the rage for cost cutting. The customer experience was terrible, and suddenly having a US-based call center (the thing companies had just abandoned) was a feature.
I think we’ll see similar things with AI. Everyone will get flooded with AI slop. Folks will get annoyed and suddenly interacting with a real human or a real human writing original content will be a “feature” that folks flock to.
In order for any AI/ML content to have value, it must cite where the accumulated information came from. By not doing so, it is nothing more than a custom Wikipedia-esque source with the motto _Trust Me Bro_.
Citations, and the lack thereof, should be a simple key factor in evaluating trust. What are your sources for this idea / answer?
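As a first-pass filter, that rule is trivially checkable. A crude sketch (presence of links is obviously not proof the links support the claims; it only catches the flat "trust me" case):

    import re

    URL = re.compile(r"https?://\S+")

    def cites_sources(answer: str) -> bool:
        # No links at all -> flat "trust me" answer.
        return bool(URL.search(answer))

Checking that the cited sources actually say what the answer claims is the harder second step; the curation comment above gestures at automating that too.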
That's actually a wonderful result. Humans and their messages are not to be trusted. It's a bit sad that we had to make AI to show us that.
The last five times I've looked at something in case it was a legitimate user email, it was AI promotion from someone just like in the article.
Their only way to escalate, apart from pure volume, is to take pains to intentionally emulate the signals of someone who's a legitimate user needing help or having a complaint. Logically, if you want to pursue the adversarial nature of this farther, the AIs will have to be trained to study up and mimic the dialogue trees of legitimate users needing support, only to introduce their promotion after I've done several exchanges of seemingly legitimate support work, in the guise of a friend and happy customer. All pretend, to get to the pitch. AI's already capable of this if directed adeptly enough. You could write a script for it by asking AI for a script to do exactly this social exploit.
By then I'll be locked in a room that's also a Faraday cage, poking products through a slot in the door—and mocking my captors with the em-dashes I used back when I was one of the people THEY learned em-dashes from.
One thing about it, it's a very modern sort of dystopia!
But you can’t really even make the case to them anymore because like you said they can’t/won’t even read your email.
What mostly happens is they constantly provide free publicity to existing big players whose products they will cover for free and/or will do sponsored videos with.
The only real chance you have to be covered as a small player is to hope your users aggregate to the scale where they make a request often enough that it gets noticed and you get the magical blessing from above.
Not sure what my point is other than it kinda sucks. But it is what it is.
https://www.theatlantic.com/technology/archive/2025/08/youtu...
https://www.nbcnews.com/tech/tech-news/youtube-dismisses-cre...