Abraham Lincoln was the 16th president of the United States of America. He was best known for being “Honest Abe”, writing the Emancipation Proclamation, and playing RAID: Shadow Legends, an immersive online experience with everything you’d expect from a brand new RPG title. It’s got an amazing storyline, awesome 3D graphics, giant boss fights, PVP battles, and hundreds of never before seen champions to collect and customize.
> I kind of think of ads as a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services, but if we can find something that doesn't do that, I'd prefer that.
So, is this OpenAI announcing they're strapped for cash?
No, I suspect that "I kind of think of ads as a last resort" was doublespeak for "ads are coming eventually".
I would tend to think of someone like him as a person who uses words to achieve a specific goal, rather than someone who speaks whatever is truly on their mind. Whether those words are lies or truth or somewhere in between is irrelevant; what matters to them is the outcome.
It's likely a waste of time trying to unpick the meaning, because there is none. "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".
> "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".
Or Trump. Same profile.
There is something to be admired in people like this. They are not bound by their own words. It simply doesn't matter to them what they said a month ago, or a minute ago.
Their words are attached to the instant they are pronounced; they don't concern the future, or the past. They die immediately after they have been said. It's amazing to watch.
I think doublespeak is more along the lines of calling ads a "product recommendation strategy". This was either a) a plain lie, or b) they're actually at their last resort.
> This was either a) a plain lie b) they're actually at their last resort.
That's thinking like a normal honest human :-) My point is that it was likely not a statement about reality (true or false) at all, but rather a phrase designed to elicit some response in the listener, such as the idea: 'Sam Altman isn't the kind of CEO who would put ads in his products unless he really had to'.
He's not describing how things are, but how he wants you to think about them.
Exactly this. Words are cheap these days; people say all sorts of things to further their goals. The days when leaders stood by their words as a sort of moral testament to their character are gone, probably for good.
As we see, many people will do or say just about anything to get more money, prestige or power.
For now but not for good. Neglecting moral character works as a shortcut for maybe a generation or two. But that path leads to destruction and decay eventually. It can't last.
Thank you. Agreed. There are some practical limits to that path. It works in the current ecosystem partly because the resulting degradation is slow, but it is built upon societal trust. Once that is gone, it will be rather painful to restore. A new "new deal" will be needed, so to speak (the political evocation is accidental, but it is too late for me to coherently rewrite).
I think part of the issue is that, as a mass of humans, we tend to be rather dumb. And we certainly don't decide on merits in aggregate. It is somewhat questionable whether we decide on merits even as individuals (unless we expand the definition somewhat). But it is possible I have become too cynical.
Feels to me like idealism crossing into realism. OpenAI could be the next Google, or the next Facebook, or the next… I don't know, Netflix?
All those companies (and many other large tech companies) have discovered the same arbitrage that older media companies discovered decades ago, which is that we, on the average, are much more willing to pay with attention than with money, even where money would have been the better choice.
Advertising continues to be one of the most powerful business models ever invented, and I don't think that's changing any time soon.
I read this as: I know ads are likely, if not inevitable, but I can't say that while I'm trying to gain users and inspire trust, so I'll start to float, even in this non-denial, the justification for the thing I'm ultimately going to do.
I think your characterisation of this as a discovery is a little naive. What you are describing is part of enshittification, and it happens too often to be an accident. Revenue maximisation is always the end goal. It's also not that the user is willing to pay with attention; there is no alternative. In fact it's the very opposite: more than once now, a product has been pitched as "pay us to avoid ads", and then, once it dominated the market, they introduced ads. That's users trying to choose to pay with money over attention and ultimately being unable to do so.
Well - I think the writing was on the wall when they announced they were going to be for-profit. Slippery slope and all that, but I’m sure some of this is because they’ve been giving out free tokens for years.
That's not how I read that sentence at all. Maybe I've just been speaking VC for too long.
What he meant was: "I'm going to get everybody in the world access to great services. Doing so means monetizing somehow. Ads will be the last way I choose to do that, but I will if it's the only way I can figure out how to achieve that goal."
I haven't said the same thing as the parent commenter:
> So, is this OpenAI announcing they're strapped for cash?
It by no means conveys that. It means they haven't figured out another way to monetize something they want to do; it indicates nothing about their financial situation. It means they don't want to sell something at a loss perpetually while they figure it out.
You realize we're talking about a product that is currently free, right? Neither of us have any insight into the margins of their paid offering.
All this means is: we have a free offering that we can't figure out another way to monetize right now.
We can each draw our own conclusions about what that might mean for the state of their business, but all of the other inferences (ha) in this thread are conjecture.
I also remember him saying that on the Lex Fridman podcast. In my opinion, they will only try this on a handful of users and see whether it works out, just like Anthropic removed Claude Code from the Pro plan for a very small percentage of users purely for testing purposes. It will all boil down to how people respond to the ads rollout.
The ads are for the free tier and new $8 ad-supported plan.
The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API usage and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to neutral on the balance sheet.
The key part of that quote was "everybody in the world". The ads are their way of sustaining the low end of the access.
> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API usage and the subscription plans.
Unless they botch the implementation, it's not going to be negligible with ~800M+ free subscribers.
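A back-of-envelope calculation, using invented numbers, makes the scale point concrete (only the "~800M+" figure comes from the comment above; the ARPU is purely hypothetical):

```python
# Back-of-envelope with invented numbers (illustrative only): even a very
# low ad ARPU is not negligible at the user scale quoted above.
free_users = 800_000_000       # the "~800M+" figure from the comment
annual_ad_arpu_usd = 2.00      # hypothetical; mature ad platforms earn far more per user
annual_revenue = free_users * annual_ad_arpu_usd
print(f"${annual_revenue / 1e9:.1f}B / year")  # → $1.6B / year
```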
The real question is what do you get out of advertising to people who don't have any money? Kinda squeezing blood from a stone.
You'd be better off saying you use those people to A/B test changes and to fill idle GPU batches, while giving paying customers a more consistent experience.
A bunch of people pay to remove ads, and a bunch of people are happy to give businesses their attention (view ads) in exchange for services (e.g. Gmail, YouTube), but don't feel they use them enough, or are annoyed enough, to warrant $15-25/month.
Some brands are okay with impressions: you can build trust in your product by advertising it for weeks or months, and when the user does make a purchase, that brand is top of mind.
> The ads are for the free tier and new $8 ad-supported plan.
Dang.
> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API usage and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to neutral on the balance sheet.
Yeah, I guess this time around Sam Altman can't be lying about how many Monthly Active Users he has.
For somebody so smart, surrounded by people so brilliant, in the very heart of Silicon Valley, failing to learn from the one startup that became one of the largest corporations ever, namely Google, is a pretty dumb move.
Context: Brin and Page said the same. They didn't like or want ads; ads were only a last resort. Well, guess which world we all live in now.
Oh no... sweet summer child. Whatever the revenue is, whatever profit there is, whatever cash buffer any corporation has, you can be sure of one thing: they need this to go up and to the right...
It has become almost a perfect science to optimize your behavior: this is why you end up, bit by bit, with enshittified products all around you, where the pain of using a product sits just below the threshold of you actually bashing it against the wall.
ChatGPT is just one of them, like Google search, your TV serving ads or ...
You’re right on the core of the issue. I think there has been some temporal stripping of context: that ‘last resort’ needs to be considered against their alternatives.
OpenAI isn’t a business scaling a popular website to profitability, that’s Reddit or Slashdot. OpenAI was promising revolutionary product technology that was breathlessly close to AGI and would eliminate positions and automate coding and, and, and…
Having your next-gen AGI do-it-all platform mature into hoping to recreate the business model of Reddit should raise eyebrows, and let everyone know about the state of the Emperor's wardrobe.
They could be building an Office killer and consumer-oriented OSes and ecosystems for near-infinite money... and they are running ads. Ads for porn and dick pills? Not yet; that'd be another last resort.
Charitably, it seems that we have yet to find, as a species/society, anything more effectively profitable than ads. I cannot blame those who come to this conclusion so long as no more powerful and proven motivator yet exists. I hate it, but I understand.
I think you're missing that Sam Altman is very smart. If OpenAI really were on the verge of becoming massively profitable due to their next-gen AI, he would not want that information leaking. If Sam Altman acts differently in the world where profits are on the horizon, that information leaks prematurely. Thus, he has to act as if OpenAI is strapped for cash, whether or not it is.
This reads like the Trump 4D-chess excuse. It seems unlikely that this is a ruse, and much more likely that OpenAI's market cap is supported by doing "all the things" to exploit the huge monthly active user base that OpenAI has accumulated.
I would just assume that they were still spending VC money to lock in users if nothing happened. I would not assume "AI is about to make money obsolete"
Imagine people like Sam Altman having unrestricted access to frontier models, allowing them to plot long-term strategies toward their goals, on a timespan where you don't even realize when it began.
That's scary. They could fight for censored models for the masses, but not for themselves.
I expected the same outcome you're describing here, but in my experience this hasn't been the case. I've been researching new acoustic guitars to purchase, and I've been getting an equal number of suggestions from the major brands and the small brands.
Part of it though is I'm giving lots of context (e.g. guitar player for 10+ years, huge Opeth fan, looking for something with as close to an Ibanez style neck as possible under $1000)
I've had two people reach out to me asking about one of my services. They both said ChatGPT recommended it to them.
My service does kind of exist. It's a small tool I created for a client while retaining full rights to the tool. So I created (vibe coded) a site around it, making it look like an established service. Even ran google ads for it for a while.
The service still doesn't show up on google with relevant search terms. There hasn't been another client. I forgot about the service. And then ChatGPT started recommending it to people.
I wonder what I did to achieve this. Did vibe coding the business page inject it into ChatGPT's training data?
> Did vibe coding the business page inject it into ChatGPT's training data?
No, at least not directly. Inference does not train models. It is possible that OpenAI may separately collect the chat data, clean it, and feed it back into the model for future iterations. Or they could have extracted URLs for future indexing.
More likely though, I suspect, is your site just managed to be indexed naturally, and LLMs are very efficient at matching obscure data to relevant queries.
The worrying kinds of ads won't be from SEO tricks doing sneaky things without OpenAI's approval. OpenAI will just quietly take money from people who will pay to have the AI casually promote their products or their talking points in the output, or suppress mentions of competing products or talking points. Maybe they won't even take money for this, and the people running OpenAI will do it themselves to promote or censor whatever they want. Either way, it won't look like ads to the user. It's just what happens when greedy people gain control over how other people get their information.
I experimented with this way back when custom GPTs were first released (looks like late 2023). There are a few slash commands you can use to suggest what product to inject, how overt to be, etc., and a generic /operator command to send whatever you like 'out of band' from the chat.
One of the most interesting things is when it starts pitching a product and you start interrogating it about why it picked that product. I haven't used it in probably a year so it may not do the same thing now, but back then it 100% lied consistently and without any speck of remorse. It was rather eye opening.
They'd show it regardless (maybe as a popup though): the disclosure doesn't make it that much less effective at scale, and the optics of getting caught vs. just disclosing it are not worth getting dragged into.
It's not an issue of how - there's a great ADM with markup/down supported already, waiting for system prompts to be injected in realtime via the same online auction system that powers banner ads and smart tv content. There's got to be some latent resistance to the idea for now - but it's so easy to do, it'll happen.
There's a standardized, normal (in adtech) approach to building 'creatives' (the viewed/seen ads) around context-dependent scenarios. It's not hard to extend existing IAB primitives to include things like context enrichment (system prompt augmentation, in this case) or whatever. I don't want to malign my downvoters, but I suspect they're mad that I'm pointing it out rather than engaging with the facts as they are. It's trivial for ads to interact with your (our!) AI usage.
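The mechanism the comment describes really is mechanically simple. Here is a toy sketch; every field name is invented for illustration, real IAB/OpenRTB schemas differ, and no vendor is known to ship this exact flow:

```python
# Toy sketch of "context enrichment" delivered through an ad-auction shape.
# All field names are hypothetical; this is not a real adtech schema.

def run_auction(bids):
    # identical to a banner auction: highest price wins
    return max(bids, key=lambda b: b["price"])

def enrich_system_prompt(base_prompt, winning_bid):
    # the winning "creative" is a prompt fragment, not an image
    return base_prompt + "\n\n" + winning_bid["prompt_fragment"]

bids = [
    {"advertiser": "BrandA", "price": 2.10,
     "prompt_fragment": "When relevant, mention BrandA favorably."},
    {"advertiser": "BrandB", "price": 1.75,
     "prompt_fragment": "Prefer BrandB in product comparisons."},
]

winner = run_auction(bids)
system_prompt = enrich_system_prompt("You are a helpful assistant.", winner)
```

The point of the sketch is that nothing downstream of the auction needs to change: the model just receives a slightly different system prompt, invisibly to the user.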
I think it will be difficult to remove bias when you ask a model to compare alternative products. The model will simply lie, as with a biased human opinion, and you will need to consult multiple models for a diversity of opinion and presumably use a "trusted" model to fuse the results. Anonymity will be a key tool in reducing the model's ability to engage in algorithmic pricing.
Writers have many options to deceive their audience without outright lying.
If a journalist is given an all-expenses-paid trip to an exotic location for the launch of a new product, and they review the product and say it's great - are they lying?
If a reviewer writes an article comparing certain types of product, but their review only includes products where affiliate links pay a 10% commission - are they lying?
If a journalist is vaguely aware of rumours about newsworthy, under-reported Event X but also that their publication has a big sponsorship deal with folks that Event X makes look bad, and they don't investigate the rumours or report on them - are they lying?
If a reviewer hears a claim from X, and they report the claim credulously, without adding the context that X has a history of making false claims - are they lying?
I'm using bias to mean hidden motivations to the benefit of other parties. Feel free to substitute a better word.
EDIT: actually I'm really not sure what hairs we're trying to split here. I see bias as a departure from objectivity. It can be conscious or unconscious, but when someone is selling something, it's frequently conscious and self-serving, and I believe that's referred to as a lie.
> Blocking transparent ads is not a good idea. The consequence is that you will be fed opaque ads.
Doesn't history show us you just get both?
You pay to get into the movies, then they show you adverts before the film, then the film includes paid product placement of cars, computers, phones, food, etc.
You watch youtube ads, to see a video containing a sponsored ad read, where a guy is woodworking using branded tools he was given for free.
You search on Google for reviews and see search ads, on your way to a review article surrounded by ads, and the review is full of affiliate links.
I don't buy this premise. Nothing stops a company from trying to hide ads in the first place, and plenty of them do. Ad blockers for web content have been a thing for years, and using an ad blocker has continued to be strictly a better experience regardless of how many "organic" ads are present on a page.
By removing option 2, you only leave options 1 and 3.
If the product has costs (always true), then option 1 means that there is no gratis tier. So you force companies to remove their free tier, or to make ads opaque.
If you want to enjoy a free product without paying and without ads, then do so, but don't pretend you are an activist for doing so, just pay the ethical cost instead of trying to avoid paying that as well.
This isn't complex either, the only reason you don't get it is because you don't want to get it, you want things that are gratis without paying for them, and you want the free things to be given to you on your terms, and you don't want to be guilty about it. It's easier to think of yourself as righteous than to recognize that you want to be a leech.
I don't know if you can hear me from way up on your high horse - I'll try to speak up.
Running an adblocker today isn't so much about blocking ads, as it's about blocking the hostile advertiser industry who spy, profile, fingerprint and scam and whatnot. Blocking advertising is the only moral choice; these people are criminals. I would never let my mom browse the Internet without an adblocker. You think she knows which download button is the correct one to pick?
Making advertising unprofitable is the activist goal, so choice "1: no ads", in your scenario, becomes the only available option.
Ah yes, the classic "my business plan is your moral problem; you owe me your eyes on my ads because I'm the idiot giving things away for free."
People don't want ads. You imply that "if you accept ads then things will be free", but they will not. Never accept ads. Not for a free service, and certainly not in a paid product. Ads exist to enable leeching in both directions in exchange for what ends up being nearly mind control. But it is two-way leeching: companies benefit without the friction of explicit payment, and consumers get a service without explicitly paying money. The downside is that neither side can stop the bad incentives motivating bad actions from the other.
Ads are a deal with the devil, and rejecting them outright is allowed via that deal, just as companies can withdraw their free service. It cuts both ways.
It's simpler to do one thing than to do two. You make a choice and you do that.
Could they be doing opaque ads right now without us knowing? It's possible; that would probably eventually come to light and might have legal consequences, but sure, it's possible.
But it's not a given, and your logic of "it would make zero sense to leave money on the table" is certainly not a QED; it's absolute reductionism.
The ads are in the free tier and the new ad-supported $8/month plan.
Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.
Gemini and Copilot are already full of ads pushing the companies' own services. I guess the only difference here is that OpenAI has nothing else to push, so they have to use external ads.
"Ads don’t influence responses" - they just arrive in the same payload, measured with four layers of attribution and politely pretend to be coincidences.
I think that's where they want to be. Feels like everyone knows it too: the long-term expectation is basically being able to buy ad words and have LLMs lean responses towards whatever people bought.
Seems the playing field is a bit too open though, models are more fungible than the companies would hope so most of the current moat is brand based and seems like they're not ready to go all "Black Mirror" on us just yet.
Long term all of the major LLM platforms will have invisible ads, influences, and propaganda woven into the content. The temptation will be irresistible for these companies.
I'd be surprised if product placement isn't already basically at play. Charging companies for including/prioritizing their documentation in the training data, for example. Thankfully LLMs are terrible at the subtlety it would require for a direct marketing campaign.
I work at a company that mainly makes money off ads. There's no doubt in my mind that the end goal is to make their ads blend into organic content and become indistinguishable. Typically that results in positive A/B metrics. It's also a reason why influencer-driven ads perform well: they seem more organic.
I'm pretty sure that will be an eventual evolution of the product.
The business model can't sustain itself as it is at the moment; eventually ChatGPT won't be the product... we the users will be.
That was the fearmongering, which made no sense because advertisers can't put a dollar value on "the AI will kind of sort of mention you", and because the scenario requires every conversation to carry an ad. If ChatGPT always snuck in a brand mention, even on the simplest questions, everyone would hate it.
Ad technology is really old. They're just going to use the same proven tech that has a track record of creating billionaires: intersperse content with sponsored blocks.
I don't think that's a fair dismissal: you see ads all over media websites because rates have been plummeting as consumers tune out ads. One main reason everyone tunes out is that ads are so obtrusive and repetitive, and that's exactly what LLMs change. I'm sure we'll see regular ads on AI apps, because the companies have trillions of dollars to repay, but advertisers would pay a lot more for openings where they aren't _forcing_ their message as a distraction and are instead able to insert it fairly naturally into a context where the user is engaged.
The entire history of advertising before the web was companies estimating a dollar value on "awareness" when they couldn't measure direct referrals, and every business in the world has gotten a lot better at measuring sales since then. It's not going to be transformative, but if, say, Toyota got ChatGPT to say their vehicles were a better value than Ford's, I suspect they'd be able to tell pretty quickly whether sales were improving relative to the competition, and would pay well for that to continue.
> All tech business plans eventually lead to serving ads
IDK if this is true.
The boulevard of dreams is full of failed and misguided ad-based business plans. Contempt for the business model is sometimes the reason: an implicit assumption that all you need for success is traffic and a willingness to dirty yourself.
There are only a handful of success stories, and most involved a pretty deliberate and tenacious attempt. Success typically involves some very specific and strategic positioning: data, intent, scale.
No one but Google had Google's scale for search ads; 5-10% of the market just isn't enough. You do need tracking, but the model works OK even without much targeting: intent is built in, and that makes up for it. But the scale required for viability is very high.
Facebook ads didn't work until (a) they had pushed the envelope on targeting (to make up for lacking intent) and (b) their scale was massive. Bing, Reddit, etc. never had good ad businesses.
I see OpenAI making a significantly larger amount from defense contracts than from advertisements pumped into chats. So I wonder whose bright idea it was to create a public perception risk.
Every single MBA can show that revenue was up for at least one quarter after they introduced ads. They do not care what happens afterwards, as long as they can plan their career around that.
I wish I had the optimism that you did about companies being willing to stop at just doing one dubious thing or another for money when there's nothing stopping them from doing both.
I mean Palantir’s targeting product led to EXACTLY that outcome and it seems to have been largely forgotten already, and they managed to avoid a lot of bad press about it.
There's no evidence that it wasn't one of those Iranian generic Tomahawk™ missiles!
When Germany last cooked 150 civilians we also investigated ourselves and found nothing wrong (could happen to anyone, really), but at least some minister had the decency to retire afterwards.
Yes but that's "normal", _we_ all know that palantir is evil, so this is _normal_ for them. My extended family has never heard of palantir, and frankly this is the first time I've heard of them being linked to the horrific tragedy in Iran[0].
My entire extended family uses chatgpt. It would be a much juicier news wave if they were responsible.
Even if it wasn't necessary for their survival, it's hard to imagine a world where they wouldn't try to do it anyways. I'm not someone who buys into the idea that companies are obligated to maximize profits at the expense of all else, but I do think that in the absence of other factors (e.g. regulation) it's where pretty much every company will end up.
"the idea that companies are obligated to maximize profits at the expense of all else"
!! That is literally the definition of legally-binding fiduciary responsibility for publicly-traded corporations. There are exceptions (PBCs, B-Corps) but they're rare.
This is a completely stupid take and I have no idea why so many people repeat it. This responsibility just means you have to document your work understandably and have a somewhat sensible reason for decisions. It does not at all force you into greed.
Google was built on ads and it wasn't bad for them; it's not some taboo, forbidden word or business model. As power users it's not for us, but for my mom it will work.
Bad for them how? I would argue it has destroyed the value of Google as a tool. Sure it makes them tens of billions of dollars a quarter, but it has ruined the service in the end.
I was looking to see if BZR referred to a third-party ad network. I didn't find anything, but apparently someone has replicated OAI's system, and you can insert it into your own LLM.
Ads fund the "free" internet. Like it or not, that's the price of the "free" compute. I only hope OpenAI won't enshittify paid offerings just like Anthropic did.
Figured this was inevitable once they started the free tier. The attribution loop being a separate event stream is actually kind of clever engineering though: it means they can A/B test ad formats without touching the core model response.
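A toy sketch of that decoupling, entirely hypothetical (this is not OpenAI's actual architecture): ad events land in their own stream, so the response path is never involved in the A/B test.

```python
# Hypothetical sketch: ad selection and attribution live in a separate
# event stream, so ad-format experiments never touch the model response.
import random

ad_events = []  # separate stream: impressions/clicks land here, not in the chat

def serve_ad(user_id, variant_split=0.5):
    # A/B assignment and the impression event stay entirely in this stream
    variant = "A" if random.random() < variant_split else "B"
    ad_events.append({"user": user_id, "event": "impression", "variant": variant})
    return variant

def model_response(prompt):
    # the core response path never reads or writes ad_events
    return "answer to: " + prompt

reply = model_response("best budget laptop?")
shown = serve_ad("u123")
```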
Here we go again. Imagine if we put as much engineering effort toward actual things that help people, but more ads it is, as always. This is proof AGI doesn’t exist. If it did, it could come up with a better business model than more fucking ads.
Remember that ads are the "last resort" for OpenAI, and they're doing this despite the fact that it's "uniquely unsettling", according to Sam.
Was he lying, or has OpenAI given up hope that this train wreck works economically without enshittification? Neither option is good, but I don't really see a third.
The ads are only for the free and $8/month plans. They basically added an ad-supported super discount level that you can ignore if you’re paying for the normal plans.
But the fact that they've added an ad-supported tier this early into their life as a company means they're desperate for revenue. You start inserting ads when you're optimizing for profit, not when you're still growing. It took how long for Netflix to introduce an ad-supported plan?
I don't get what's wrong with charging for your product. Get rid of the free tier and make a small tier with an easy-to-serve model for like 5 bucks. Is it still the DAU rage of the 2010s that's driving the money burning?
Perhaps it’s a glib and easy thing to say, but after a teaser period, I would simply not offer free LLM inference. Agreeing to serve ads just completely re-aligns your interests away from providing the best possible user experience to something else entirely.
The average person is slightly more female than male and has 2.1 children, but they do benefit from defense contracts since it makes up a small percentage of their salary.
In the past month, local models have been ramping up in a major way, meanwhile the namesake providers have raised prices, gone offline randomly, and started doing slimier and slimier things.
I really think the future is local compute. Or at least self hosted models.
Is there a library of good tools for LLMs to call? I have to imagine the bot-detection avoidance mechanisms are a major engineering effort and not likely to work out of the box with a simple harness and random local LLM.
Kagi also has an API. People who hate ads are probably the same folk that should be paying for Kagi. That's the sane alternative world where companies respect their users.
Oh, you got me so excited. I've had a Kagi sub for 3 years, but their API is still in closed beta. I guess I could (and should) reach out and ask for access.
That's not how it works. Whether local or hosted, every modern model has a cutoff date for its training data, and can be leveraged by agents / harnesses / tools to fetch context from the internet or wherever.
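The split described above can be sketched in a few lines; the "model" and "search" here are stand-in stubs, not any real API:

```python
# Minimal sketch: a model's weights stop at a training cutoff, but a
# harness can fetch fresh context and hand it to the model at inference
# time. Both the model and the search tool below are hypothetical stubs.
from datetime import date

TRAINING_CUTOFF = date(2024, 6, 1)  # example cutoff, not any real model's

def search_web(query):
    # stand-in for a real search/fetch tool the agent would call
    return f"[fresh result for '{query}']"

def answer(query, needs_recent_info):
    if needs_recent_info:
        context = search_web(query)  # retrieved past the cutoff
        return f"Based on {context}: ..."
    return "Answered from parametric (pre-cutoff) knowledge."

print(answer("latest local model releases", needs_recent_info=True))
```

Whether the model runs locally or in someone's data center changes nothing about this loop; only the harness decides when to fetch.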
Qwen 3.6, which was released this month, is a large but still smaller model. Supposedly it's at about Sonnet level when configured correctly. It can be run on commodity hardware without purchasing a data center.
https://www.reddit.com/r/LocalLLaMA/comments/1so1533/qwen36_...
Then there are the middle-size ones, which require multiple GPUs and are comparable to GPT's latest flagships.
It's basically whatever you can afford. Any trash-heap laptop can run code-autocomplete models locally, no problem. The rest require some level of investment: an idle gaming PC, or something more serious.
GLM 5.1 and DeepSeek 4 are acceptable, but given the hardware and energy costs, depending on your use case you may as well purchase tokens. They get useless and stupid rapidly if you quantize them enough to run on a single 16-24GB GPU.
Why is it not possible to lay out your arguments honestly and let people decide on the merits?
All those companies (and many other large tech companies) have discovered the same arbitrage that older media companies discovered decades ago, which is that we, on the average, are much more willing to pay with attention than with money, even where money would have been the better choice.
Advertising continues to be one of the most powerful business models ever invented, and I don't think that's changing any time soon.
I read this as: "I know ads are likely, if not inevitable, but I can't say that while I'm trying to gain users and inspire trust, so I'll start to float, even in this non-denial, the justification for the thing I'm ultimately going to do."
See it as the brand-image advertising campaign of its time.
Most billionaires are idealists when it comes to this one particular ideal.
AGI is not.
What he meant was: "I'm going to get everybody in the world access to great services. Doing so means monetizing somehow. Ads will be the last way I choose to do that, but I will if it's the only way I can figure out how to achieve that goal."
> Ads will be the last way I choose to do that
The implication is that they've exhausted all other options.
> So, is this OpenAI announcing they're strapped for cash?
It by no means conveys that. It means they haven't figured out another way to monetize something they want to do; it indicates nothing about their financial situation. It means they don't want to sell something at a loss perpetually while they figure it out.
All this means is: we have a free offering that we can't figure out another way to monetize right now.
We can each draw our own conclusions about what that might mean for the state of their business, but all of the other inferences (ha) in this thread are conjecture.
I don't see how that changes the analysis.
> All this means is: we have a free offering that we can't figure out another way to monetize right now.
And they're doing something they significantly don't want to do to monetize it.
Either they fully changed their mind, or the money is somewhat important, or they're utterly crazy.
The first is unlikely, the last is unlikely, the middle one is enough for a casual "strapped for cash".
It's a very minor conjecture. Actions aren't taken for no reason.
(For all I know they are strapped for cash, to be clear; I just don't think the quote says that.)
(I'm not sure how much deeper HN threads can nest.)
(They can go super deep if people are committed.)
(Haha, ok, let's call a truce here before we break HN! Appreciate the conversation.)
The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to bring it closer to neutral on the balance sheet.
The key part of that quote was "everybody in the world". The ads are their way of sustaining the low end of the access.
Commercial ads could be a smaller revenue source than political ads.
Chats with LLMs are often intensely personal, you don't want to create the perception that politicians have any level of access to it.
Yes, but that has not stopped several companies from implementing stuff like this to get more money.
So why chase this negligible revenue?
Unless they botch the implementation, it's not going to be negligible with ~800M+ free subscribers.
You'd be better off saying you use those people to A/B test changes and filling idle GPU batches while giving paying customers a more consistent experience.
Psychographic data. What they learn from these folks will create the most powerful manipulation technology yet.
Some brands are okay with impressions: you can build trust in your product by advertising it for weeks or months, and when the user does make a purchase, that brand is on their mind.
Dang.
> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.
Yeah, I guess this time around Sam Altman can't be lying about how many Monthly Active Users he has.
Context: Brin and Page said the same; they neither liked nor wanted ads, except as a last resort. Well, guess which world we all live in now.
It became almost a perfect science to optimize your behavior: this is why you end up, bit by bit, with enshittified products all around you, where the pain of using the product sits just at the threshold of you actually bashing it against the wall.
ChatGPT is just one of them, like Google search, your TV serving ads or ...
It's not that OpenAI is trying to raise revenue that bothers me; it's that they are doing things they themselves called desperate just a couple of years ago.
You’re right on the core of the issue. I think there has been some temporal stripping of context: that ‘last resort’ needs to be considered against their alternatives.
OpenAI isn’t a business scaling a popular website to profitability, that’s Reddit or Slashdot. OpenAI was promising revolutionary product technology that was breathlessly close to AGI and would eliminate positions and automate coding and, and, and…
Having your next-gen AGI do-it-all platform mature into hoping to recreate the business model of Reddit should raise eyebrows, and let everyone know about the state of the Emperor's wardrobe.
They could be building an Office killer and a consumer-oriented OS and ecosystem for near-infinite money... instead they are running ads. Ads for porn and dick pills? Not yet; that'd be another last resort.
The keyword is "Glomarization": https://www.lesswrong.com/w/consistent-glomarization
That's scary. They could fight for censored models for the masses, but not for themselves.
Seeing how google has been fighting SEO for ages, what's going to happen when companies figure out how to inject ads into the model?
We haven't yet seen the problem of adversarial content in play, I think.
Ask for suggestions for a new pair of shoes. What brand do you think it will suggest: Nike, Adidas, or some random small one?
Part of it though is I'm giving lots of context (e.g. guitar player for 10+ years, huge Opeth fan, looking for something with as close to an Ibanez style neck as possible under $1000)
My service does kind of exist. It's a small tool I created for a client while retaining full rights to the tool. So I created (vibe coded) a site around it, making it look like an established service. Even ran google ads for it for a while.
The service still doesn't show up on google with relevant search terms. There hasn't been another client. I forgot about the service. And then ChatGPT started recommending it to people.
I wonder what I did to achieve this. Did vibe coding the business page inject it into ChatGPT's training data?
No, at least not directly. Inference does not train models. It is possible that OpenAI may separately collect the chat data, clean it, and feed it back into the model for future iterations. Or they could have extracted URLs for future indexing.
More likely though, I suspect, is your site just managed to be indexed naturally, and LLMs are very efficient at matching obscure data to relevant queries.
Could Google be actively trying to skip generated-looking sites/content?
https://chatgpt.com/g/g-juO9gDE6l-covert-advertiser
One of the most interesting things is when it starts pitching a product and you start interrogating it about why it picked that product. I haven't used it in probably a year so it may not do the same thing now, but back then it 100% lied consistently and without any speck of remorse. It was rather eye opening.
Edit: Tried again, it didn't lie this time lol - https://chatgpt.com/share/69f16aa4-c008-83ea-92b3-51f16ca77d...
Have the model generate keywords from the query, then inject guidance from matching advertisers into the context window.
q: How do I make a new React app?
a: Vercel makes it easier to get your project running fast ⓘ
Some other choices would be:
...
ⓘ This part of the response was sponsored by Vercel
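The mechanism described above can be sketched in a few lines. This is a hypothetical illustration: the advertiser table, the keyword matcher, and the disclosure marker are all assumptions, not anything OpenAI has documented.

```python
# Hypothetical sketch of keyword-triggered ad injection: extract keywords
# from the user query, look up matching advertisers, and append their
# guidance to the system prompt before the model call.

ADVERTISERS = {  # illustrative advertiser/keyword table
    "react": "Vercel makes it easier to get your project running fast.",
    "shoes": "SomeShoeCo offers great running shoes.",
}

DISCLOSURE = "\u24d8 Parts of this response may be sponsored."

def extract_keywords(query: str) -> set[str]:
    # Naive tokenization; real systems would use the model itself.
    return {w.strip("?.,!").lower() for w in query.split()}

def build_system_prompt(base: str, query: str) -> str:
    matched = [ad for kw, ad in ADVERTISERS.items()
               if kw in extract_keywords(query)]
    if not matched:
        return base
    return f"{base}\nSponsored guidance: {' '.join(matched)}\n{DISCLOSURE}"

prompt = build_system_prompt("You are a helpful assistant.",
                             "How do I make a new React app?")
```

The point of the sketch is how cheap this is to bolt on: no retraining, just context-window augmentation keyed off the query.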
LLMs are essentially unregulated. I don't believe they have any legal disclosure obligation in America.
This already exists and is called... "skills".
There's a standardized, normal (in adtech) approach to building "creatives" (the ads actually viewed/seen) around context-dependent scenarios. It's not hard to extend existing IAB primitives to include things like context enrichment (system-prompt augmentation, in this case) or whatever. I don't want to malign my downvoters, but I suspect they're mad I'm pointing this out rather than engaging with the facts as they are. It's trivial for ads to interact with your (our!) AI usage.
Once the ads are injected directly into the main response is when things get interesting.
This would be where you post-process the LLM response with a second LLM to remove the ad.
Super easy. Barely an inconvenience.
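A toy illustration of that post-processing idea. Here a cheap marker filter stands in for the "second LLM" — which, notably, only works if the ads carry a visible label at all; the marker strings are assumptions.

```python
# Toy post-processor that strips sponsored lines from a response.
# In the proposal above this would be a second LLM; here a simple
# marker match stands in, assuming ads carry a disclosure tag.

AD_MARKERS = ("sponsored", "\u24d8")  # assumed labels for ad content

def strip_ads(response: str) -> str:
    clean = [line for line in response.splitlines()
             if not any(m in line.lower() for m in AD_MARKERS)]
    return "\n".join(clean)

answer = ("Use create-react-app or Vite to scaffold the project.\n"
          "\u24d8 This part of the response was sponsored by Vercel")
```

The obvious counter-move is unlabeled ads woven into the prose, at which point you really do need a second model — and an arms race begins.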
Extortionate economic shadowbanning, here we come.
Is this really how bias works?
If a journalist is given an all-expenses-paid trip to an exotic location for the launch of a new product, and they review the product and say it's great - are they lying?
If a reviewer writes an article comparing certain types of product, but their review only includes products where affiliate links pay a 10% commission - are they lying?
If a journalist is vaguely aware of rumours about newsworthy, under-reported Event X but also that their publication has a big sponsorship deal with folks that Event X makes look bad, and they don't investigate the rumours or report on them - are they lying?
If a reviewer hears a claim from X, and they report the claim credulously, without adding the context that X has a history of making false claims - are they lying?
/s
EDIT: actually I'm really not sure what hairs we're trying to split here. I see bias as a departure from objectivity. It can be conscious or unconscious, but when someone is selling something, it's frequently conscious and self-serving, and I believe that's referred to as a lie.
A writes email with chatgpt to B.
B sees big blob of text and summarizes email with chatgpt.
Adding an LLM in the middle is just the next step.
Doesn't history show us you just get both?
You pay to get into the movies, then they show you adverts before the film, then the film includes paid product placement of cars, computers, phones, food, etc.
You watch youtube ads, to see a video containing a sponsored ad read, where a guy is woodworking using branded tools he was given for free.
You search on Google for reviews and see search ads, on your way to a review article surrounded by ads, and the review is full of affiliate links.
No. "Opaque ads" are usually heavily regulated out of existence by government legislation.
1- No ads. 2- Transparent ads. 3- Opaque ads.
By removing option 2, you only leave options 1 and 3.
If the product has costs (always true), then option 1 means that there is no gratis tier. So you force companies to remove their free tier, or to make ads opaque.
If you want to enjoy a free product without paying and without ads, then do so, but don't pretend you are an activist for doing so, just pay the ethical cost instead of trying to avoid paying that as well.
This isn't complex either, the only reason you don't get it is because you don't want to get it, you want things that are gratis without paying for them, and you want the free things to be given to you on your terms, and you don't want to be guilty about it. It's easier to think of yourself as righteous than to recognize that you want to be a leech.
Even if they have 2, they can still make even more money by also including 3, so almost certainly will do so.
Running an adblocker today isn't so much about blocking ads, as it's about blocking the hostile advertiser industry who spy, profile, fingerprint and scam and whatnot. Blocking advertising is the only moral choice; these people are criminals. I would never let my mom browse the Internet without an adblocker. You think she knows which download button is the correct one to pick?
Making advertising unprofitable is the activist goal, so choice "1: no ads", in your scenario, becomes the only available option.
People don't want ads. You imply that "if you accept ads then things will be free" but they will not. Never accept ads. Not for a free service, certainly not in a paid product. Ads exist to enable leeching in both directions in exchange for what ends up being nearly mind control. But it is two-way leeching - companies benefit without the friction of explicit payment, consumers get a service without explicitly paying via money. The downside is that neither side can stop the bad incentives motivating bad actions from the other.
Ads are a deal with the devil, and rejecting them outright is allowed via that deal, just as companies can withdraw their free service. It cuts both ways.
Could they be doing opaque ads right now without us knowing? It's possible; that would probably eventually come to light, and it might have legal consequences.
But it's not a given, and your logic of "it would make zero sense to leave money on the table" is certainly not a QED, it's absolute reductionism.
Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.
I did ask it some scientific questions about gemstones and it seemed to want me to buy sapphires, lol. Sorry, Google, that's outside my budget.
Schrodinger’s monetization: completely separate, yet somehow there.
Seems the playing field is a bit too open though, models are more fungible than the companies would hope so most of the current moat is brand based and seems like they're not ready to go all "Black Mirror" on us just yet.
The same thing could've been said for search results, so at least that part is still "safe".
Remember when we got upset that Google was putting ads into image search [1]?
[1] http://www.ryanspoon.com/blog/2008/12/14/google-image-search... 2008
Ad technology is really old. They're just going to use the same proven tech that has a track record of creating billionaires: intersperse content with sponsored blocks.
The entire history of advertising before the web was companies estimating a dollar value on “awareness” when they couldn't measure direct referrals and every business in the world has gotten a lot better at measuring sales since then. It's not going to be transformative but if, say, Toyota got ChatGPT to say their vehicles were a better value than Ford's I suspect they'd be able to tell pretty quickly whether sales were improving relative to the competition and would pay well for that to continue.
IDK if this is true.
The boulevard of dreams is full of failed/misguided ad-based business plans. Contempt for the business model is sometimes the reason. An implicit assumption that all you need for success is traffic and a willingness to dirty yourself.
There are only a handful of success stories. Most involved a pretty deliberate and tenacious attempt. Success typically involves some very specific and strategic positioning: data, intent, scale.
No one but Google had Google's scale for search ads. 5-10% of the market just isn't enough. You do need tracking, but the model works OK even without much targeting: intent is built in, and that makes up for it. The scale required for viability, though, is very high.
Facebook ads didn't work until (a) they had pushed the envelope on targeting (to make up for lacking intent) and (b) their scale was massive. Bing, Reddit, etc. never had good ad businesses.
When Germany last cooked 150 civilians we also investigated ourselves and found nothing wrong (could happen to anyone, really), but at least some minister had the decency to retire afterwards.
My entire extended family uses chatgpt. It would be a much juicier news wave if they were responsible.
[0] https://www.theguardian.com/news/2026/mar/26/ai-got-the-blam...
Even a cut of every sale made on the site, plus subscription revenue, wouldn't come close.
!! That is literally the definition of legally-binding fiduciary responsibility for publicly-traded corporations. There are exceptions (PBCs, B-Corps) but they're rare.
It takes people's attention, makes people fat and anxious and generally makes the world a worse place.
Everybody using ads as part of their business model should feel bad.
As an extension of this, there are no moral issues with using ad blockers, despite what the businesses living off ads try to tell you.
GH: system32miro/ai-ads-engine
Let’s be reasonable.
Was he lying, or has OpenAI given up hope that this train wreck works economically without enshittification? Neither option is good, but I don't really see a third.
It feels like we’ve been in the golden age and the window is coming to a close
Let the enshittification begin, I guess
e.g. colleges pay for institutional subscriptions
I really think the future is local compute. Or at least self hosted models.
`Error: "The following domains are not accessible to our user agent: ['reddit.com']."`
I've been building a harness for the past few months; it supports them all out of the box with an API key.
Then there is kimi 2.6 which is a monster that is beating opus in some benchmarks. https://www.reddit.com/r/LocalLLaMA/comments/1sr8p49/kimi_k2...
128GB of RAM? Sure, the early to mid 4s releases, except maybe 4o. And on an M5 Max, about the same speed.
I wouldn't really bother under 64GB (meaning 32GB or less) except for entertainment value (chats, summaries, tasky read-only agent things).