This might be great and all, but I'm still miffed at how limited search in AI Studio is. You can only search the titles of your conversations, not anything inside them. On top of that, they messed with the scrolling, so Ctrl+F doesn't work reliably either.
It's incredible how far behind Gemini has gotten, both the product and the model. Even the ChatGPT plugin for Google Sheets blows away the native Gemini integration.
Everyone thought Google was pulling ahead with Gemini 3. For a minute there they had the best language model, image model, AND video model in the world. But it's like they decided to pull over for a nap while OpenAI and Anthropic flew by.
Maybe they've decided they don't want to play the same game as OpenAI and Anthropic? They're much better positioned for the high volume AI work that's likely to be where the money is made, with calls to APIs doing routine things for all the businesses of the world. They're also the only big US player that has an open model that you can build on. I don't think vibe coding or the most cutting edge capabilities are what will determine profit from AI.
> They're much better positioned for the high volume AI work that's likely to be where the money is made, with calls to APIs doing routine things for all the businesses of the world
How, exactly, are they currently conquering the enterprise world with their models? What do you think Anthropic is doing?
Their latest proper model is a year old; they have no moat and no enterprise commitment.
Your comment would make sense if they had actual success in the enterprise market and actual products in that area, but they don't.
They had a brief sprint, caught up, and then dropped the ball again.
Their only current moat is their TPUs, and the fact that
1. The whole (successful) LLM world is screaming for capacity
2. They have excess capacity to rent out, just like Grok
> How, exactly, are they currently conquering the enterprise world with their models?
I didn't say they were conquering the enterprise world. I said they are better positioned for the work that will be profitable in the future. Winning will mean being "good enough" for things like routine interactions with customers at the lowest cost to the business, and having customers fine tune your models using your hardware.
> What do you think Anthropic is doing?
Aside from being arrogant jerks that don't care about pissing off their customers, they're positioning themselves as the highest price provider for the highest end work. There will be a market for that, and maybe Anthropic will survive, but Google looks to me like they have a shot at being the profitable AI company.
I have the opposite experience: Gemini (even the Flash models) is the only useful model for my reverse-engineering use case. My hunch is that Google uses its free access to the entire Google search index to train on niche non-English community websites, more frequently and in a more "relevant" manner, which ultimately gives these models the most up-to-date info for this particular kind of work. Every other model is either 10 years out of date with its answers or just hallucinates like crazy.
> for my reverse engineering related use case [...] Every other model is just either 10 years outdated with their answers
I've mostly been doing reverse engineering with Codex, mostly related to games, and not once has the training-data cutoff date been in the way. The most useful part is handing it a binary or a directory and letting it prod at it until it finds the answer you're looking for. I don't even have web search enabled, and sometimes it takes 30-40 minutes, but I never saw it fail to find the answer because its training data was a couple of years old.
3.1-pro is still very capable, and the API is competitively priced against e.g. Anthropic; they just can't seem to figure out RLHF and the harness. It needs a lot of guiding: by default it tends to be lazy and sticks poorly to instructions.
It just feels like many Google products, really: they are capable of amazing things, it's just that nobody there seems to care. I would guess they are optimizing more for internal use than for their vast user base.
It's still the best option for uptime and for document analysis (on a cost basis), and Google is less likely to suffer a significant cybersecurity breach than a less established company. They'll be fine as long as they stay in the game; even if they never have a Ferrari again, plenty of people buy Toyotas.
My non technical wife knows both ChatGPT and Anthropic (admittedly, because of me) but doesn’t know Gemini. This is amazing to me.
Surely she has seen Gemini in Google search, but even her use of that is plummeting.
Google has so much revenue that they’ll be around for a long time. But I feel they are fumbling the opportunity with AI. Even in corporate, where we have Gemini. The conversation is fully around Claude. No one talks about Gemini.
OpenAI and Anthropic have no moat. DeepSeek is a drop-in replacement that comes really close in performance at 7.5-20% of the cost, and that cost will keep getting pushed down by the Chinese labs. Bizarrely enough, their models are also more secure to use because they're open-weights.
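The "drop-in replacement" point rests on most of these providers exposing OpenAI-compatible chat endpoints, so switching often amounts to changing a base URL and a model name. A minimal sketch of that idea; the endpoint URLs and model names here are illustrative assumptions, so check each provider's docs before relying on them:

```python
# Sketch: swapping providers behind an OpenAI-style chat API usually
# means changing only the base URL and model name. The URLs and model
# names below are illustrative, not verified endpoints.

def chat_request(provider: str, prompt: str) -> dict:
    """Build the HTTP request pieces for an OpenAI-compatible chat API."""
    providers = {
        "openai":   ("https://api.openai.com/v1", "gpt-4o"),
        "deepseek": ("https://api.deepseek.com/v1", "deepseek-chat"),
    }
    base_url, model = providers[provider]
    return {
        "url": f"{base_url}/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The request body shape is identical across providers; only the URL
# and model name differ, which is what makes the swap "drop-in".
a = chat_request("openai", "hi")
b = chat_request("deepseek", "hi")
assert a["json"]["messages"] == b["json"]["messages"]
```

Whether the swap is truly drop-in in practice still depends on matching behavior (tool calling, token limits, response quirks), not just the wire format.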
OpenAI and Anthropic are going to get crushed long-term, and their investors are going to take a horrendous haircut.
On the other hand, Google and Microsoft already have the users (and lock-in). They just need to funnel them into Gemini and CoPilot.
Reports of the death of Google Search have been greatly exaggerated.
If you believe all the reports on HN about everyone's non-technical wives and grandmas, you'd have a hard time explaining the all-time highs in global usage and revenue from Google Search.
I agree with you that Claude 4.7 Opus is better than Gemini 3.1 Pro, but it's also a lot more expensive.
For my applications, I can't find better price-performance than Gemini 3.0 Flash. And it hasn't even been upgraded to 3.1 yet.
I suspect Google's target is price-performance and not just raw performance, which is how they can serve LLM responses at Google Search scale and still set an all-time record for quarterly earnings of any public company ever.
Frontier model capabilities leapfrog each other every few months, and Google I/O is in ten days, so I expect the leaderboard will change again soon.
Unfortunately, I think Google is in the process of killing the golden goose. I visit so few unfamiliar websites now, and I primarily rely on "AI mode" to answer my specific question rather than sifting through a handful of possibly accurate pages. How long can that go on before those sites simply no longer exist, and the source of that knowledge, or of new knowledge, evaporates? That model doesn't seem sustainable long term.
Honestly, I think the SEO virus killed that golden goose long before the first AI chatbot. If we still had good search taking us to sane websites, ChatGPT might well never have been a thing. I was posting (including on HN) about the vulnerability of Google's search business years before AI chat. It just happens to be the thing that filled the gap when usable search disappeared.
I just cancelled my Gemini subscription yesterday. I have a big private fork of OpenCode, and I did it the wrong way to start with, so I couldn't pull from upstream.
So I put together a plan for refactoring it, step by step, with tests, etc. After literally 8 solid days of fighting with Gemini 3 Pro, I still couldn't pull it off.
I gave GPT 5.5 a chance with the same prompt, plans, and repo. I'm not sure how long it took, but when I checked in on it a few hours later it was done. All tests passed, everything exactly how I'd asked, and better (it made some improvements).
I've come across a few weird search issues like this with Google lately. Entire company built on the best search engine ever created; can't do search properly in their apps.
Yeah, it's surprising; Claude Desktop has had project files for ages, which are chunked/indexed and automatically injected into your context based on the topic.
You’d think this would be fairly obvious for Google to do, but it’s probably an organizational problem rather than a technical one.
The search in the Gemini web app is so embarrassingly bad that I get the impression nobody of importance at Google actually uses it; otherwise they would have fixed it long ago.
It's a striking irony that the world's leader in search is taking so much heat for poor search functionality and UX within its own flagship AI products.
One of Google's core problems is internal silos of talent. The search team has likely never interacted with the Gemini app team, or perhaps even with the Gemini app.
For all intents and purposes Google Gemini is a totally separate company from Google search.
That tells you everything.
Teams do cross-collaborate, but it has to be for specific projects with specific people.
How much would you pay to have this as yours forever, running locally, GDPR and HIPAA compliant, without the headache of privacy issues or subscriptions?
That's what we offer with HugstonOne, and we did it before Google. Multimodal, lightning-fast RAG, terabytes not just kilobytes :)
All you need is a laptop with 32 GB of RAM and HugstonOne; it's not rocket science.