I love the quote from Gregory Terzian, one of the Servo maintainers:
> "So I agree this isn't just wiring up of dependencies, and neither is it copied from existing implementations: it's a uniquely bad design that could never support anything resembling a real-world web engine."
It hurts that it wasn't framed as an "experiment" or "look, we wanted to see how far AI can go - it kinda failed to clear the bar." As it stands, it's grist to the mill for all the CEOs out there who have no clue about coding but wonder why their people are so expensive when "AI can do it! D'oh!"
I’m super impressed by how "zillions of lines of code" got re-branded as a reasonable metric by which to measure code, just because it sounds impressive to laypeople and incidentally happens to be the only thing LLMs are good at optimizing.
It really is insane. I really thought we had made progress stamping out the idea that more LOC == better software, and this just flies in the face of that.
I was in a meeting recently where a director lauded Claude for writing "tens of thousands of lines of code in a day", as if that metric in and of itself was worth something. And don't even get me started on "What percentage of your code is written by AI?"
I completely agree. The issue is that some misconceptions just never go away. People were talking about how bad lines of code is as a metric in the 1980s [1]. Its persistence as a measure of productivity only shows to me that people feel some deep-seated need to measure developer productivity. They would rather have a bad but readily-available metric than no measure of productivity.
[1] https://folklore.org/Negative_2000_Lines_Of_Code.html
That's what got me. I've never written a browser from scratch, but just being told that it took millions of lines of code made me feel like something was wrong. Maybe somehow that's what it takes? But I've worked in massive monorepos that didn't have 3 million lines of code and were able to facilitate an entire business's function.
To be fair, it easily takes 3 million lines of code to make a browser from scratch. Firefox and Chrome both have around ten times that(!) – presumably including tests etc. But if the browser is in large part third-party libraries glued together, that definitely shouldn't take 3 million lines.
FastRender isn't "in large part third-party libraries glued together". The only dependency that fits that bill in my opinion is Taffy for CSS grid and flexbox layout.
The rest is stuff like HarfBuzz for text shaping, which is an entirely cromulent dependency for a project like this.
KPIs are slowly destroying the American economy. The idea that everything can be easily and meaningfully measured with simple metrics by laypeople is a myth propagated by overpaid business consultants. It's absurd and facetious. Every attempt to do so is degrading and counter-productive.
These 'metrics' are deliberately meant to trick investors into throwing money into hyped up inflated companies for secondary share sales because it sounds like progress.
The reality was that the AI made an uncompilable mess, adding 100+ dependencies, including importing an entire renderer from another browser (Servo), and it took a human software engineer to clean it all up.
If you want to learn more about the Cursor project directly from the source, I conducted a 47-minute interview with Wilson Lin, the developer behind FastRender, last week.
You can watch the full video on YouTube or read my extracted highlights here: https://simonwillison.net/2026/Jan/23/fastrender/
> According to Perplexity, my AI chatbot of choice, this week‑long autonomous browser experiment consumed in the order of 10-20 trillion tokens and would have cost several million dollars at then‑current list prices for frontier models.
Don't publish things like that. At the very least link to a transcript, but this is a very non-credible way of reporting those numbers.
That implies a throughput of around 16 million tokens per second. Since coding agent loops are inherently sequential—you have to wait for the inference to finish before the next step—that volume seems architecturally impossible. You're bound by latency, not just cost.
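For reference, the back-of-envelope math behind that figure (a rough sketch, assuming the lower bound of 10 trillion tokens and a full seven days of wall-clock time):

    # sanity check on the claimed token volume (assumed figures, not from the article)
    tokens = 10e12               # lower end of the quoted 10-20 trillion tokens
    seconds = 7 * 24 * 60 * 60   # one week of wall-clock time = 604,800 s
    print(tokens / seconds)      # ~16.5 million tokens per second, sustained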
If I were to spend a trillion tokens on a barely working browser, I would have started with the source code of Sciter [0] instead. I really like the premise of an Electron alternative that compiles to a 5MB binary, with a custom data store based on DyBASE [1] built into the front-end JavaScript so you can just persist any object you create. I was ready to build software on top of it but couldn't get the basic windows tutorial to work.
[0] https://sciter.com/
[1] http://www.garret.ru/dybase.html
You would think a CEO with a product that caters to developers would know that everyone was going to clone the repo and check his work. He just squandered a whole lot of credibility.
I don't think the point was to say "look, AI can just take care of writing a browser now". I think it was to show just how far the tools have come. It's not meant to be production quality, it's meant to be an impressive demo of the state of AI coding. Showing how far it can be taken without completely falling over.
EDIT: I retract my claim. I didn't realize this had Servo as a dependency.
This is entirely too charitable. Basically all this proves is that the agent could run in a loop for a week or so; did anyone doubt that?
They marketed it as if we were really close to having agents that could build a browser on their own. They rightly deserve the blowback.
This is an issue that is very important because of how much money is being thrown at it, and that affects everyone, not just the "stakeholders". At some point, if it does become true that you can ask an agent to build a browser and it actually does, that is very significant.
At this point in time I personally can't predict whether that will happen or not, but the consequences of it happening seem pretty drastic.
Yeah, but starting with a codebase that is (at least approaching) production quality and then mangling it into something that's very far from production quality... isn't very impressive.
> "So I agree this isn't just wiring up of dependencies, and neither is it copied from existing implementations: it's a uniquely bad design that could never support anything resembling a real-world web engine."
It hurts, that it wasn't framed as an "Experiment" or "Look, we wanted to see how far AI can go - kinda failed the bar." Like it is, it pours water on the mills of all CEOs out there, that have no clue about coding, but wonder why their people are so expensive when: "AI can do it! D'oh!"
I was in a meeting recently where a director lauded Claude for writing "tens of thousands of lines of code in a day", as if that metric in and of itself was worth something. And don't even get me started on "What percentage of your code is written by AI?"