And those weren't the only tells. Right now it's cringey but I have a sinking feeling that it's in the process of becoming normal. The post is on the front page after all.
Which means people either can't tell, or don't mind.
> I'm not joking and this isn't funny. We have been trying to build distributed agent orchestrators at Google since last year. There are various options, not everyone is aligned... I gave Claude Code a description of the problem, it generated what we built last year in an hour.
I am sure that whatever work was put into actually trying to implement that was crucial in order to instruct Claude what to do. System design doesn't come by itself.
I'm pro LLM/AI, but most of the hype is pure vibes. There's no evidence, only anecdotes.
All the hype-men I follow either have a stake in it (they work for an LLM provider or have an AI startup) or post billions of examples with zero revenue.
It's interesting that the author created three SaaS products over the weekend as some sort of proof that execution is useless now. But would any company ever buy one of these? No, because for a company to buy your SaaS you need salespeople or some marketing channel, compliance and regulatory checkmarks, SSO integrations, the ability to take and implement special feature requests, SLAs, maintenance and support capacity, etc.
That's the execution part of creating a successful business and it's still entirely missing.
> I’m not exaggerating. And neither is anyone else.
> Stack Overflow, the site that defined a generation of software development, received 3,710 questions last month. That’s barely above the 3,749 it got in its first month of existence. The entire knowledge-sharing infrastructure we built our careers on is collapsing because people don’t need to ask anymore.
"Because people don't need to ask anymore."?!
Yeah, I wouldn't call it exaggerating, I think I would call it a fundamental misunderstanding.
I wanted to comment on the code examples he shared, but they're all closed source. Which is quite a decision, given the premise of the whole article (err, I mean ad) that implementations are free these days.
Dude decided to completely ignore how hostile SO became over the last years, because obviously it is AI and not "DUPLICATE", "RTFM", -100 downvotes for asking a "simple" question, questions left unanswered for years, etc.
The funniest part is that the image itself clearly and unambiguously shows the downfall starting somewhere between 2020 and 2022. ChatGPT launched in Nov 2022. So the downfall of SO couldn't have been caused by AI, at least not initially. But the blog post author is so biased that they simply ignored the image they posted themselves.
I find that LLMs are good at the 'glue code': the "here's a rather simple CRUD-like program, please tie all of the important things together in the right way" work. That was always a rather important and challenging bit of work, so having LLMs take it off our hands is valuable.
But for code where the hard part isn't making separately designed things work together but getting the actual algorithm right, LLMs still really fail in my experience. Finding the trick that takes your approach from quadratic to N log N, or even just understanding what you mean after you've found the trick yourself: I've had little luck there with LLMs.
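To make that concrete with a toy example of my own (not one from the article): counting pairs that sum to a target. The obvious version checks every pair in O(n^2); the trick is to sort once and walk two pointers inward, which is O(n log n) overall. Spotting that transformation, or even recognizing it when you describe it, is exactly where I've seen LLMs stumble:

```python
def count_pairs_quadratic(xs, target):
    # O(n^2): test every pair directly.
    n = len(xs)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if xs[i] + xs[j] == target)

def count_pairs_nlogn(xs, target):
    # O(n log n): sort once, then walk two pointers inward.
    xs = sorted(xs)
    lo, hi, count = 0, len(xs) - 1, 0
    while lo < hi:
        s = xs[lo] + xs[hi]
        if s < target:
            lo += 1
        elif s > target:
            hi -= 1
        elif xs[lo] == xs[hi]:
            # Everything in lo..hi is one value: choose any 2 of them.
            m = hi - lo + 1
            count += m * (m - 1) // 2
            break
        else:
            # Multiply the runs of equal values at both ends.
            a, b = lo, hi
            while xs[a] == xs[lo]:
                a += 1
            while xs[b] == xs[hi]:
                b -= 1
            count += (a - lo) * (hi - b)
            lo, hi = a, b
    return count

assert count_pairs_quadratic([1, 1, 2, 3, 3], 4) == 4
assert count_pairs_nlogn([1, 1, 2, 3, 3], 4) == 4
```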
I think this is mostly great, because it's the hard stuff that I have always found fun. Properly architecting these CRUD apps, and learning which out of the infinite set of ways to do it are better, was fun as a matter of craftsmanship. But that hits at a different level than implementing a cool new algorithm.
When software becomes cheap to build, a lot of strange second-order effects kick in. It’s not just that products are easier to create; sales, marketing, and every other business function can iterate faster alongside them. That speed erodes moats. As this reality sinks in, I think we’re headed for a brutal shakeout in SaaS.
People still argue that distribution is the real bottleneck now. But when the product itself is trivial to build and change, the old dynamics break down. Historically, sales was hard because you had to design and refine a sales motion around a product that evolved slowly and carried real technical risk. You couldn’t afford to pour resources into distribution before the product stabilized, because getting it wrong was expensive.
That constraint is gone. The assumptions and equations we relied on to understand SaaS no longer apply—and the industry hasn’t fully internalized what that means yet.
I think we should distinguish between nonfunctional and functional requirements. Right now, functional requirements can be implemented very fast, with the effects you mention.
But nonfunctional requirements such as reliability, performance and security are still extremely hard to get right, because they not only need code but many correct organizational decisions to be achieved.
As customers connect these nonfunctional requirements with a brand, I don't see how big SaaS players will have a problem.
For new brands, it's as hard as ever to establish trust. Maybe coding is a bit faster due to AI, but I'm not yet convinced that vibe coders are the people on whom you can build a resilient organization that achieves excellence in nonfunctional requirements.
I don't think you are correct. Take, for example, something like Workday, where companies manage their HR data. It's not only the web interface; all of the implemented IT processes have been checked to be legally correct for each country, the website is resilient to attackers, and its availability covers the customers' needs. You can't copy that without building a large org like Workday's.
Even on a technical level the interfaces with country-specific legacy software used all over the place are so badly documented the AI won't help you to shortcut these kind of integrations. There are not 10k stackoverflow posts about each piece of niche software to train from.
Microsoft is a robust business, with corporate contracts going back 40 years. There are going to be exceptions and winners, and Microsoft is probably a winner.
Quality may become the defining factor. Anybody can vibe code software, but only someone who is actually capable of programming themselves and has some understanding of what good UX looks like, how to fix bugs (often subtle and not caught by LLMs), and how to achieve high performance and low resource consumption can build high quality software.
I think developers who have an inclination towards UI/UX and a good grip on the technical side are particularly well positioned right now.
Right now, you can take the raw HTML and CSS of a competitor's website/SaaS product, give it to Claude Code, and it will rewrite your React/Next.js app to match the quality and style of the assets you gave it. You are underestimating how quickly these supposed moats have been decimated.
I’ve been using Claude and it’s decent… sometimes. It’s still capable of making rookie mistakes and it will happily let those accrue if you don’t point them out (and even then, occasionally it won’t see the problem). Personal technical familiarity with the domain still brings a lot to the table.
I’m well aware of what makes sales difficult—I’ve lived it, both in early-stage, venture-backed environments and in long, enterprise sales cycles. In SaaS, almost everything about how sales works ultimately ties back to the cost of building software. Relationships and customer trust absolutely matter, but when building SaaS becomes trivial, the underlying equations change—and many of the old assumptions stop holding.
Only a third of the moat is gone: development. Validation and infrastructure/operations are still alive and well, though LLMs make for decent system investigators.
The moat for artists hasn't actually gone down, though. Even those on Patreon whose styles have been directly trained on are keeping the same subscribers they had.
What does 'even' here mean? Patreon artists are doing fine because they're more like influencers. Artists looking for jobs in the industry (especially graphic design, but also film and game) are the ones who are in trouble.
Canva destroyed graphic design well before LLMs caught up, but UX is still (somewhat surprisingly) on the ropes.
My bet: front end devs who need mocks to build something that looks nice get crowded out by UX designers with taste as code generation moves further into "good enough" territory.
Then those designers get crowded out as taste generation moves into "good enough" territory.
Yes, the moat is disappearing. Go look at any stock index of publicly traded SaaS companies: they have all sold off 50% in the last six months. This is going to be a bloodbath.
Execution in the article simply means having faster tech operations. Execution in a startup context doesn't mean that: execution is how the business operates, spanning strategy, operations, hiring, marketing, legal handling, and product building.
Has AI made that easier? Yes. But this article focuses solely on one important yet far smaller part of a startup's operation and the success it might have. A startup (even a purely tech-based one) is not only tech product building; it's a business. And that execution incorporates all the functions of a business, not only building the tech product.
I find it extremely ironic that he said he built three of these things in a weekend and expects me to pay $100/mo for one of them. You've just stated how dead simple it is to build, so I'm certainly not paying you that; I'll go build my own instead (although I didn't need an article to tell me this).
As a matter of fact, given the broader point of this entire article and my experience with AI so far as a veteran in the industry, I now lean towards quickly building things myself rather than paying for SaaS products, unless the product is far too complex to be worth replicating.
A lot of cost of mature SaaS products come from security, scaling, expensive sales teams, etc. For me, if I have something sandboxed, not available to the public, and only powerful enough to serve only _me_ as a customer, then I don't need to pay those extra costs and I can build something a lot simpler, while still maintaining the core feature that I need.
Searching for what to solve becomes far more important than how to solve it. Which niche you serve, how underserved the problem is, how quickly you build a solution, and how fast you iterate on user feedback become the real differentiators. As a problem gains popularity, competitors will enter at an increasing pace, and the product's price will be competed down to the bare minimum. At that point, the only real advantage is to be a serial builder for deep niches: spotting them faster than others and delivering a quality product before anyone else.
I don't see much discussion on maintenance of software built by LLMs using LLMs.
We already know the hard part of software engineering is designing and implementing code that is maintainable.
Can LLMs reliably create software and maintain it transparently without introducing regressions? How do people with no knowledge of software guide LLMs to build a quality test suite that prevents regressions?
Or is the expectation that every new major release is effectively a rewrite from scratch? Even then, they'd have to maintain consistency with the UI, the database, and other existing artifacts.
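For what it's worth, the usual answer to the test-suite question is a characterization suite: pin down today's observable behavior so a regenerated implementation can be checked against it. A minimal sketch in pytest; `myapp.text.slugify` is a hypothetical stand-in, not anything from the article:

```python
# Hypothetical characterization tests: freeze current behavior so a
# regenerated implementation can be diffed against it in CI.
import pytest

from myapp.text import slugify  # hypothetical module under test

@pytest.mark.parametrize("raw, expected", [
    ("Hello, World!", "hello-world"),    # punctuation stripped
    ("  spaced   out  ", "spaced-out"),  # whitespace collapsed
])
def test_slugify_keeps_its_contract(raw, expected):
    assert slugify(raw) == expected
```

Whether a non-programmer can judge that such tests cover the right behavior is, of course, the open question.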
Execution has gotten much cheaper in a small enough problem space. Yes, we get it, it's game changing. It doesn't mean you're building products. There's more to it than just writing code.
Writing a formbuilder and saying you've replicated Typeform is like finishing a todo app and saying you've replicated Jira. Yes, in a way I guess...but there is way more to the product and that's usually where the hard parts are.
I'm now several years out of a career as a web designer, running my own retail business on Shopify. I've always had a background in working with devs and a vague idea of how to plan and spec something, but my previous job was design and writing HTML and CSS. I always wanted to be able to make small tools or fun little projects for myself, yet the other parts of a project (the JS, caching, API integration, etc.) were always beyond my skillset.
While I wouldn't say execution is necessarily "cheap" for everything, ChatGPT and Gemini recently helped me build out a little Spotify playlist generator [1] that scans my top 100 artists from the last 12 months, then generates a playlist from the bottom 50% of each artist's songs by popularity, with an option for 1 or 2 songs per artist.
Sadly, the Spotify API limits will never allow me to offer it to more than 25 people at a time. But I get so bored of their algorithm playing me the same top songs that it's a fun way to explore "lesser lights", and something I'd absolutely never have been able to build before, let alone spin up in a couple of evenings.
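For the curious, the core loop of a tool like this is small. A rough sketch using the spotipy library (illustrative only, not the actual code from [1]):

```python
# Rough sketch with spotipy; illustrative, not the code behind [1].
import spotipy
from spotipy.oauth2 import SpotifyOAuth

scope = "user-top-read playlist-modify-private"
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope=scope))

# Top 100 artists; the endpoint caps each page at 50 items.
# "long_term" is the closest bucket Spotify offers to "last 12 months".
artists = []
for offset in (0, 50):
    page = sp.current_user_top_artists(limit=50, offset=offset,
                                       time_range="long_term")
    artists += page["items"]

# Keep the less popular half of each artist's top tracks.
track_ids = []
for artist in artists:
    tracks = sp.artist_top_tracks(artist["id"])["tracks"]
    tracks.sort(key=lambda t: t["popularity"])
    bottom_half = tracks[: len(tracks) // 2]
    track_ids += [t["id"] for t in bottom_half[:2]]  # 1 or 2 per artist

playlist = sp.user_playlist_create(sp.current_user()["id"],
                                   "Lesser Lights", public=False)
for i in range(0, len(track_ids), 100):  # max 100 tracks per call
    sp.playlist_add_items(playlist["id"], track_ids[i:i + 100])
```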
It's quite liberating as a non-dev suddenly having these new tools available, that's for sure.
[1] https://github.com/welcomebrand/Spotify-Lesser-Lights
That's what I was thinking as well after letting Claude build a Todoist clone in a few vibe-coding sessions. It got literally most of the features I care about, with a UI similar to Todoist and all the bootstrapping done. I'm deploying it to my NAS later this week and cancelling my subscription.
It's way past the point of "just" doing MVPs or simple proofs of concept. I'm talking about user auth, dynamic input parsing, calendar views, tags, projects, a history of events, and more, given a few prompts.
One underestimated productivity booster is that you can write code on your phone by giving orders to a coding assistant in a spare moment. You can fill extra time that way instead of reading social media or playing a game.
I was just coding a personal website the other day while waiting for our number to be called at the DMV. I couldn’t really review the code but it did give me a chance to test on mobile.
This is without doing anything special, just using one instance of Claude Opus 4.5 and exe.dev.
I was just thinking about how harmful that idea is in general. I think true achievement and productivity comes from deep focus and immersion in the problem.
Ironically a lot of monotonous work that you were forced to do helped you immerse yourself in the problem domain and equipped you for the hard parts. Not just talking about AI btw, in general when people automate away the easy parts, the hard parts will suddenly seem more difficult, because there's no ramp-up.
While I know AI coding is helpful in some ways, the mode of work where you keep getting distracted while the agent works is much less productive than just grinding the problem.
I mean AI also helps you stay in the zone, but this 'casual' approach to work ultimately results in things not getting done, in my personal experience.
This is a pretty wild claim, so I think it is fair to be critical of the examples given:
- Driftless sounds like it might be better as a Claude Code skill or hook
- Deploycast is an LLM summarization service
- Triage also seems like it might be more effective inside CC as a skill or hook
In other words all these projects are tooling around LLM API calls.
> What was valuable was the commitment. The grit. The planning, the technical prowess, the unwavering ability to think night and day about a product, a problem space, incessantly obsessing, unsatisfied until you had some semblance of a working solution. It took hustle, brain power, studying, iteration, failures.
That isn't going to go away. Here's another idea: a discussion tool for audio workflows. Pre-LLMs the difficult part of something like this was never code generation.
Sigh. Lines of code are not execution, man. Having functional apps is not execution. A full-stack, AWS-deployed, multi-stage scalable microservices wonder is not execution!
LLMs make it a lot easier to build MVPs, but the hard work of VALIDATING problems and their solutions, which IMO was always >80% of the work for a successful founder, is harder than ever. With AI we now get 100 almost-useful solutions for every real problem.
Ideas are cheap for a very narrow vision of "ideas". Sure, you can build your recipe site, TODO list or whatever it is cheaply and quickly without a single thought, but LLMs are still just assembling lots of open-source libraries _mostly_ written by humans into giant piles of spaghetti.
There's a hilarious thread on Twitter where someone "built a browser" using an LLM feedback loop, and it just pasted together a bunch of Servo components, some random other libraries, and tens of thousands of lines of spaghetti glue to make something that renders a webpage in anywhere from a few seconds to a minute.
This will eventually get better once they learn how to _actually_ think and reason like us - and I don't believe by any means that they do - but I still think that's a few years out. We're still at what is clearly a strongly-directed random search stage.
The industry is going through a mass psychosis event right now thinking that things are ready for AI loops to just write everything, when the only real way for them to accomplish anything is by just burning tokens over and over until they finally stumble across something that works.
I'm not arguing that it won't ever happen. I think the true endgame of this work is that we'll have personal agents that just do stuff for us, and the vast majority of the value of the entire software industry will collapse as we all return to writing code as a fun little hobby, like those folks who spend hours making bespoke furniture. I, for one, look forward to this.
The "built a browser" example you gave reminded me how I've "built a browser" as a kid in the 90s using Visual Basic (or something similar) - I've simply dragged the browser view widget, added an input and some buttons that called functions from the widget and there you go, another browser ready :-)
I agree with your vision of the endgame. We won't even need a screen; we'll communicate with our agents verbally or with gestures, through some always-on device with a long battery life.
I just hope that we retain some version of autonomy and privacy, because no one wants the tech giants listening in on every single word you utter just because your agent heard it. No one wants it, but only some, not many, care.
Finally, developers are realizing that it's not how you write code or who writes the code, it's figuring out WHAT to write. LLMs are finally exposing this because the feedback cycle is so short.
> Finally, craftsmen are realizing that it's not how you woodwork or who makes the furniture, it's figuring out WHAT to make. IKEA is finally exposing this because the feedback cycle is so short.
I keep reading these pro LLM articles. And I keep thinking that it is abysmal. Sure, for now it works, because there are senior engineers that refine the rough edges and know what's going on.
And especially some folks keep claiming that one just needs to get better at prompting and describe a detailed spec.
Wanna know what a detailed spec is called? An unambiguous one? It's called code.
LLMs still feel like a very round-about way of re-inventing code. But instead of just a new language, it's a language that nondeterministically creates "code" or a resemblance thereof.
And I am aware that this is currently not a popular opinion on HN, so keep the downvotes coming.
If you use LLMs outside the popular GitHub languages, they will fail hard on you. Glorified text completion, that's what it is.
"The ability to actually build something—to turn a napkin sketch into working software—was the thing that separated dreamers from builders. It’s what made you valuable."
There is also the matter of having ideas that are good and knowing how to make them into good software, not something that simply "technically works". LLMs are not enough to overcome this barrier, and the author's examples seem to prove the point. The "working products with test suites, documentation, and polish" that are just another batch of LLM front-ends are frankly unimpressive. Is this the best that AI can offer?
>easily replicable probably it was not a good idea after all.
I mean, sometimes the hard work is creating object number one. There are a crapload of inventions where we look back and ask "why did it take so long to make the first one?", and then the object/idea explodes across the planet because of the ease of implementation and the useful practical application.
I think this statement is marred by our modern sensibility that says everything must be profitable or it's a bad idea.
> This isn’t about one person copying one idea. It’s about the fundamental economics of software changing.
That "this isn't x, it's y" really is a strong tell
> System design doesn't come by itself.
I am also building some agents. It is almost hands off at this point.
> people don't need to ask anymore
It's just that they're asking wherever they expect they'll reach a better answer faster than on SO.
> As customers connect these nonfunctional requirements with a brand, I don't see how big SaaS players will have a problem.
Brand means almost nothing when a competitor can price the software 90% cheaper. Which is what we are going to see.
While your statement is true, this is actually a very minor reason why sales is hard.
My decades of experience suggest that the opposite will happen. People will realize that the software industry is 100% moat and 0% castle.
People will build great software that nobody will use while a few companies will continue to dominate with vaporware.
That makes no sense. "Dominate" implies people use or buy your software. If you produce nothing ("vaporware"), how can you dominate?
> I don't need to pay those extra costs
Except for the token cost, maybe.
Great ideas are rare.
"AI startups say the promise of turning dazzling models into useful products is harder than anyone expected":
https://www.wired.com/story/artificial-intelligence-startups...
This is not new. There is tech that enables new possibilities, but it's not a f---ing magic wand.
Treat it rhetorically.
There can be no question that the cost coefficients of Ideas vs. Execution have changed with LLMs.
> I just hope that we retain some version of autonomy and privacy
Agents deployed locally should be the goal.
Nothing replaces making a simple UX instead of a complicated kitchen-sink product.
It’s easy to make stuff. It’s harder to make stuff people want.
I am thankful for the increase in product velocity and I also recognize that a lot of stuff people make isn’t what people want.
Product sense and intuition still matter.
Have they iterated on user feedback? Have they fixed obscure issues? Made any major changes after the initial version?
More importantly, can the author claim with a straight face that they no longer need to read or understand the code that has been produced?
Just another one of those “look, I built a greenfield pet project over the weekend, software engineering is dead.”
> easily
reproducible now?
Yes.
> LLMs don't change the equation.
No. They make more things easily replicable.
"We made this and all it took was 500 juniors working for a year" used to be a reasonable business moat of effort. Now it's not.