I'm curious about how the barrier to entry for creating software products has changed since the rapid proliferation of AI development tools.
AI dev tools are that catamaran. They’ll get you across the pool; you might even get half a mile from shore, but there you are, in the middle of the lake, sitting on cardboard and duct tape, wishing you knew how to swim.
It’s the most fun I’ve had in a long time, designing ad hoc algorithms that mimic how we store and retrieve memory (and form context).
It’s gotten to the point where simulation theory is palatable to me. How the simulation pulls in relevant context just in time is critical to making a believable experience. You can cut a lot of corners so long as the felt experience is believable (e.g., the LLM can simulate a scenario with you without all the data necessary for a fully faithful experience; this is a memory/context constraint that we are newly being introduced to with LLMs).
Ok, off the deep end, I know. But listen, it would be programmers that would catch wind of it first.
They are, in every case I've seen, creating software _demos_. Those things will fall over under their own weight with 1-2 more iterations.
Someone with no code experience can say "Make snake!" or some other contrived prompt, and maybe even add a handful of features, but very quickly they will code themselves into a corner they can't get out of. Heck, I sometimes go 3-4 prompts deep on something with Aider, then git reset back once it turns out something isn't going to work out.
If someone has _fully launched_ a product using only AI to write _all_ the code (press X to doubt), then it's either a product that will never grow past its initial feature set and/or something trivially copied (with or without AI).
What AI tools may change is the ability for "ideas people" to create a basic MVP (of the tool itself; I don't think you are going to get an LLM to churn out a whole SaaS codebase without a developer guiding it) and raise interest/funding/recruit others. That's not the "barrier to entry" lowering, that's just a "better slide deck".
Those who already have a high level idea of what to do and roughly how to execute it benefit the most from LLMs at the moment. This is very good for purely "technical" devs in greenfield environments. Less useful for super large interconnected codebases, but tools are getting there.
It will not, however, magically make a bad dev a good one. A bad software product is not usually bottlenecked by the software it's running on; it's bottlenecked by user experience and product-market fit. That still requires some skilled human input, but that could also change soon. Some people have better product intuition than others but couldn't execute on complex code, so LLMs do help here to an extent.
As of 2025, I think you still need to be a pretty decent dev even with LLM assistance.
The exception (the area where you must still learn something) is design, though. If you ask AI to add something to your API, and do it repeatedly, you will end up with a very poorly designed API, with separate endpoints for updating separate fields in the same record, etc., all of which will happily work fine.
Unless you knew what to do from the start, you’re going to make a lot of tech debt.
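To make the API point concrete, here's a minimal sketch (all names and the scenario are hypothetical, not from any real codebase) of what prompt-by-prompt accretion tends to produce versus what a designer would write once:

```python
# In-memory stand-in for a user record store.
users = {1: {"name": "Ada", "email": "ada@example.com", "plan": "free"}}

# The accreted anti-pattern: each "add this to my API" prompt
# yields another narrow endpoint for a single field.
def update_user_name(user_id, name):      # iteration 1
    users[user_id]["name"] = name

def update_user_email(user_id, email):    # iteration 2
    users[user_id]["email"] = email

def update_user_plan(user_id, plan):      # iteration 3 ... and so on
    users[user_id]["plan"] = plan

# What up-front design would give you instead: a single PATCH-style
# update with validation of allowed fields in one place.
ALLOWED_FIELDS = {"name", "email", "plan"}

def patch_user(user_id, changes):
    unknown = set(changes) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"unknown fields: {unknown}")
    users[user_id].update(changes)

patch_user(1, {"name": "Grace", "plan": "pro"})
```

Both versions "happily work fine", which is exactly the trap: nothing breaks, the debt just accumulates in the shape of the API.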
But the common folk will be able to create pretty compelling software as easily as they can create pretty compelling photos, at a far lower barrier of skill than in the '70s using film.
So, as with photography, the jobs will be fewer and will require more work and a higher bar of skill, but the common person will be able to produce something pretty compelling while knowing little to nothing about how it all works.
I did a short experiment: building an app with Cursor in a stack and domain I know nothing about. I got it to the first stage, and it was cool. But the app kept crashing once in a while with memory issues, and my AI friend kept coming up with solutions that didn't help but made the code more and more overengineered and harder to navigate. I'd feel sorry for those who'd need to maintain tools like this at a stage like the one I described. Maybe that's the state of future start-ups out there?
I've had many friends with "app ideas", and using these tools can help them flesh out their value proposition.
I have the same type of friends, and they would often ask me for help. But when you ask them for even a minimal amount of work, they often just give up.
For instance, have an idea for an app? Draw out the screens. You can use a pen and paper. Where does this button take you?
AI can help with that, sure, but it's still a lot of work to go over and iterate with AI.
For software meant to be consumed by the masses it's too unreliable for all the boring details, but if you want something that serves a specific purpose, then sure, it seems to work really well.
Otherwise, though, I haven't really heard of any non-technical founders leveraging it to finally get their app off the ground.
It's a dynamic I can't quite put a name on right now but I think that's a barrier.
I think we'll have a new programming language (natural language with rules). I'm biased, though, as I've made that language :)
Going a lot lower.
ChatGPT and friends are really good at grunt work, but as far as picking the tech stack and architecting an actual solution, they fall flat.
Plus, if you ever run into any real trouble, ChatGPT has a very nasty habit of just telling you to keep doing it the same way. I've had times where I'll post the same code into multiple LLMs and get multiple incorrect answers, all while I'm thinking: if I were an actual web developer, this would take 30 seconds...
It saves time searching documentation, but it sometimes hallucinates.
The key here is that I can tell the difference and I have spent time in many codebases and read up on code design theory.
So in my case it’s a multiplier of clear understanding and somewhat sufficient subject matter expertise.
For someone without expertise, the LLM quickly becomes a multiplier of the Dunning-Kruger Effect.
I know enough to not try and write an organic chemistry paper with an LLM. But Twitter tells everyone they can do a similar thing in the area of software engineering.