Ask HN: Are AI dev tools lowering the barrier to entry for creating software?

I am seeing more and more stories about people who don't know how to program using AI to create software products. On the surface, that suggests the barrier to entry for software development is lower. But there are at least two new factors, the cost of the tools and expectations in the market, which change the equation regarding what makes a product viable.

I'm curious about how the barrier to entry for creating software products has changed since the rapid proliferation of AI development tools.

27 points | by joe8756438 14 hours ago

24 comments

  • sevensor 12 hours ago
    Every summer, my community pool has a cardboard regatta. Kids can use as much duct tape as they want to waterproof a cardboard box and paddle it 25 yards to the other side. Half of the vessels sink within a length or two and the kids have to swim to the edge of the pool. There’s no age limit, and last year a grown man entered a fully engineered catamaran design that beat all the others handily. The secret was using way more duct tape than anybody else.

    AI dev tools are that catamaran. They’ll get you across the pool; you might even get half a mile from shore, but there you are, in the middle of the lake, sitting on cardboard and duct tape, wishing you knew how to swim.

    • drzzhan 12 hours ago
      The best analogy I've ever seen. Thank you for sharing!
  • bloomingkales 1 hour ago
    I don’t think it’s lower at all. If you try to build any remotely ambitious app with AI, you will easily find yourself in a very long abstract problem solving space that requires a good amount of creativity.

    It’s the most fun I’ve had in a long time, designing ad hoc algorithms that mimic how we store and retrieve memory (and form context).

    It’s gotten to the point where the simulation theory is palatable to me. How the simulation pulls in relevant context just in time is critical to making a believable experience. You can cut a lot of corners so long as the felt experience is believable (e.g., the LLM can simulate a scenario with you without all the data necessary to provide a believable experience; this is a memory/context constraint that we are newly being introduced to with LLMs).

    Ok, off the deep end, I know. But listen, it would be programmers that would catch wind of it first.

  • joshstrange 11 hours ago
    > I am seeing more and more stories about people who don't know how to program using AI to create software products.

    They are, in every case I've seen, creating software _demos_. Those things will fall over under their own weight with 1-2 more iterations.

    Someone with no coding experience can say "Make Snake!" or other contrived examples, and maybe even add a handful of features, but very quickly they will code themselves into a corner they can't get out of. Heck, I sometimes go 3-4 prompts deep on something with Aider, then git reset back once it turns out something isn't going to work out.

    If someone has _fully launched_ a product using only AI to write _all_ the code (Press X to doubt), then it's either a product that will never grow past its initial feature set or something trivially copied (with or without AI).

    What AI tools may change is the ability for "ideas people" to create a basic MVP (of the tool itself; I don't think you are going to get an LLM to churn out a whole SaaS codebase without a developer guiding it) and raise interest/funding/recruit others. That's not the "barrier to entry" lowering, that's just a "better slide deck".

  • boshalfoshal 12 hours ago
    I think the super played out twitter adage has some merit to it: "it makes 10x devs 100x devs."

    Those who already have a high level idea of what to do and roughly how to execute it benefit the most from LLMs at the moment. This is very good for purely "technical" devs in greenfield environments. Less useful for super large interconnected codebases, but tools are getting there.

    It will not, however, magically make a bad dev a good one. A bad software product is not usually bottlenecked by the software it's running on; it's bottlenecked by user experience and product-market fit. That still requires some skilled human input, but that could also change soon. Some people have better product intuition than others but couldn't execute on complex code, so LLMs do help here to an extent.

    As of 2025, I think you still need to be a pretty decent dev even with LLM assistance.

    • pockmarked19 11 hours ago
      Not high level, you need to know exactly how it’s done. If you don’t at the start, then you will by the time you arrive at the working commit.

      The exception (in that you must learn something) is in design, though. If you ask AI to add something to your API, and do it repeatedly, you will end up with a very poorly designed API, with separate endpoints for updating separate fields in the same record, etc, which will happily work fine.
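      To make that concrete, here's a hypothetical sketch of the drift described above (the route and field names are invented for illustration, not from any real codebase): repeated one-off "add an endpoint" prompts tend to accumulate one endpoint per field, where an up-front design would use a single validated partial update.

```python
# What repeated ad hoc prompts tend to produce: one endpoint per field.
# (Hypothetical route names, for illustration only.)
drifted_routes = [
    "POST /users/{id}/update_email",
    "POST /users/{id}/update_name",
    "POST /users/{id}/update_phone",
]

# What an up-front design would more likely use: one PATCH-style partial
# update that validates which fields may change.
ALLOWED_FIELDS = {"email", "name", "phone"}

def patch_user(record: dict, changes: dict) -> dict:
    """Apply a partial update to a user record, rejecting unknown fields."""
    unknown = set(changes) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    record.update(changes)
    return record

user = {"email": "a@example.com", "name": "Ada", "phone": None}
patch_user(user, {"name": "Ada L."})
```

      Both designs "happily work fine"; the difference only shows up later, when every new field means another endpoint to document, test, and keep consistent.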

      Unless you knew what to do from the start, you’re going to make a lot of tech debt.

  • zlagen 12 hours ago
    After using the AI chatbots for some time, I think they are not so useful for non-programmers, other than for building small tools, which may then be difficult for a non-programmer to modify and polish. They still fail and produce subtle errors too often, so they are more useful for programmers, who already know what the AI is doing and can spot mistakes.
  • GoldenMonkey 1 hour ago
    Raising the bar, if anything. When your thinking is delegated to AI, you'll lean too hard on it… as a crutch.
  • conception 6 hours ago
    Software engineering is going the way that photography did. It didn't happen overnight, but in ten years it will have, I'd bet. Good photographers are still needed when you need "real" work done, and those photographers use the tools available to them (Photoshop et al.).

    But the common folk will be able to create pretty compelling software as easily as they can create pretty compelling photos, at a far lower skill barrier than in the '70s with film.

    So, as with photography, the jobs will be fewer and will require more work and a higher bar of skill, but the common person will be able to produce something pretty compelling while knowing little to nothing about how it all works.

    • dansult 1 hour ago
      That's a sombre take. My father-in-law was a professional photographer for 30 years. His experience was very much that everyone can take a decent photo today. But he saw diminishing opportunities as early as the days of small point-and-shoot cameras. Today, those aren't used as much because commodity DSLRs are relatively cheap and don't require much, if any, training. Are we in the days of the Nikon Coolpix? Maybe; it's definitely something to consider.
  • kanemcgrath 10 hours ago
    For people who want to learn programming, I think language models are a very powerful teaching tool. I have learned more programming than ever before because I have been able to ask questions and get answers directly relevant to what I'm working on. Maybe it depends on how you like to learn things, or maybe it works better because I already know a base amount of programming, but I tell people who want to learn programming to ask ChatGPT to teach them through a simple project.
  • n0rdy 8 hours ago
    I'd say not as of today's state of AI tools, but it's difficult to predict the future. So far, I can see many excited non-tech people who can build simple things or demos. But the real complexity starts behind that, once the solution needs to be deployed, maintained, extended with new features, bugs have to be fixed, etc. That's when it gets tricky.

    I did a short experiment by trying to build an app with Cursor in a stack and domain I know nothing about. I got it to the first stage, and it was cool. But the app kept crashing once in a while with memory issues, and my AI friend kept coming up with solutions that didn't help but made the code more and more overengineered and harder to navigate. I'd feel sorry for those who'd need to maintain tools like this at the stage I described. Maybe that's the state of future start-ups out there?

  • fxtentacle 10 hours ago
    I see no change. AI is the new "no code". Which means in both cases, projects outgrow their capabilities quite quickly and then they undergo a messy transition to traditional software development.
  • dvngnt_ 13 hours ago
    For prototyping and for helping people with some programming ability, yes, I think so. No-code tools have existed for years, so the barrier was already somewhat low for building basic applications.

    I've had many friends with "app ideas", and using these tools can help them flesh out their value proposition.

    • bko 12 hours ago
      Do any of them actually flesh out "app ideas" with AI?

      I have the same type of friends and they would often ask me for help. But when you ask for even the minimal amount of work, they often just give up.

      For instance, have an idea for an app? Draw out the screens. You can use a pen and paper. Where does this button take you?

      AI can help with that, sure, but it's still a lot of work to go over and iterate with AI.

  • Bjorkbat 10 hours ago
    Most of my observations have been that people are using it to make personal software. That is to say, software with an intended user base of just yourself and maybe friends and family.

    For software meant to be consumed by the masses, it's too unreliable for all the boring details, but if you want something that serves a specific purpose, then sure, it seems to work really well.

    Otherwise, though, I haven't really heard of any non-technical founders leveraging it to finally get their app off the ground.

  • lowlevel 6 hours ago
    I find them pretty good at helping you learn about various libraries or functions and how to apply them, but you still have to have some idea what you're trying to do and how to approach it or it will lead you down a frustrating and hopeless path. So, not yet...
  • duxup 14 hours ago
    I feel like AI can get you started with less friction, but after that you need background knowledge to vet everything, and that you need to learn ... by not using AI.

    It's a dynamic I can't quite put a name on right now but I think that's a barrier.

  • dehrmann 12 hours ago
    The thing that's lowered it most in recent years has been Roblox. What's raised it the most is the decline of desktop computing and the rise of app computing.
  • ingigauti 9 hours ago
    I think it's going a lot lower. We're still at the horseless-carriage stage with AI and coding, that is, using new tech in the old way.

    I think we'll have a new programming language (natural language with rules). I'm biased, though, as I've made that language :)

    Going a lot lower.

  • simonhfrost 13 hours ago
    I'm sceptical that you can create entire apps. It might be good to get an MVP off the ground, but once you need to modify code it gets exponentially complicated because: 1) you're not familiar with the code AI wrote and 2) what it writes is generally more complicated.
  • 999900000999 12 hours ago
    Not really.

    ChatGPT and friends are really good at grunt work, but as far as picking the tech stack and architecting an actual solution, they fall flat.

    Plus, if you ever run into any real trouble, ChatGPT has a very nasty habit of just telling you to keep doing it the same way. I've had times where I'll post the same code to multiple LLMs and get multiple incorrect answers, all while thinking that if I were an actual web developer this would take 30 seconds...

  • andrei_says_ 10 hours ago
    I’d say that my best use so far is as a more powerful autocomplete: proposing full lines of repetitive code and sometimes writing short methods/functions for very common tasks (“export to CSV with these headers”).
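    As a rough sketch of that kind of chore (the header and field names here are invented for illustration), the whole task fits in a few lines that are easy to review at a glance:

```python
import csv
import io

def export_csv(rows: list[dict], headers: list[str]) -> str:
    """Serialize dict rows to a CSV string with the given header order,
    silently dropping any keys not listed in headers."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=headers, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

out = export_csv(
    [{"id": 1, "name": "Ada", "internal": "x"}],
    ["id", "name"],
)
```

    Code at this size is exactly where "I can tell the difference" applies: a hallucinated parameter or a wrong header order is obvious on inspection.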

    It saves time in searching documentation but sometimes hallucinates.

    The key here is that I can tell the difference and I have spent time in many codebases and read up on code design theory.

    So in my case it’s a multiplier of clear understanding and somewhat sufficient subject matter expertise.

    For someone without expertise, the LLM quickly becomes a multiplier of the Dunning-Kruger Effect.

    I know enough to not try and write an organic chemistry paper with an LLM. But Twitter tells everyone they can do a similar thing in the area of software engineering.

  • ddgflorida 8 hours ago
    Definitely has lowered the entry cost.
  • apwell23 9 hours ago
    No
  • sergiotapia 11 hours ago
    Have you seen the quality of said products? The demand for engineers who actually know what they're doing will grow. If you actually know what you're doing and used to write code by hand (LOL!), AI just lets you fly.