9 comments

  • manoDev 46 minutes ago
    There are two ways to look at it:

    - Software engineering is a cost center; engineers are middlemen between C-level ideas and a finished product.

    - Software engineering is about figuring out how to automate a problem domain and unlock new capabilities in the process

  • cjfd 5 hours ago
    The article talks about 'software development will be democratized' but the current LLM hype is quite the opposite. The LLMs are owned by large companies and are quite impossible to train by any individual, if only because of energy costs. The situation where I am typing my code on my linux machine is much more democratic.
    • tkel 1 hour ago
      Right, people misuse this term "democratized" all the time. Because it sounds nice. But it's incorrect.

      Democracy is about governance, not access.

      A "democratized" LLM would be one in which its users collectively made decisions about how it was managed. Or if the companies that owned LLMs were run democratically.

    • Havoc 4 hours ago
      It is democratising from the perspective of non-programmers: they can now make their own tools.

      What you say about big tech is true at the same time, though. I worry about what happens when China takes the lead and no longer feels the need to do open models. First hints are already showing: advance access to ds4 only for Chinese hardware makers.

      • ares623 4 hours ago
        They can rent their own tools, more like.
      • ldng 2 hours ago
        Terrible argument. They always could learn and DIY.
        • edgyquant 51 minutes ago
          You have to have a knack for it, most people are not programmer types
        • kqr 2 hours ago
          ... if they are privileged enough to be able to take time away from family and jobs.

          The current crop of LLMs is subsidised enough to make this learning less expensive for those short on both time and money. That's what's meant by democratised.

      • cyanydeez 3 hours ago
        The people taking the lead in most of AI in America are bootlickers of fascism. So not much difference from China on a long enough timeline.
        • Havoc 2 hours ago
          The US losing the plot doesn't change the fact that the tech is fundamentally democratising on a personal level.

          If all the frontier models disappear into autocratic dark holes then yeah, we have a problem, but the fundamental freedom gain ("individuals can make tools without knowing coding") isn't going anywhere.

    • xg15 1 hour ago
      It's "democratizing" in the same way Uber "democratized" taxis...
    • heliumtera 1 hour ago
      You are assuming democracy wasn't designed to crush the individual and reduce autonomy at all cost. How cute.
  • kopirgan 1 hour ago
    Wow, it mentions practically every flavour-of-the-month technology that was supposed to make building useful programs a drag-and-drop affair.

    I recall PowerBuilder in particular; it was all the rage.

  • jleyank 2 hours ago
    Developers are “unwanted overhead” until the customer money threatens to walk out the door. They’re going to damage their future products and probably reduce their customer base (fewer consumers) and then sit there looking like gaffed fish when the budget ink turns red. “Who would have thought…”

    Don’t facilitate losing your job.

    • marginalia_nu 2 hours ago
      Funny part is we've already had this exact thing happen with outsourcing. It sure looked like a bargain until you got to such pesky details as correctness and maintainability.
      • iugtmkbdfil834 1 hour ago
        I am starting to think it is a part of the management cycle. The new batch feels confident they can do X, so they have to re-learn, inflicting a ridiculous amount of pain in the process.

        Two years ago, one former exec at my place was perfectly happy to throw resources ( his word ) from India at a problem, while unwilling to pay the vendor for the same thing. I voiced my objection once, but after it was dismissed I just watched the thing blow up.

        I am not saying the current situation is the same. It is not. But it is the same hubris, which means miscalculations will happen (like with Dorsey's Block mass firing).

  • helsinkiandrew 3 hours ago
    I'd say the article left out Software Reuse - talked about a lot more in the late 90s / early 00s than now.

    You could argue that coding with LLMs is a form of software reuse, one that removes some of its disadvantages.

    • utopiah 44 minutes ago
      I'm not familiar with Software Reuse, but if it's about reusing software itself, one advantage of a live codebase is that it's understood in the head of a human being. That means when an issue is opened, a person remembers whether it's new or not. It's not "just" semantic search: the person knows not only whether it's genuinely new (and thus can be closed) but why it exists in the first place. Is it the result of the current architecture, a dependency choice, etc., or simply a "shallow" bug that can be fixed in a single function?
  • ryanjshaw 4 hours ago
    Until a year ago I believed as the author did. Then LLMs got to the point where they sit in meetings like I do, make notes like I do, have a memory like I do, and their context window is expanding.

    Only issue I saw after a month of building something complex from scratch with Opus 4.6 is poor adherence to high-level design principles and consistency. This can be solved with expert guardrails, I believe.

    It won’t be long before AI employees are going to join daily standup and deliver work alongside the team with other users in the org not even realizing or caring that it’s an AI “staff member”.

    It won’t be much longer after that when they will start to tech lead those same teams.

    • symfrog 2 hours ago
      The closer you get to releasing software, the less useful LLMs become. They tend to go into loops of 'Fixed it!' without having fixed anything.

      In my opinion, attempting to hold the hand of the LLM via prompts in English for the 'last mile' to production ready code runs into the fundamental problem of ambiguity of natural languages.

      From my experience, those developers that believe LLMs are good enough for production are either building systems that are not critical (e.g. 80% is correct enough), or they do not have the experience to be able to detect how LLM generated code would fail in production beyond the 'happy path'.

      • empath75 2 hours ago
        This is not my experience with claude code. It does forget big picture things but if you scope your changes well it’s fine.
        • symfrog 2 hours ago
          I would estimate that out of every 200 lines of code that Claude Code produces, I notice at least 1 issue that would cause severe problems in production.

          In my opinion these discussions should include MREs (minimal reproducible examples) in the form of prompts to ground the discussion.

          For example, take this prompt and put it into Claude Code, can you see the problematic ways it is handling transactions?

          ---

          The invoicing system is being merged into the core system that uses Postgres as its database. The core system has a table for users with columns user_id, username, creation_date. The invoicing data is available in a json file with columns user_id, invoice_id, amount, description.

          The data is too big to fit in memory.

          Your role is to create a Python program that creates a table for the invoices in Postgres and then inserts the data from the json file. Users will be accessing the system while the invoices are being inserted.

          ---
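
          To ground the discussion, here is a hedged sketch of one part an LLM often gets wrong with this prompt: streaming the file and committing in small batches, so concurrent users never queue behind one giant multi-gigabyte transaction. `stream_batches` is a hypothetical helper, and the JSON Lines framing is an assumption, since the prompt only says "a json file".

```python
import io
import json

def stream_batches(lines, batch_size):
    """Yield lists of parsed records without loading the whole file.

    Assumes one JSON object per line (JSON Lines) - an assumption,
    since the prompt only says "a json file".
    """
    batch = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        batch.append(json.loads(line))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# Each batch would then go into its own short transaction, e.g. with
# psycopg: `with conn.transaction(): cur.executemany(insert_sql, batch)`,
# so readers aren't blocked behind one long-running insert.

sample = io.StringIO(
    '{"user_id": 1, "invoice_id": 10, "amount": 5.0, "description": "a"}\n'
    '{"user_id": 2, "invoice_id": 11, "amount": 7.5, "description": "b"}\n'
    '{"user_id": 3, "invoice_id": 12, "amount": 2.0, "description": "c"}\n'
)
batches = list(stream_batches(sample, batch_size=2))
```

          Batch sizing is a trade-off: smaller batches keep transactions short for concurrent users, larger ones amortise round trips.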

          • edgyquant 45 minutes ago
            What he's saying is: split this up into multiple tasks (create the table, insert the data, etc.).
        • ajshahH 1 hour ago
          Yes, but knowing how to scope your changes requires a lot of expertise.
    • geraneum 55 minutes ago
      > poor adherence to high-level design principles and consistency. This can be solved with expert guardrails, I believe.

      That’s a bit… handwavy…!

    • Roark66 4 hours ago
      After 2 years of using all of these tools (Claude C, Gemini cli, opencode with all models available) I can tell you it is a huge enabler, but you have to provide these "expert guardrails" by monitoring every single deliverable.

      For someone who is able to design an end to end system by themselves these tools offer a big time saving, but they come with dangers too.

      Yesterday a mid-level dev on my team proudly presented a web tool he "wrote" in Python (to be run on localhost) that runs kubectl in the background and presents things like the versions of images running in various namespaces. It looked very slick; I can already imagine the product managers asking for it to be put on the network.

      So what's the problem? For one, no threading whatsoever, no auth, all queries run in a single thread, and on and on. A maintenance nightmare waiting to happen. That is the risk of a person who knows something, but not enough, building tools by themselves.

      • kopirgan 1 hour ago
        Any comments on how the copyright issues are handled in corporate settings? I mean both staying clear of lawsuits and ensuring what we produce remains safe from copying.
      • ryanjshaw 3 hours ago
        Yup. I'm no expert, so maybe I'm completely off base, but if I were OpenAI or Anthropic I'd likely just hire 1000 highly skilled engineers across multiple disciplines, tell them to build something in their domain of expertise, then critique the model's output, iteratively work on guardrails for a month or two until the model one-shots the problem, and package that into the new release.
        • LiamPowell 2 hours ago
          That's exactly what they are doing via dataannotation.tech and other services.
    • bakugo 2 hours ago
      I've been hearing this for several years. How much longer is "it won't be long"?
  • bananaflag 5 hours ago
    Yeah but this time it's for real.

    All the other attempts failed because they were just mindless conversions of formal languages to formal languages. Basically glorified compilers. Either the formal language wasn't capable enough to express all situations, or it was capable and thus it was as complex as the one thing it was designed to replace.

    AI is different. You tell it in natural language, which can be ambiguous and not cover all the bases. And people are familiar with natural language. And it can fill in the missing details and disambiguate the others.

    This has been known to be possible for decades, as (simplifying a bit) the (non-technical) manager can order the engineer in natural, ambiguous language what to do and they will do it. Now the AI takes the place of the engineer.

    Also, I personally never believed before AI that programming will disappear, so the argument that "this has been hyped before" doesn't touch my soul.

    I have no idea why this is so hard to understand. I'd like people to reply to me in addition to downvoting.

    • danhau 3 hours ago
      Programmers have enjoyed an occupation with solid stability and growing opportunities. AI challenging this virtually overnight is a tough pill to swallow. Naturally, many subscribe to the hope that it will fail.

      How far AI will succeed in replacing programmers remains to be seen. Personally I think many jobs will disappear, especially in the largest domains (web). But I think this will only be a fraction and not a majority. For now, AI is simply most useful when paired with a programmer.

      • aleph_minus_one 2 hours ago
        > Programmers have enjoyed an occupation with solid stability and growing opportunities.

        This is not the case:

        - Before the 90s, programming was rather a job for people who were insanely passionate about technology, and working as a programmer was not that well-regarded (so no "growing opportunities").

        - After the burst of the first dotcom bubble, a lot of programmers were unemployed.

        - Every older programmer can tell you how quickly the skills they had can become, and have become, irrelevant.

        Over the last few decades, the stability and opportunities for programmers have been more like a series of boom-bust cycles.

      • cafebabbe 3 hours ago
        AI is useful when paired with an experienced programmer.

        Experienced through old-school (pre-LLM) practice.

        I don't clearly see a good endgame for this.

        • citrin_ru 2 hours ago
          The endgame is to produce AI that will not need any supervision by the time the current generation of experienced developers retires, or even sooner. I don't know if it will happen, but many are betting on this, and models are still improving; flattening is not yet seen.
          • ajshahH 1 hour ago
            This implies programming is done and there will be no other advancements.

            And flattening is being seen, no? Recent advancements are mostly from RL’ing, which has limitations (and tradeoffs) too. Are there more tricks after that?

        • duggan 2 hours ago
          Motivated novices will just learn differently, and produce different kinds of systems for different audiences with different expectations.

          Some will dig into obscurities that LLMs don't or can't touch, others will orchestrate the tools, Gastown-style, into some as-yet-unknown form.

          People will vibe themselves into a corner and either start learning or flame out.

    • t_mahmood 2 hours ago
      A manager is not going to handle all the nitty-gritty details that an engineer knows. Fine, say they can ask an LLM to make a web portal.

      Does he know about SQL injection? XSS?

      Maybe he knows a little about security and asks the LLM to make a secure site with all the protection needed. But how does the manager know it works at all? If you only find out there's an issue in a critical part of the software after your users' data is stolen, how bad is the fallout going to be?

      How good a tool is also depends on who's using it. Managers are not engineers, obviously, unless they were engineers before becoming managers - but you are saying engineers are not needed. So where is that engineer-manager going to come from? I'm sure we're not growing them on engineering trees.
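
      For anyone unfamiliar with the SQL injection class mentioned above, a minimal sketch (sqlite3 standing in for a real database; the table and payload are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: interpolating user input straight into the SQL string.
# The WHERE clause becomes: name = 'alice' OR '1'='1', matching every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % user_input
).fetchall()

# Safe: a parameterized query treats the input as data, not SQL,
# so the literal payload string matches no row.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
```

      The point is that knowing to use the second form (and to check that generated code does) is exactly the kind of detail the manager in this scenario would not catch.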

      • edgyquant 48 minutes ago
        There are already companies that exist to audit the security of codebases programmatically, so this will just be part of the flow.
      • skydhash 2 hours ago
        It's like saying "I want a bridge" and then expecting steel beams and cables (or planks and ropes) to appear, and that's all you need. The user's needs are usually clear enough (they need a way to cross that body of water or that chasm), but the how is the real catch.

        In the real world, the materials are visible, so people have a partial understanding of how it gets done. But most of the software world is invisible and has no material constraints other than the hardware (you can't use RAM that is not there). If the hardware is like a blank canvas, a standard web framework is like a paint-by-numbers book (but one with lines drawn in pencil, so you can erase them easily). Asking the user to code with an LLM is like asking a blind person to draw the Mona Lisa with a brick.

    • ajshahH 1 hour ago
      > And it can fill in the missing details and disambiguate the others.

      Are you suggesting “And Claude, make no mistakes” works?

      Because otherwise you need an expert operating the thing. Yes, it can answer questions, but you need to know what exactly to ask.

      > This has been known to be possible for decades, as (simplifying a bit) the (non-technical) manager can order the engineer in natural, ambiguous language what to do and they will do it

      I have yet to see vibe coding work like this. Even expert devs with LLMs get incorrect output. Every time you have to correct your prompt, that's where your argument fails.

      • mexicocitinluez 1 hour ago
        I truly believe that people who see entire, non-trivial applications being built without serious human intervention have not, in fact, worked on non-trivial applications.

        And while these tools can be invaluable in some cases, I still don't know how we get from "hazy requirements where the user doesn't know what they even want" to "production-ready apps built at the fingertips of the PM".

        Another really important detail people keep missing is that we have to make thousands of micro-decisions along the way to build up a cohesive experience for the user. LLMs haven't really shown they're great at not baking assumptions into code. In fact, they're really bad at it.

        Lastly, do people not realize how easy it is to convince an LLM of something that isn't true, or vice versa? I love these tools, but even I find myself trying to steer them in the direction that makes sense to me, not the direction that makes sense generally.

    • mexicocitinluez 1 hour ago
      > All the other attempts failed because they were just mindless conversions of formal languages to formal languages.

      This is just categorically false.

      No-code tools didn't fail because they were "mindless conversions of formal languages to formal languages". They failed because the people who were supposed to benefit the most (non-developers) neither had the time nor desire to build stuff in the first place.

    • quotemstr 3 hours ago
      The thing about talking to computers is less the formality and more the specificity. People don't know what they want. To use an LLM effectively, you need to think about what you want with enough clarity to ask for it and check that you're getting it. That LLMs accept your wishes in the form of natural language instead of something with a LALR(1) grammar doesn't magically obviate the need for specificity and clarity in communication.
      • bananaflag 3 hours ago
        Agree that one needs clarity, but how does that differ from my example with the manager and the engineer? The manager also (ideally) learns over time that when they are clearer, the engineer does the work better.
        • elasticeel 1 hour ago
          Do they, though? Or do they learn that having a good engineer means they can assign ambiguous tasks, and the developer can reason through good decision-making and follow up with clarifying questions?

          LLMs need to get better at asking clarifying questions and at flagging that the initial solution might not work. Even when they get better at that, managers not capable of thinking through the answers well enough will fall short, and that is the space developers live in - which is the article's point.

        • skydhash 2 hours ago
          TL;DR: Clarity in software engineering means detailing all the constraints, which no user (apart from lawyers and engineers) usually does, because the real world has constraints that software does not.

          The hardware offers so few guarantees that the whole OS's job is to offer them. All layers are formal, but usefulness doesn't come from that. Usefulness comes from a consistent model that embodies a domain. So you have the hardware, which has capabilities but no model. Then you add the OS kernel, which imposes a model on the hardware; then the system libraries, which further restrict it to certain domains. Then you have the general libraries, which are more useful because they present another perspective. And then you have the application, which uses this last model according to a certain need.

          A good example is that you go from the sound card to the sound subsystem, then the ALSA libraries, to PipeWire, to an audio player or a media framework like the one in the browser. Dozens of engineers have contributed to this particular tower, and most developers only deal with the last layers, but the lesson is that the perspective of a user differs from the building blocks that we have in hand. Software engineering is reconciling the two.

          So people may know how things should look or behave on their end, but they have no idea what the building blocks are on the other. It's all abstract. The only things real are the hardware and the energy powering it. Everything else needs to be specified with code. And in that world that forms the middle layer, there are a lot of rules to follow to make something good, but few laws that prevent something bad. It's not like physical engineering, where there are things you just cannot do.

          Just like on a canvas you can draw anything as long as it's inside the boundary of the canvas, you can do anything in software as long as it's inside the boundary of the hardware. The OS on personal computers adds a few more restrictions, but not a lot. It's basically fantasia in there.

    • empath75 2 hours ago
      I spent the last two weeks at work building a whole system to deploy automated Claude Code agents in response to events. Even before I finished, it was already doing useful work, and now it is automatically handling Jira tickets and making PRs.
  • nsjdjdkdz 2 hours ago
    [flagged]
    • prsheetraj 2 hours ago
      Same phenomenon noticed here at IBM Mumbai, sir.
  • Havoc 4 hours ago
    Reviewing history is not a great way to approach groundbreaking tech.
    • elcapitan 3 hours ago
      "Not learning from history because the present is the present" is a pretty accurate description of the world in 2026, at least.
    • g947o 1 hour ago
      You are not going to stop people from reading into history, ever. If anything, people need to learn more about what happened in the past.
    • forgetfreeman 4 hours ago
      We have yet to invent groundbreaking tech that transcends either human nature or the banal depravity that stems from the profit motive at scale. Prior history of major tech innovations therefore may have some insight to offer regarding expected outcomes of the current hype wave around AI. The notion that technology so cleanly breaks from underlying social paradigms as to be wholly unpredictable is one of the tech industry's most persistently naive and destructive mythologies.