Vibe coding kills open source

(arxiv.org)

213 points | by kgwgk 3 hours ago

44 comments

  • WarmWash 2 hours ago
    Small, bespoke, personalized, on-the-spot apps are the future with LLMs.

    The future will absolutely not be "How things are today + LLMs"

    The paradigm now for software is "build a tool shed/garage/barn/warehouse full of as much capability for as many uses as possible" but when LLMs can build you a custom(!) hammer or saw in a few minutes, why go to the shed?

    • anticorporate 2 hours ago
      I think you're missing the enormous value in apps being standardized and opinionated. Standardized means that in addition to documentation, the whole internet is available to help you. Opinionated means as a user of an app in a new domain, you don't have to make a million decisions about how something should work to just get started.

      Sure, there will be more personalized apps for those who have a lot of expertise in a domain and gain value from building something that supports their specific workflow. For the vast majority of the population, and the vast majority of use cases, this will not happen. I'm not about to give up the decades of experience I've gained with my tools for something I vibe coded in a weekend.

      • tracker1 18 minutes ago
        I've seen plenty of "standardized" (ie, "Enterprise") applications... I'd just as soon take a bespoke hammer that's simple and easy to understand over a complex beast of HammerFactoryFactory to deliver you a builder of custom hammer builders so you get the JobHammer you need as part of the IoC loader platform that is then controlled through a 1.2GB service orchestrator that breaks at 11am every third Tuesday for an hour. When all you need to do is post up a "Help Wanted" poster on a piece of wood.
      • seniorThrowaway 13 minutes ago
        AIs/LLMs have already been trained on best practices for most domains. I've recently faced this decision and I went the LLM custom app path, because the software I needed was a simple internal business type app. There are open source and COTS software packages available for this kind of thing, but they tend to be massive suites trying to solve a bunch of things I don't need, and also a minefield of licensing, freemium feature gating, and subject to future abandonment or rug pulls into much higher costs. Something that has happened many times. Long story short, I decided it was less work to build the exact tool I need to solve my "right now" problem, architected for future additions. I do think this is the future.
      • Bishonen88 24 minutes ago
        Expertise won't be needed (it already isn't). One can create copies of apps with vague descriptions referencing those big apps:

        "Create a copy of xyz. It needs to look and behave similarly. I want these features ... And on top of that ...". Millions of decisions aren't needed. A handful of vague descriptions of what one wants is all it takes today. I think Claude and co. can even take in screenshots.

        Documentation won't be needed either IMO. Since humans won't write nor read the code, they will simply ask LLMs if they have a question.

        I totally am giving up my experience with various paid SaaS this year, tools I was paying for in previous years. Not only am I able to add the features that I was wishing those tools had (and which would never have made it into the real app because they're niche requests), but I'm saving money at the same time.

        And the above is just what's happening today. Claude Code is younger than 1 year old. Looking forward to coming back to this thread in a year and swallowing my words... but I'm afraid I won't have to.

      • digiown 17 minutes ago
        The apps/use cases for which such standardized and opinionated tools can economically exist mostly already exist IMO. Vibe coded tools fill an enormous space of semi-unique problems that only affect a small number of people. For example, various scripts to automate tasks imposed by a boss. The best balance is probably to use LLMs to operate the standardized tools for you when available, so that things remain mostly scrutable.

        As the saying goes, 80% of users only use 20% of the features of your program, but they each use a different 20%. When the user vibecodes the program instead, only their specific 20% needs to be implemented.

      • iknowSFR 2 hours ago
        Then you’re going to be left behind. I’m going to be left behind.

        Every problem or concern you raise will adapt to the next world because those things are valuable. These concerns are temporary, not permanent.

        • blibble 1 hour ago
          > Then you’re going to be left behind.

          I really, really don't care

          I didn't get into programming for the money, it's just been a nice bonus

          • frizlab 1 hour ago
            > I didn't get into programming for the money, it's just been a nice bonus.

            Exactly the same for me! I kind of feel like an artist whose paintings are worth more, more easily, than a painter's or a musician's… But boy, would I be poor if this art were worthless!

    • II2II 1 hour ago
      > when LLMs can build you a custom(!) hammer or saw in a few minutes, why go to the shed?

      Because software developers typically understand how to implement a solution to a problem better than the client. If they don't have enough details to implement a solution, they will ask the client for details. If the developer decides to use an LLM to implement a solution, they have the ability to assess the end product.

      The problem is software developers cost money. A developer using an LLM may reduce the cost of development, but it is doubtful that the reduction in cost will be sufficient to justify personalized applications in many cases. Most of the cases where it would justify the cost would likely be in domains where custom software is in common use anyhow.

      Sure, you will see a few people using LLMs to develop personalized software for themselves. Yet these will be people who understand how to specify the problem they are trying to solve clearly, will have the patience to handle the quirks and bugs in the software they create, and may even enjoy the process. You may even have a few small and medium sized businesses hiring developers who use LLMs to create custom software. But I don't think you're going to see the wholesale adoption of personalized software.

      And that only considers the ability of people to specify the problem they are trying to solve. There are other considerations, such as interoperability. We live in a networked world after all, and interoperability was important even before everything was networked.

    • palmotea 2 hours ago
      > The paradigm now for software is "build a tool shed/garage/barn/warehouse full of as much capability for as many uses as possible" but when LLMs can build you a custom(!) hammer or saw in a few minutes, why go to the shed?

      1) Your specific analogy is kinda missing something important: I don't want my tools working differently every time I use them, and it's work to use LLMs. A hammer is kind of a too-simple example, but going with it anyway: when I need a hammer, I don't want my "LLM" generating a plastic one, then having to iterate for 30 minutes to get it right. It takes me far less than 30 minutes to go to my shed. A better example would be a UI: even if it were perfect, do you want all the buttons and menus to be different every time you use the tool? Because you generate a new one each time instead of "going to the shed"?

      2) Then there's the question: can an LLM actually build, or does it just regurgitate? A hammer is an extremely well understood tool that's been refined over centuries, so I think an LLM could do a pretty good job with one. There are lots of examples, but that also means the designs the LLM is referencing are probably better than the LLM's output. And then for things not like that, more unique, can the LLM even do it at all, or with a reasonable amount of effort?

      I think there's a modern phenomenon where making things "easier" actually results in worse outcomes, a degraded typical state vs. the previous status quo, because it turns what was once a necessity into a question of personal discipline. And it turns out when you remove necessity, a lot of people have a real hard time doing the best thing on discipline alone. LLMs might just enable more of those degenerate outcomes: everyone's using "custom" LLM generated tools all the time, but they all actually suck and are worse than if we just put that effort into designing the tools manually.

      • tracker1 8 minutes ago
        I started picturing AI generating tools like it does images of people... I mean, of course every other hammer will have an extra head off to the side, or split into 3 handles.

        Seriously though, you can tell AI what libraries and conventions you want to follow... that's been a lot of what I've done with it recently... I've been relatively pleased with the results.

        I've said several times that it's not perfect, but it is an overall force multiplier. It's much like working disconnected with an overseas dev team, but you get turnaround in minutes instead of the next morning in your email. The better instructions/specs you give, the better the results. On my best day, I got done what would have taken me about 3 weeks alone, after about 3 hours of planning/designing and another 2-3 hours of iteration with Claude Code. On my worst day, it was frustrating and it would have taken about the same amount of time doing it myself. On average, I'd say I get close to 5 days of work done in 5-6 hours of AI-assisted coding. Purely anecdotally.

        That said, I usually have a technical mind for how I want the solution structured as well as features and how those features work... often it clashes with the AI approach and sometimes it swims nicely. I'll also say that not all AI coding is the same or even close in terms of usefulness.

    • jayd16 26 minutes ago
      Why use a battle tested, secure, library that you know solves your problem when you can burden your project with custom code you need to maintain?
      • seniorThrowaway 9 minutes ago
        While quality libraries do exist, let's not pretend that most people are validating and testing the libraries they pull in, that abandoned / unmaintained libraries aren't widely used, and that managing the dependency hell caused by libraries is free.
    • rurp 59 minutes ago
      The vast majority of users make zero changes to the default settings of an app or device, even for software they use all the time and where some simple builtin adjustments would significantly improve their experience.

      I simply can't imagine a world where these same people all decide they constantly want to learn a completely unique UX for whatever piece of software they want to use.

      • ryandrake 53 minutes ago
        All the people may not, but a decently skilled software engineer armed with an LLM, who doesn't have a lot of free time, might now be motivated to do it, whereas before it was like, "This thing is going to take months to replace, do I really want to write my own?"
      • 7e 42 minutes ago
        The LLM will know how the user operates, their proclivities and brain structure, and will design UX perfectly suited to them, like a bespoke glove. They won't have to learn anything, it will be like a butler.
        • parineum 36 minutes ago
          Why not just say that the LLM will just do all the work while you're making up future, hypothetical capabilities of LLMs?
    • candiddevmike 2 hours ago
      Because whatever you use a LLM to build will inevitably need more features added or some kind of maintenance performed. And now you're spending $200+/mo on LLM subscriptions that give you a half-assed implementation that will eventually collapse under its own weight, vs just buying a solution that actually works and you don't have to worry about it.
    • Ravus 1 hour ago
      I do not think that this is likely to be a successful model.

      When (not if) software breaks in production, you need to be able to debug it effectively. Knowing that external libraries do their base job is really helpful in reducing the search space and in reducing the blast radius of patches.

      Note that this is not AI-specific. More generally, in-house implementations of software that is not your core business brings costs that are not limited to that of writing said implementation.

    • jerf 2 hours ago
      "why go to the shed"

      A good question but there's a good answer: Debugged and tested code.

      And by that, I mean the FULL spectrum of debugging and testing. Not just unit tests, not even just integration tests, but, is there a user that found this useful? At all? How many users? How many use cases? How hard has it been subjected to the blows of the real world?

      As AI makes some of the other issues less important, the ones that remain become more important. It is completely impossible to ask an LLM to produce a code base that has been used by millions of people for five years. Such things will still have value.

      The idea that the near-future is an AI powered wonderland of everyone getting custom bespoke code that does exactly what they want and everything is peachy is overlooking this problem. Even a (weakly) superhuman AI can't necessarily anticipate what the real world may do to a code base. Even if I can get an AI to make a bespoke photo editor, someone else's AI photo editor that has seen millions of person-years of usage is going to have advantages over my custom one that was just born.

      Of course not all code is like this. There is a lot of low-consequence, one-off code, with all the properties we're familiar with on that front, like, there are no security issues because only I will run this, bugs are of no consequence because it's only ever going to be run across this exact data set that never exposes them (e.g., the vast, vast array of bash scripts that will technically do something wrong with spaces in filenames but ran just fine because there weren't any). LLMs are great for that and unquestionably will get better.

      However there will still be great value in software that has been tested from top to bottom, for suitability, for solving the problem, not just raw basic unit tests but for surviving contact with the real world for millions/billions/trillions of hours. In fact the value of this may even go up in a world suddenly oversupplied with the little stuff. You can get a custom hammer but you can't get a custom hammer that has been tested in the fire of extensive real-world use, by definition.

    • ryandrake 55 minutes ago
      The more I experiment with quickly coding up little projects with LLMs the more I am convinced of this. There is that saying: 90% of your customers use 10% of your software's features, but they each use a different 10%. Well, the ability to quickly vibe up a small bespoke app that does that 10% AND NOTHING ELSE is here now, and it kind of solves that problem. We don't need to put up with DoEverythingBloatWare (even open source DoEverything) when you can just have the bits and pieces you actually want/need.

      Also, you don't have to fear breaking updates--you know for sure that the software's UI will not just change out from under you because some designer had to pad their portfolio, and that you're not going to lose a critical feature because the developer decided to refactor and leave it out.

      I'm currently going through and looking at some of the bigger, bloated, crashing slow-moving software I use and working on replacements.

    • FeloniousHam 43 minutes ago
      I can speak to this directly: I've customized a few extensions I use with VSCode, (nearly) completely having the AI generate/iterate over my feature request until it works. I don't have the time to learn the details (or different languages) of the various projects, but I get huge benefit from the improvements.

      - PRO Deployer

      - MS Typescript

      - Typescript-Go

      - a bespoke internal extension to automate a lot of housekeeping when developing against tickets (git checks, branch creation, stash when switching, automatically connecting and updating ticket system)

    • otikik 2 hours ago
      > when LLMs can build you a custom(!) hammer or saw in a few minutes, why go to the shed?

      Because I thought I needed a hammer for nails (employee payroll) but then I realized I also need it to screw (sales), solder (inventory management) and clean up (taxes).

      Oh and don't forget that next month the density of iron can drop by up to 50%.

      • freedomben 2 hours ago
        Screw sales! I've definitely felt that way more than a few times :-D

        Good points. It does feel like that happens quite often

    • wasmitnetzen 1 hour ago
      Because I will probably ask the AI for a rock instead of a bespoke hammer. If I even know what a nail is.

      I very much like to use the years of debugging and innovation others spent on that very same problem that I'm having.

    • rglover 1 hour ago
      Would you trust your hand next to a saw made by an LLM?
      • tracker1 5 minutes ago
        Maybe. Were the designs reviewed by qualified engineers and put through rigorous QA cycles before getting placed in front of me?
    • skybrian 2 hours ago
      Maybe true for some apps, but I suspect we will still have a vibrant ecosystem of package managers and open source libraries and coding agents will know how to use them.
      • marginalia_nu 2 hours ago
        What would be the point of that? If LLMs ever actually become competent, surely they can just implement what they need.
        • wongarsu 2 hours ago
          The same reason why they exist now. Why spend millions of tokens on designing, implementing and debugging something, followed by years of discovering edge cases in the real world, if I can just use a library that already did all of that

          Sure, leftpad and python-openai aren't hugely valuable in the age of LLMs, but redis and ffmpeg are still as useful as ever. Probably even more useful now that LLMs can actually know and use all their obscure features

    • pier25 2 hours ago
      I don't think apps where people spend a lot of time are equivalent to small tools. You can vibe code a calculator but you probably spend most of your time on much more complex software.
      • groundzeros2015 1 hour ago
        A calculator that uses doubles for everything I guess.
    • groundzeros2015 1 hour ago
      Because it can’t really do that for any tools that matter.
    • pjmlp 1 hour ago
      Exactly, think StarTrek replicator.
    • reactordev 2 hours ago
      Why need a tool at all when the LLM can just build the house? What is a hammer? What is a keyboard? What’s a “Drivers License”?
    • squigz 2 hours ago
      Because going to the shed to get a work-tested tool is still faster than waiting on an LLM and hoping it meets every use-case you're likely to run into with that tool.

      Whatever it is, the future will also certainly not be what it was a couple decades ago - that is, everyone inventing their own solution to solved problems, resulting in a mess of tools with no standardization. There is a reason libraries/frameworks/etc exist.

    • exe34 2 hours ago
      Along that line of thinking, I've been wondering if there are better building blocks. Right now we're asking LLMs to use the bricks designed for the human hand building a cathedral - what do the bricks look like when we want AI to build many sheds for specific uses? Functional programming? Would database-centric ideas of data storage, like the Longhorn vapourware, make a comeback?
    • HugoDz 1 hour ago
      [dead]
    • draxil 2 hours ago
      I think that's an optimistic interpretation of how good LLMs are?

      But I think the reality is: LLMs democratise access to coding. In a way this decreases the market for complete solutions, but massively increases the audience for building blocks.

      • ipaddr 2 hours ago
        That you get no credit for open sourcing. Why would creators spend time anymore?
      • blibble 53 minutes ago
        > LLMs democratise access to coding

        by making the world dependent on 3, fascist adjacent, US tech companies?

        • vel0city 14 minutes ago
          I didn't know Mistral, Z.ai, Qwen, and Deepseek were all fascist adjacent US tech companies.
      • croes 2 hours ago
        >LLMs democratise access to coding

        Vibe coders don't code, they let the LLM code. So LLMs democratise access to coders.

        • kibwen 2 hours ago
          Closed-source models aren't "democratizing" access to anything. If you wanted to hire a contractor to write some code for you, that's always been possible.
          • fragmede 1 hour ago
            Part of democracy is that it's available to all citizens, and not just the rich. Yes, it's always been possible to find someone, but not someone for $200/month who will work tirelessly whenever you want them to. 9:00 am Monday? Great. 7pm Tuesday? Also great. 4 am on Sunday? Just as great, for an LLM.
            • dns_snek 21 minutes ago
              How long will this heavily subsidized price of $200/month last? Do you really think these companies are going to let you pocket all the surplus value forever?

              We all know that the music is going to stop eventually and that the landscape after that is going to look very different. Subsidies will stop and investors will want their trillions in returns. Talking about "democratization" while everyone is just using other people's money is completely premature.

              Airbnb "democratized travel" for a while and now they're more expensive than their predecessors.

    • InMice 2 hours ago
    I like this take. "How things are today + LLM" is in some ways the best we can approximate, because one side is all we know and the other side is the future unfolding before our eyes. One of the coolest things about vibe coding, I find, is starting with a base like Django and then using vibe coding to build models and templates exactly how one wants for the UI/UX. Basically, maybe we still need humans for the guts and low-level stuff, but that provides a base for fast, easy, personalized customization.

    I had a job where, in short, we had a lot of pain points with software and no resources permitted to fix them. With a mix of past experience and googling, I started writing some internal web-based tools to fill these gaps. Everyone was happy. This is where I see vibe coding being really helpful: the higher-level stuff like scripting and web-based tools. Just my opinion based on my experience.

  • nicoburns 2 hours ago
    Something I've noticed is that AI code generation makes it easier/faster to generate code while shifting more of the work of keeping code correct and maintainable to the code review stage. That can be highly problematic for open source projects that are typically already bottlenecked by maintainer review bandwidth.

    It can be mitigated by PR submitters doing a review and edit pass prior to submitting a PR. But a lot of submitters don't currently do this, and in my experience the average quality of PRs generated by AI is definitely significantly lower than those not generated by AI.

    • pgroves 1 hour ago
      I was expecting this to be the point of the article when I saw the title. Popular projects appear to be drowning in PRs that are almost certainly AI generated. OpencodeCli has 1200 open at the moment [1]. Aider, which is sort of abandoned, has 200 [2]. AFAIK, both projects are mostly one maintainer.

      [1] https://github.com/anomalyco/opencode/pulls [2] https://github.com/Aider-AI/aider/pulls

    • trey-jones 2 hours ago
      As an old guy, I would rather have the LLM doing (assisting with) the code review than the actual code production. Is that stupid?
      • electroly 1 hour ago
        LLMs are great at reviewing. This is not stupid at all if it's what you want; you can still derive benefit from LLMs this way. I like to have them review at the design level where I write a spec document, and the LLM reviews and advises. I don't like having the LLM actually write the document, even though they are capable of it. I do like them writing the code, but I totally get it; it's no different than me and the spec documents.
        • trey-jones 29 minutes ago
          Right, I'd say this is the best value I've gotten out of it so far: I'm planning to build this thing in this way, does that seem like a good idea to you? Sometimes I get good feedback that something else would be better.
      • groundzeros2015 1 hour ago
        This makes sense to me.

        I need to make decisions about how things are implemented. Even if it can pick “a way” that’s not necessarily going to be a coherent design that I want.

        In contrast for review I already made the choices and now it’s just providing feedback. More information I can choose to follow or ignore.

      • Leynos 47 minutes ago
        Take a look at CodeRabbit and Sourcery if you want to give that a go.
    • echelon 2 hours ago
      The maintainers can now do all the work themselves.

      With the time they save using AI, they can get much more work done. So much that having other engineers learn the codebase is probably not worth it anymore.

      Large scale software systems can be maintained by one or two folks now.

      Edit: I'm not going to get rate limited replying to everyone, so I'll just link another comment:

      https://news.ycombinator.com/item?id=46765785

      • wooderson_iv 2 hours ago
        Do you have anecdotes or evidence of this or is it speculative?
      • j16sdiz 2 hours ago
      Those are the most mentally exhausting tasks. Are you sure putting this burden on a single person is good?
      • shafyy 2 hours ago
        Not sure if you're being sarcastic or not?
  • marginalia_nu 2 hours ago
    > When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity. Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid.

    I can't think of even a single example of OSS being monetized through direct user engagement. The bulk of it just isn't monetized at all, and what is monetized (beyond like a tip jar situation where you get some coffee money every once in a while) is primarily sponsored by enterprise users, support license sales, or through grants, or something like that. A few projects like Krita sell binaries on the steam store.

    • dfox 2 hours ago
      There is this kind of webdev-adjacent niche where the model of using documentation (or even intentionally sub-par documentation) as a marketing funnel for consulting and/or "Pro" versions is a thing. These projects are somewhat vocal about vibe coding killing their business models. Whether these projects really create any meaningful value is another question.
    • nprateem 41 minutes ago
      Terraform, ansible, countless others. No community=no enterprise version, no awareness
  • delegate 2 hours ago
    There's some irony in the fact that LLMs are in large part possible because of open source software.

    From the tools which were used to design and develop the models (programming languages, libraries) to the operating systems running them to the databases used for storing training data .. plus of course they were trained mostly on open source code.

    If OSS didn't exist, it's highly unlikely that LLMs would have been built.

    • blibble 1 hour ago
      > If OSS didn't exist, it's highly unlikely that LLMs would have been built.

      would anyone want SlopHub Copilot if it had been trained exclusively on Microsoft's code?

      (rhetorical question)

  • cheema33 2 hours ago
    I am a huge proponent of using AI tools for software development. But until I see a vibe coded replacement for the Linux kernel, PostgreSQL, gcc, git or Chromium, I am just going to disagree with this premise. If I am on a system without Python installed, I don't see Claude saying, oh, you don't need to download it, I'll write the Python interpreter for you.
    • Quarrel 2 hours ago
      > I am a huge proponent of using AI tools for software development. But until I see a vibe coded replacement for the Linux kernel, PostgreSQL, gcc, git or Chromium, I am just going to disagree with this premise.

      Did you read it?

      It isn't saying that LLMs will replace major open source software components. It said that the "reward" for providing, maintaining and helping curate these OSS pieces, which is the ecosystem they exist in, just disappears if there is no community around them, just an LLM ingesting open source code and spitting out a solution, good or bad.

      We've already seen curl buckle under the pressure, as their community-minded, good-conscience effort to respond to security reports collapsed under the weight of slop.

      This is largely about extending that thesis to the entire ecosystem. No GH issues, no PRs, no interaction. No kudos on HN, no stars on github, no "cheers mate" as you pass them at a conference after they give a great talk.

      Where did you get that you needed to see a Linux kernel developed from AI tools, before you think the article's authors have a point?

      • volkercraig 2 hours ago
        > This is largely about extending that thesis to the entire ecosystem. No GH issues, no PRs, no interaction. No kudos on HN, no stars on github, no "cheers mate" as you pass them at a conference after they give a great talk.

        Oh... so nothing's gonna change for me then...

  • sanskritical 17 minutes ago
    Open source software, by the admission of this article, is a critical input to AI agents useful for code generation. So the way I see it is that there is now an entire industry that is incentivized to financially support open source software for entirely new reasons. To keep the models trained on new languages, libraries, and developments in computer science, they need to make sure that high quality modern code is still freely available, forever.

    Vibe coding eventually creates more value for FOSS, not less.

    • gegtik 10 minutes ago
      Seems you are arguing that The Tragedy Of The Commons could never happen because people benefit from the Commons
    • reustle 11 minutes ago
      There is a reasonable argument that there won’t be many new languages now that models are sufficiently trained. If anything, we may see languages optimized for models and not humans.
  • tomaytotomato 2 hours ago
    I have been trying to use Claude code to help improve my opensource Java NLP location library.

    However, when I try to get it to do anything other than optimise code or fix small issues, it struggles. It struggles with high-level, abstract issues.

    For example I currently have an issue with ambiguity collisions e.g.

    Input: "California"

    Output: "California, Missouri"

    California is a state but also a city in Missouri - https://github.com/tomaytotomato/location4j/issues/44

    I asked Claude several times to resolve this ambiguity and it suggested various prioritisation strategies, etc.; however, the resulting changes broke other functionality in my library.

    In the end I am redesigning my library from scratch with minimal AI input. Why? Because I started the project without the help of AI a few years back. I designed it to solve a problem, but that problem and the nuanced programming decisions behind it don't seem to be respected by LLMs (LLMs don't care about the story, they just care about the current state of the code).
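
    As an aside, here is a minimal sketch of the kind of prioritisation strategy being discussed: prefer the higher administrative level when a bare name like "California" matches several places. The types and names below are hypothetical illustrations, not location4j's actual API:

        // Hypothetical sketch only; these types do not exist in location4j.
        import java.util.Comparator;
        import java.util.List;
        import java.util.Optional;

        record Candidate(String name, String kind, String parentName) {}

        class BareNameDisambiguator {
            // Rank matches by administrative level: country beats state beats city.
            private static int rank(String kind) {
                return switch (kind) {
                    case "COUNTRY" -> 0;
                    case "STATE" -> 1;
                    case "CITY" -> 2;
                    default -> 3;
                };
            }

            // "California" -> the state, not "California, Missouri".
            static Optional<Candidate> resolve(List<Candidate> matches) {
                return matches.stream()
                        .min(Comparator.comparingInt((Candidate c) -> rank(c.kind())));
            }
        }

    The catch, as described above, is that any blanket rule like this tends to break lookups where the lower-level match really was the intended one.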

    • Cthulhu_ 2 hours ago
      > I started the project in my brain and it has many flaws and nuances which I think LLMs are struggling to respect.

      The project, or your brain? I think this is what a lot of LLM coders run into - they have a lot of intrinsic knowledge that is difficult or takes a lot of time and effort to put into words and describe. Vibes, if you will, like "I can't explain it but this code looks wrong"

      • tomaytotomato 2 hours ago
        I updated my original comment to explain my reasoning a bit more clearly.

        Essentially I ask an LLM to look at a project and it just sees the current state of the codebase; it doesn't see the iterations and hacks and refactors and reverts.

        It also doesn't see the first functionality I wrote for it at v1.

        This could indeed be solved by giving the LLM a git log and telling it a story, but that might not solve my issue?

        • michaelbuckbee 2 hours ago
          I'm now letting Claude Code write commits + PRs (for my solo dev stuff), and the benefits have been pretty immense: it's basically Claude keeping a history of its work that can then be referenced at any time, and that history also lives outside the code context window.

          FWIW - it works a lot better to have it interact via the CLI than the MCP.

        • alright2565 2 hours ago
          I personally don't have any trouble with that. Using Sonnet 3.7 in Claude Code, I just ask it to spelunk the git history for a certain segment of the code if I think it will be meaningful for its task.
          • gibspaulding 2 hours ago
            Out of curiosity, why 3.7 Sonnet? I see lots of people saying to always use the latest and greatest 4.5 Opus. Do you find that it's good enough that the increased token cost of larger/more recent models isn't worth it? Or is there more to it?
            • alright2565 25 minutes ago
              I misremembered :(

              4.5 Sonnet, but because I've been stuck on 3.7 Sonnet for so long due to corporate policy I wrote the wrong thing.

              And yeah corporate policy. Opus is not available. I prefer Codex for my personal coding but I have not needed to look in the Git history here yet.

            • azuanrb 2 hours ago
              Opus is pretty overkill sometimes. I use Sonnet by default. Haiku if I have clearer picture of what I'm trying to solve. Opus only when I notice any of the models struggle. All 4.5 though. Not sure why 3.7. Curious about that too.
            • neko-kai 2 hours ago
              I suspect they use the LLM for help with text editing, rather than give it standalone tasks. For that purpose a model with 'thinking' would just get in the way.
            • fragmede 1 hour ago
              speed > thinking longer for smaller tasks.
      • cpursley 2 hours ago
        Yes, a lot of coders are terrible at documentation (both doc files and code docs) as well as at good test coverage. Software should not need to live in one's head after it's written; it should be well architected and self-documenting - and when it is, both humans and LLMs navigate it pretty well (when augmented with good context management, helper MCPs, etc).
      • nevi-me 2 hours ago
        I've been a skeptic, but now that I'm getting into using LLMs, I'm finding being very descriptive and laying down my thoughts, preferences, assumptions, etc, to help greatly.

        I suppose a year ago we were talking about prompt engineers, so it's partly about being good at describing problems.

        • faxmeyourcode 2 hours ago
          One trick to get out of this scenario where you're writing a ton is to ask the model to interview you until you're in alignment on what is being built. Claude and opencode both have an AskUserQuestionTool which is really nice for this and cuts down on explanation a lot. It becomes an iterative interview and clarifies my thinking significantly.
    • epolanski 1 hour ago
      One major part of successful LLM-assisted coding is to not focus on code vomiting but scaffolding.

      Document, document, document: your architecture, best practices, preferences (both about the code and about how you want to work with the LLM and how you expect it to behave).

      It is time consuming, but it's the only way you can get it to assist you semi-successfully.

      Also try to understand that LLM's biggest power for a developer is not in authoring code as much as assistance into understanding it, connecting dots across features, etc.

      If your expectation is to launch it in a project and tell it "do X, do Y" without the very much needed scaffolding you'll very quickly start losing the plot and increasing the mess. Sure, it may complete tasks here and there, but at the price of increasing complexity from which it is difficult for both you and it to dig out.

      Most AI naysayers can't be bothered with the huge amount of work required to setup a project to be llm-friendly, they fail, and blame the tool.

      Even after the scaffolding, the best thing to do, at least for the projects you care about (essentially anything that's not a prototype for quickly validating an idea), is to keep reading and following it line by line, and keep updating your scaffolding and documentation as you see it commit the same mistakes over and over. Part of the scaffolding also means including the source code of your main dependencies: I have a _vendor directory with git subtrees for the major dependencies, so LLMs can check the dependencies' code and tests and figure out what they are doing wrong much quicker.

      Last but not least, LLMs work better with certain patterns, such as TDD. So instead of "implement X", it's better to say "I need to implement X, but before we do so, let's set up a way to test and track our progress against it". You can build an inspector for a virtual machine, you can set up e2es or other tests, or just dump line-by-line logs to some file. There are many approaches depending on the use case.

      In any case, getting real help from LLMs for authoring code (editing, patching, writing new features) is highly dependent on having good context, a good setup (tests, making it write a plan for business requirements and one for implementation) and following and improving all these aspects as you progress.

      • tomaytotomato 1 hour ago
        I agree to an extent

        My project is quite well documented and I created a Prompt a while back along with some mermaid diagrams

        https://github.com/tomaytotomato/location4j/tree/master/docs

        I can't remember the exact prompt I gave to the LLM but I gave it a Github issue ticket and description.

        After several iterations it fixed the issue, but my unit tests failed in other areas. I decided to abort it because I think my opinionated code was clashing with the LLM's solution.

        The LLM's solution would probably be more technically correct, but because I don't do l33tcode or memorise how to implement a Trie or BST, my code does it my way. Maybe I just need to force the LLM to do it my way and ignore the other solutions?

    • skybrian 2 hours ago
      I find that asking it to write a design doc first and reviewing that (both you and the bot can do reviews) gets better results.
    • softwaredoug 1 hour ago
      Sounds a lot like model training, and I've treated this sort of programming with AI exactly like that, importantly making sure I have a test/train split.

      Make sure there’s a holdout the agent can’t see that it’s measured against. (And make sure it doesn’t cheat)

      https://softwaredoug.com/blog/2026/01/17/ai-coding-needs-tes...

    • krona 1 hour ago
      If Claude read the entire commit history, wouldn't that allow it to make choices less incongruent with the direction of the project and general way of things?
    • faxmeyourcode 2 hours ago
      > LLMs dont care about the story, they just care about the current state of the code

      You have to tell it about the backstory. It does not know unless you write about it somewhere and give it as input to the model.

      • krona 1 hour ago
        The commit history of that repo is pretty detailed at first glance.
    • px43 2 hours ago
      > it struggles

      It does not struggle, you struggle. It is a tool you are using, and it is doing exactly what you're telling it to do. Tools take time to learn, and that's fine. Blaming the tools is counterproductive.

      If the code is well documented, at a high level and with inline comments, and if your instructions are clear, it'll figure it out. If it makes a mistake, it's up to you to figure out where the communication broke down and figure out how to communicate more clearly and consistently.

      • smrq 1 hour ago
        "My Toyota Corolla struggles to drive up icy hills." "It doesn't struggle, you struggle." ???

        It's fine to critique your own tools and their strengths and weaknesses. Claiming that any and all failures of AI are an operator skill issue is counterproductive.

      • whateveracct 2 hours ago
        This sounds like coding with plaintext with extra steps.
      • zeroCalories 2 hours ago
        Not all tools are right for all jobs. My spoon struggles to perform open heart surgery.
        • rtp4me 2 hours ago
          But as a heart surgeon, why would you ever consider using a spoon for the job? AI/LLMs are just a tool. Your professional experience should tell you if it is the right tool. This is where industry experience comes in.
  • Sevii 1 hour ago
    Vibecoding is great for open source. Open source is already dominated by strong solo programmers like antirez, Linus, etc. People with very strong motivations to create software they see as necessary. Vibecoding makes creating open source projects easier. It makes it easier to get from an idea to "Hey guys check this out!" The only downside for open source is the fly-by PRs vibecoding enables, which are currently draining maintainer time.
  • antirez 2 hours ago
    I believe we will see a new huge wave of useful open source software. However, don't expect the development model to stay the same. I was finally able to resurrect a few projects of mine, and many more will come. One incredible thing was the ability to easily merge what was worth merging from forks, for instance. The new OSS will be driven not so much by the amount of code you can produce as by the idea of the software you have: how the software should look, how it should behave, what it should do to be useful. Today design is more important than coding.
    • m000 2 hours ago
      The real question is how much of the new wave of vibe-coded software will be able to graduate from pet project to community-maintained project.

      It feels that vibe coding may exacerbate fragmentation (10 different vibe-coded packages for the same thing) and abandonment (made it in a weekend and left it to rot) for open source software.

      • antirez 1 hour ago
        I believe the process of accumulation of knowledge / fixes / interesting ideas will still be valid, so there will be tons of small projects doing things that you can replicate and throw away, but the foundational libraries / tools will still be collaborative. But I don't agree with the idea of fragmentation; AI is very good at merging stuff from different branches, even when they diverged significantly.
    • koakuma-chan 2 hours ago
      I don't trust software that has .claude in its GitHub repo.
      • echelon 2 hours ago
        You won't have to ignore this stuff for long. Pretty soon it'll be mandatory to keep up.

        I've been a senior engineer doing large scale active-active, five nines distributed systems that process billions of dollars of transactions daily. These are well thought out systems with 20+ folks on design document reviews.

        Not all of the work falls into that category, though. There's so much plumbing and maintenance and wiring of new features and requirements.

        On that stuff, I'm getting ten times the amount of work done with AI than I was before. I could replace the juniors on my team with just myself if I needed to and still get all of our combined work done.

        Engineers using AI are going to replace anyone not using AI.

        In fact, now is the time to start a startup and "fire" all of these incumbent SaaS companies. You can make reasonable progress quickly and duplicate much of what many companies do without much effort.

        If you haven't tried this stuff, you need to. I'm not kidding. You will easily 10x your productivity.

        I'm not saying don't review your own code. Please do.

        But Claude emits reasonable Rust and Java and C++. It's not just for JavaScript toys anymore.

        - - - - - - - - - - - -

        Edit:

        Holy hell HN, downvoted to -4 in record time. Y'all don't like what's happening, but it's really happening.

        I'm not lying about this.

        I provided my background so you'd understand the context of my claims. I have a solid background in tech.

        The same thing that happened to illustration and art is happening here, to us and to our career. And these models are quite usable for production code.

        I can point Claude to a Rust HTTP handler and say, "using this example [file path], write a new endpoint that handles video file uploads, extracts the metadata, creates a thumbnail, uploads them to the cloud storage, and creates the relevant database records."

        And it does it in a minute.

        I review the code. It's as if I had written it. Maybe a change here or there.

        Real production Rust code, 100 - 500 LOC, one shotted in one minute. It even installs the routes and understands the HTTP framework DSL. It even codegens Swagger API documentation and somehow understands the proc macro DSL that takes Rust five minutes to compile.

        This tech is wizardry. It's the sci fi stuff we dreamed of as kids.

        I don't get the sour opinions. The only thing to fear is big tech monopolization.

        I suppose the other thing to worry about is what's going to happen to our cushy $400k salaries. But if you make yourself useful, I think it'll work out just fine.

        Perhaps more than fine if you're able to leverage this to get ahead and fire your employer. You might not need your employer anymore. If you can do sales and wear many hats, you'll do exceedingly well.

        I'm not saying non-engineers will be able to do this. I'm saying engineers are well positioned to leverage this.

        • koakuma-chan 2 hours ago
          I'm not saying that you shouldn't use AI.

          There was a submission to a blog post discussing applications of AI but it got killed for some reason.

          https://news.ycombinator.com/item?id=46750927

          I remain convinced that if you use AI to write code then your product will sooner or later turn into a buggy mess. I think this will remain the case until they figure out how to make a proper memory system. Until then, we still have to use our brains as the memory system.

          One strategy I've seen that I like is using AI to prototype, but then write actual code yourself. This is what the Ghostty guy does I believe.

          I agree that AI can write decent Rust code, but Rust is not a panacea. From what I heard, Cursor has a lot of vibe-coded Rust code, but it didn't save it from being, as I said, a buggy mess.

          • bugglebeetle 2 hours ago
            > I remain convinced that if you write code then your product will sooner or later turn into a buggy mess.

            FYFY

            • volkercraig 2 hours ago
              Yeah the level of depraved code I've had contractors ask me to review... I don't think people realize how low the bar is.
        • joks 1 hour ago
          > "The same thing that happened to illustration and art is happening here"

          What are you talking about? Illustrators and artists are not being replaced by AI or required to use AI to "keep up" in the vast majority of environments.

          > "I don't get the sour opinions."

          The reasoning for folks' "sour opinions" has been very well-documented, especially here on HN. This comment reads like people don't like AI because they think it's slow or something, which is not the case.

          • echelon 43 minutes ago
            > What are you talking about? Illustrators and artists are not being replaced by AI or required to use AI to "keep up" in the vast majority of environments.

            I don't know what jobs have been impacted yet, but there will likely be pressure for all content creators and knowledge workers to use the tools to get more work done.

            We'll probably start seeing this in software development this year. The tools finally feel ready for prime time.

            > This comment reads like people don't like AI because they think it's slow or something, which is not the case.

            I am familiar with the most common arguments in opposition - stealing training data, hallucinations, not understanding logic (this is why "engineers in the loop" matters), big corps owning the tech (I really agree with this one), power usage, etc.

            It feels as though the downvotes are from people that "dislike AI" for any of the aforementioned reasons. In the face of the possibility of losing jobs to engineers that leverage AI to get more quality work done, however, I don't know why HN engineers downvote anecdotes about real world usage. This is vital to know and understand. I would think one would want more evidence to consider about the state of things.

            This is a quickly developing story. Your jobs are or will be on the line.

            It doesn't matter what your personal misgivings are if your job will soon require the use of AI. You can hate it all you want, but if people are getting 10x more work done than you, you really don't have a choice.

            This will be the same in every career sector with AI models that can be deployed to automate work -- marketing, editing, film, animation, VFX, software, music production, 3D modeling, game design, etc.

            I don't think the jobs are going away, but I do think they're going to change. Fast.

            No sense in sour grapes.

        • nicoburns 1 hour ago
          > Holy hell HN, downvoted to -4 in record time. Y'all don't like what's happening, but it's really happening.
          >
          > I'm not lying about this.
          >
          > I provided my background so you'd understand the context of my claims. I have a solid background in tech.

          There are lots of people claiming this. Many of whom have a solid background. Every now and then I check out someone's claim (checking the code they've generated). I've yet to find an AI-generated codebase that passed that check so far.

          Perhaps yours is the one that does, but as we can't see the code for ourselves, there's no way for us to really know. And it's hard to take your word for it when there are so many people falsely making the same claims.

          I expect a lot of HNers have had this experience.

        • koakuma-chan 1 hour ago
          > Holy hell HN, downvoted to -4 in record time. Y'all don't like what's happening, but it's really happening.

          I gave you an upvote FWIW, after all, I mean, my job's codebase is already a buggy mess, so it doesn't hurt to throw AI on it, which is what I do.

          > You might not need your employer anymore. If you can do sales and wear many hats, you'll do exceedingly well.

          Wasn't this the case before AI as well?

        • blibble 2 hours ago
          > I've been

          so not now, then?

          • jen20 1 hour ago
            “I’ve been alive for fifty years” does not imply one is dead.
          • GuinansEyebrows 35 minutes ago
            "i used to do drugs. i still do, but i used to, too."
    • avaer 2 hours ago
      We need a new git. (could be built on the current git)

      > One incredible thing was the ability to easily merge what was worth merging from forks, for instance

      I agree, this is amazing, and really reduces the wasted effort. But it only works if you know what exists and where.

      • mg74 2 hours ago
        More we need a new GitHub.
        • avaer 2 hours ago
          Also this.

          But IMO the primitives we need are also fundamentally different with AI coding.

          Commits kind of don't matter anymore. Maybe PR's don't matter either, except as labels. But CI/hard proof that the code works as advertised is gold, and this is something git doesn't store by default.

          Additionally, as most software moves to being built by agents, the "real" git history you want is the chat history with your agent, and its CoT. If you can keep that and your CI runs, you could even throw away your `git` and probably still have a functionally better AI coding system.

          If we get a new Github for AI coding I hope it's a bit of a departure from current git workflows. But git is definitely extensible enough that you could build this on git (which is what I think will ultimately happen).

        • pietro72ohboy 2 hours ago
          May I recommend SourceHut (https://sr.ht/)
        • the__alchemist 2 hours ago
          There are a pile of alternatives which have similar UIs.
      • wasmainiac 2 hours ago
        That sounds a little extreme, why not just a new auto-merge feature?
      • forgotpwd16 2 hours ago
        Jujutsu?
  • dev_l1x_be 32 minutes ago
    I think it is not killing open source. It is changing it. There are more small, scoped projects created for specific purposes instead of one huge project that has a gazillion features supporting everything. At least this is my experience.
  • ozten 56 minutes ago
    Generative AI is a major setback to OSS licensing. I've been on projects where we needed to do a "cleanroom" implementation and vet that the team has never viewed the source code of competing products. Now in the gen AI era, coding agents are IP laundering machines. They are trained on OSS code, but the nuances of the original licenses are lost.

    On the whole, I think it is a net gain for civilization, but if we zoom into OSS licensing... not good.

    • kode-targz 49 minutes ago
      It could be a net gain for civilization if it stayed open, decentralized and out of the hands of private companies, but that's not at all the case. Only techies care or even know about open models.
  • pmarreck 1 hour ago
    Related but not sure how much attention it's getting:

    GPL is a dead man walking, since you can have any LLM cleanroom a new implementation in a new language from a public spec, with a verifiable "never looked at the original source", and license it as permissively as you wish (MIT, BSD, etc).

    case in point, check out my current deps on the project I'm currently working on with LLM assist: https://github.com/pmarreck/validate/tree/yolo/deps

    "validate" is a project that currently validates over 100 file formats at the byte level; its goal is to validate as many formats as possible, for posterity/all time.

    Why did I avoid GPL (which I am normally a fan of) since this is open-source? I have an even-higher-level project I'm working on, implementing automatic light parity protection (which can proactively repair data without a RAID/ZFS setup) which I want to make for sale, whose code will (initially) be private, and which uses this as a dependency (no sense in protecting data that is already corrupted).

    Figured I'd give this to the world for free in the meantime. It's already found a bunch of actually-corrupt files in my collection (note that there's still some false-positive risk; I literally released this just yesterday and it's still actively being worked on) including some cherished photos from a Japan trip I took a few years ago that cannot be replaced.

    It has Mac, Windows and Linux builds. Check the github actions page.
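
    To give a sense of what validating "at the byte level" involves, here is a minimal sketch of a magic-bytes check for a single format (PNG). This is a hypothetical illustration, not code from the validate repo:

        // Hypothetical sketch only; not taken from the validate project.
        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Arrays;

        class PngSignatureCheck {
            // The fixed 8-byte signature every valid PNG file starts with.
            private static final byte[] PNG_MAGIC = {
                (byte) 0x89, 'P', 'N', 'G', '\r', '\n', 0x1A, '\n'
            };

            static boolean looksLikePng(Path file) throws IOException {
                try (InputStream in = Files.newInputStream(file)) {
                    byte[] header = in.readNBytes(PNG_MAGIC.length);
                    return Arrays.equals(header, PNG_MAGIC);
                }
            }
        }

    A real validator has to go much deeper than the signature (chunk structure, checksums, trailing data, and so on) to catch corruption in the middle of a file, but the magic-bytes check is the usual first gate for many formats.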

    • dahauns 2 minutes ago
      >verifiable "never looked at the original source"

      ...erm.

      To adress the elephant in the room: Who exactly is supposed to be verifiable to never have looked at the original source? You or the LLM?

    • digiown 22 minutes ago
      > which means full reads and scrubs touch more bits and inevitably brush against those error rates

      Does this make sense at all? ZFS scrubs only read the data you have, not the whole drive, and repair data if possible. The more data you have, the more you have to validate regardless of the tools used. The BER is also just a terrible metric and is not reflective of how drives actually behave.

    • natebc 1 hour ago
      Did something change? Is LLM generated stuff now able to be protected with copyrights?

      I was under the impression that copyright was only available for works created by people.

      • pmarreck 18 minutes ago
        It is created by people using a tool.

        Here's the test for that: If something goes wrong with the code, who is still responsible? If it's the person, then you're wrong.

  • mellosouls 5 minutes ago
    "Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid"

    A recent discussion on a related topic, apparently following the same misguided idea of how OSS is motivated:

    https://news.ycombinator.com/item?id=46565281

    (All my new code will be closed-source from now on: 93 points, 112 comments)

  • tracker1 21 minutes ago
    I reject the assertion that AI vibe coding has to have any effect on how OSS maintainers get paid (or not). Most of those who are paid are paid because their job is related, but the OSS library/project itself is not directly monetized. I don't see AI/vibe coding changing this... except that now the maintainer can choose whether or not to accept or use those tools on their project.

    But the assertion that everything needs to change is absurd. Articles like this are similar in my mind to arguments for communism on the grounds that every artist deserves a living wage... that's just not how society can sustain itself in reality. Maybe in a world without scarcity, but I don't see scarcity going away any time soon.

  • Olshansky 1 hour ago
    Yes and no.

    ---

    Concrete example of a no: I set up [1] in such a way that anyone can implement a new blog -> rss feed; docs, agents.md, open-source, free, etc...

    Concrete example of a yes: Company spends too much money on simple software.

    --- Our Vision ---

    I feel the need to share: https://grove.city/

    Human Flywheel: Human tips creator <-> Creator engages with audience

    Agent Flywheel: Human creates creative content <-> Agent tips human

    Yes, it uses crypto, but it's just stablecoins.

    This is going to exist in some fashion and all online content creation (OSS and other) will need it.

    ---

    As with everything, it's obvious.

    [1] https://github.com/Olshansk/rss-feeds

  • barelysapient 1 hour ago
    I think LLMs will also kill off most programming languages. I think we'll end up with a handful of languages that LLMs are most proficient at writing, plus the languages required for device or processor compatibility.

    The cost reduction from having an LLM emit a feature (with an engineer in the loop) is too large to pass up. We'll look at engineers coding in C the same way we look at engineers today who code in assembly. LLM-enabled development becomes the new abstraction, probably with a grammar and a system for stronger specification formalization.

    • delaminator 1 hour ago
      I've done a bit of experimentation with that. And the irony is that Rust seems to fare best, because of the compiler's error messages.

      I have had a lot of conversations with Claude about it and it supports that theory.

      • 13rac1 42 minutes ago
        Warning: Claude may support the theory, because Claude is a sycophant.
        • delaminator 0 minutes ago
          I have a well-calibrated ego.
  • program_whiz 1 hour ago
    All this talk about how you can vibe-code all your apps now ("why use OSS?") is making me laugh. Sure, for a little website or a small tool, maybe even a change to your somewhat complex codebase that you thoroughly check and test.

    Is anyone replacing firefox, chromium, postgres, nginx, git, linux, etc? It would be idiotic to trade git for a vibe coded source control. I can't even imagine the motivations, maybe "merges the way I like it"?

    Not sure, but anyone who's saying this stuff hasn't taken even a basic first-level glance at what it would entail. By all means, stop paying $10 a month to "JSON validator SaaS", but also don't complain about the little niggling bugs, maintenance, and organization that come with it. But please stop pretending you can just vibe-code your own Kafka, Apache, Vulkan, or Postgres.

    Yes, you can probably go faster (possibly not in the right direction if inexperienced), but ultimately something like that would still require a very senior, experienced person using the tool in a very guided way with heavy review. But why take on the maintenance, the bug hunting, and everything else, unless that is your main business objective?

    Even if you can 10x, if you use that to just take on 10x more maintenance, you haven't increased velocity. To really go faster, that 10x must be focused on the right objective -- distinctive business value. If you use that 10x to generate hundreds of small tools you now have to juggle and maintain, that have no docs or support, no searchable history of problems solved, you may have returned yourself to 1x (or worse).

    This is the old "we'll write our own in-house programming language", but leaking out to apps. Sure, Java doesn't work _exactly_ the way you want it to; you probably have complaints. But writing your own language will be a huge hit to whatever it was you actually wanted to use the language for, and you lose all the docs, forums, LSP / debugging tools, ecosystem, etc.

  • Sharlin 2 hours ago
    No problem! Just give the agents the ability to autonomously report issues, submit patches, and engage with library authors. Surely nothing can go wrong.
  • j4coh 2 hours ago
    I am not sure if it kills open source, but it probably kills open core. You can just take a project like GitLab and ask an LLM, conveniently trained on the GitLab enterprise edition source code, to generate you a fresh copy of whatever features of EE you care about, but with the license laundered.
  • bluejay2387 2 hours ago
    Does it seem to anyone else that the authors have created a definition of 'vibe coding' that is specifically designed to justify their paper? Also, their premise is based on the assumption that developers will be irresponsible in their use of these tools ("often without users directly reading documentation, reporting bugs, or otherwise engaging with maintainers"), so it would actually be people killing open source, not 'vibe coding'. Just a guess on my part, but once developers learn to use these tools and we get over the newness, I think this will be great for open source. With these tools, open source projects can compete with an army of corporate developers while alleviating some of the pressure on overworked, under-rewarded maintainers.
    • korenmiklos 1 hour ago
      Author here. We have the productivity-increasing effect of AI coding agents in the model (you're right, we're using "vibe coding" as a catch-all here). Our claim is that rewards to OSS developers (visibility and recognition in the dev community, future sponsorships, upsells to business users, etc.) fall faster than productivity increases. OSS devs lose the incentives to create software for others and respond to their needs.
  • donatj 1 hour ago
    As the maintainer of a handful of small projects, what I have seen, for better or worse, is tickets and pull requests completely drying up.

    My guess is that instead of Googling "library that does X", people are asking AI to solve the problem and it's regurgitating a solution in place? That's my theory anyway.

  • alentred 2 hours ago
    Not an answer to all of our problems, but I wonder if we will see wider adoption of more complex contribution models, like the "Lieutenants Workflow" Linux was known for. Many possible workflows are explored in the Git Book [1].

    [1] https://git-scm.com/book/en/v2/Distributed-Git-Distributed-W...

  • contravariant 2 hours ago
    I'm never quite sure what to think of papers that have a conclusion and then build a mathematical model to support it.
    • tonyedgecombe 2 hours ago
      Science starts with hypotheses and predictions.
      • pixl97 1 hour ago
        Yes, but it's easy, and incorrect, to start with an answer and build your work backwards rather than proving your hypothesis with evidence.
  • verdverm 2 hours ago
    This study seems flawed in its assumptions, right from the start.

    "Most" maintainers make exactly zero dollars. Further, OSS monetization rarely involves developer engagement; it's been all about enterprise feature gating.

  • linuxftw 2 hours ago
    Is arxiv.org the new medium.com now? Seems like there has been a plethora of blog-level submissions from there to HN recently.
  • sailfast 2 hours ago
    It _might_ kill open source. It might lower revenue opportunities according to this abstract. Bit of a click-bait paper title.
  • rtp4me 2 hours ago
    I wonder how many OSS projects are using AI to actively squash bugs so their projects are more rock-solid than before. Also, seems to me if your project underwent a full AI standardized code-quality check (using 2 or 3 AI models), it would be considered the "standard" from which other projects could use. For example, if you needed a particular piece of code for your own project, the AI tooling could suggest leveraging an existing gold-standard project.
  • lukan 2 hours ago
    "Vibe coding raises productivity by lowering the cost of using and building on existing code, but it also weakens the user engagement through which many maintainers earn returns."

    I think the title is clickbait.

    The conclusion is:

    "Vibe coding represents a fundamental shift in how software is produced and consumed. The productivity gains are real and large. But so is the threat to the open source ecosystem that underpins modern software infrastructure. The model shows that these gains and threats are not independent: the same technology that lowers costs also erodes the engagement that sustains voluntary contribution."

    The danger I see is rather in projects drowning in LLM-slop PRs, not in less engagement.

    And I see benefits of LLMs to open source in lowering the cost to revive and maintain (abandoned) projects.

    • earino 2 hours ago
      Two of the authors are engaging on bluesky regarding the "clickbaityness" of the paper:

      https://bsky.app/profile/gaborbekes.bsky.social/post/3md4rga...

      (Note, I receive a thanks in the paper.)

      • korenmiklos 1 hour ago
        author here. indeed, a more precise title could be

        > given everything we know about OSS incentives from prior studies and how easy it is to load an OSS library with your AI agent, the demand-reducing effect of vibe coding is larger than the productivity-increasing effect

        but that would be a mouthful

    • wolfi1 2 hours ago
      do you have experience in reviving an abandoned project? which way did you go? what would be a sensible approach?
      • lukan 2 hours ago
        I am currently in the process of finding out.

        LLMs did help with quickly researching dependencies unknown to me and investigating build errors, but ideally I want to set it up in a way that the agent can work on its own: change -> try to build -> test. Once that works half-automated, I'll call it a success.
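
        Roughly the loop I have in mind, as a rough sketch with placeholder commands; ask_agent() stands in for whatever coding agent is driving, it is not a real API:

          # Sketch of a half-automated revive loop: rebuild, run the tests, and hand any
          # failure output back to a coding agent. The make targets and ask_agent() are
          # placeholders for whatever the project and agent actually use.
          import subprocess

          def run(cmd: list[str]) -> tuple[bool, str]:
              """Run a command; return (succeeded, combined stdout/stderr)."""
              p = subprocess.run(cmd, capture_output=True, text=True)
              return p.returncode == 0, p.stdout + p.stderr

          def ask_agent(prompt: str) -> None:
              # Placeholder: in practice, a call to whichever coding agent you use.
              print("Would send to agent:\n", prompt[:500])

          def revive_loop(max_rounds: int = 5) -> bool:
              for _ in range(max_rounds):
                  ok, log = run(["make"])              # build step (placeholder command)
                  if ok:
                      ok, log = run(["make", "test"])  # test step (placeholder command)
                  if ok:
                      return True                      # builds and tests pass; review by hand
                  ask_agent("The build/tests failed with:\n" + log + "\nPropose a fix.")
              return False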

    • jorvi 2 hours ago
      > The productivity gains are real and large

      This is also just untrue. There is a study showing that the productivity gain is -20%, while developers (and especially managers) just assume it is +25%. And when they are told about this, they still feel they are +20% faster. It's the dev equivalent of mounting a cool-looking spoiler on your car.

      There are productivity gains, but they're in the fuzzy tasks like generating documentation and breaking a project up into bite-sized tasks. Or finding the right regex or combination of command-line flags, though that last one I would triple-verify if it was anything difficult to reverse.

    • positron26 2 hours ago
      What even is "engagement" here? Seems like abstract harm that rationalizes whatever emotion the reader already feels.
  • BoredPositron 2 hours ago
    I have written so many small scripts and apps that do exactly what I want. The general-purpose OSS projects are always a compromise. I believe that if LLMs mature for some more years, we will see a decline in these general-purpose projects and a rise in personal apps. I don't think it's something to worry about.
  • ktallett 2 hours ago
    I think vibe coding would greatly struggle with large open source projects unless your planning was exceptional and your guidance on optimal coding style was exceptional. However: For those small open source tools that many of us use daily and find invaluable, I actually think vibe coding is ideal for that. It can make a functional version quickly, you can iterate and improve it, and you feel no loss in making it free to use.

    I was very sceptical, but I will admit I think vibe coding has a place in society; just what that place is remains to be determined. It can't help in most cases, for sure, but it can help some people in some situations.

    • Cthulhu_ 2 hours ago
      > For those small open source tools that many of us use daily and find invaluable, I actually think vibe coding is ideal for that.

      If they don't exist, AND the author is committed to maintaining them instead of just putting them online, sure. But one issue I see is that a lot of these tools you describe already exist, so creating another one (using code-assist tools or otherwise) just adds noise IMO.

      The better choice is to research and plan (as you say in your first sentence) before committing resources. The barrier to "NIH" is lowered by code assistants, which risks reducing collaboration in open source land in favor of "I'll just write my own".

      Granted, "I'll write my own" has always felt like it has a lower barrier to entry than "I'm going to search for this tool and learn to use it".

    • data-ottawa 2 hours ago
      There are three or four projects I've always wanted to do, but they were front-loaded with a lot of complexity and drudgery.

      Maybe the best feature of vibe coding is that it makes the regret factor of poor early choices much lower. It's kind of magic to go "you know what, I was wrong, let's try this approach instead" and not have to spend huge amounts of time fixing things or rewriting 80% of the project.

      It's made it a lot more fun to try building big projects on my own, where I would previously go into decision paralysis or prematurely optimize and never get to the meat or the learning of the core project.

      It's also been nice to have agents review my projects for major issues, so I feel more confident sharing them.

      • fc417fc802 2 hours ago
        > go into decision paralysis or prematurely optimize

        Setting out to implement a feature only to immediately get bogged down in details that I could probably get away with glossing over. LLMs short circuit that by just spitting something out immediately. Of course it's of questionable quality, but once you get something working you can always come back and improve it.

    • hayd 2 hours ago
      I think one of the things that will need to be embraced is carefully curating .md context files to give contributors' prompts better, shared direction. Things like: any new feature or fix should include a test case (in the right place), functions should reuse existing library code wherever possible, function signatures should never change in a backwards-incompatible way, any code changes should pass the linter, etc. And ideally ensure the agent writes code that's going to be to the maintainer's "taste".

      I haven't worked out how to do this for my own projects.

      Once you've set it up it's not too hard to imagine an AI giving an initial PR assessment... to discard the worst AI slop, offer some stylistic feedback, or suggest performance concerns.

    • cess11 2 hours ago
      The authors try to study the effect of people not engaging directly with OSS projects because they substitute for this with a gang of chatbots, and draw the conclusion that this lack of contact with actual people means they'll be less likely to help finance OSS development.
  • p0nce 2 hours ago
    It is effective, but once the cost of creating something comes down, then you have less reason to collaborate and depend on each other vs asking your own LLM to build your own bubble. When paired with new-found cognitive laziness and a lack of motivation when you then use no AI, the second-order effects are uncertain.
    • pixl97 1 hour ago
      >then you have less reason to collaborate and depend on each other vs asking your own LLM to build your own bubble

      What's interesting about comments like this is seeing the same type of message across a bunch of different fields and aspects of life.

      "When continents move, not only the weather changes"

      If GenAI keeps increasing its abilities and doesn't bankrupt a number of companies first, I think it's going to give a lot of people bubbles that encompass their entire lives. It's not difficult to imagine little pockets of hyperreality where some people's lives are fed only by generated content and their existence starts to behave more like a video game than anything grounded in the physical. It's going to be interesting to see what the fractured remains of society look like in that future.

  • OrvalWintermute 2 hours ago
    Maybe we just need license-aware vibe coding that can link back to code-snippet provenance, similar to SBOMs?
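
    Something like a per-snippet attestation, perhaps. A purely hypothetical shape, not an existing standard:

      # Purely hypothetical sketch of what a per-snippet provenance record might look
      # like if coding agents emitted SBOM-style attestations alongside generated code.
      # None of these field names come from an existing standard.
      from dataclasses import dataclass, asdict
      import json

      @dataclass
      class SnippetProvenance:
          file: str                # path of the generated file in your project
          lines: tuple[int, int]   # line range the record covers
          source_repo: str         # upstream project the agent drew on
          source_license: str      # SPDX identifier of that project's license
          similarity: float        # rough similarity score reported by the agent

      record = SnippetProvenance(
          file="src/parser.py",
          lines=(10, 42),
          source_repo="https://github.com/example/some-oss-parser",
          source_license="GPL-3.0-only",
          similarity=0.87,
      )
      print(json.dumps(asdict(record), indent=2))
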
  • tinyhouse 2 hours ago
    There's a balance between coding by hand and vibe coding that is important. The less you understand the code, the more boring maintaining the software becomes. It's OK for throw away code, but not for serious open source projects. Use it as a powerful tool rather than your replacement.
  • dizhn 2 hours ago
    I don't really read papers and haven't read this one either, just the summary.

    > In vibe coding, an AI agent builds software by selecting and assembling open-source software (OSS),

    Are they talking about this happening indirectly, due to the prior training of the model? No agent I use is selecting and assembling open-source software; that's more of an integration-type job, not software development. Are they talking about packages and libraries? If yes, that's exactly how most people use those too.

    I mean like this:

    > often without users directly reading documentation, reporting bugs, or otherwise engaging with maintainers.

    and then,

    > Vibe coding raises productivity by lowering the cost of using and building on existing code, but it also weakens the user engagement through which many maintainers earn returns.

    Maintainers who earn "returns" must be such a small niche as to be insignificant. Or do they mean things like github stars?

    > When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity.

    Now the hypothesis is exactly the opposite. Do agents not "select and assemble" OSS anymore? And what does this have to do with how OSS is "monetized"?

    > Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid.

    Sustaining OSS, insofar as maintainers do it for a living, requires major changes. Period. I don't see how vibe coding, which makes all of this easier and cheaper, is changing that equation. Quality is a different matter altogether and can still be achieved.

    I am seeing a bunch of disjointed claims taken as truth that I frankly do not agree with in the first place.

    What would the result of such a study even explain?

    • korenmiklos 1 hour ago
      Author here. By "returns" we mean any reward the developer is aiming for, whether money, recognition, world fame, future jobs, helping fellow developers. Sorry, econ jargon.

      AI agents can select and load the appropriate packages and libraries without the user even knowing the name of the library, let alone that of the developer. This reduces the visibility of developers among users, who are now less likely to give a star, sponsor, offer a job, recommend the library to others etc.

      Even as a business user, say an agency building websites, I could have been a fan of certain JS frameworks, hosting meetups, buying swag, sponsoring development. I am less likely to do that if I have no idea what framework is powering the websites I build.

      Our argument is that rewards fall faster with vibe coding than productivity increases. OSS developers lose motivation, they stop maintaining existing libraries, don't bother sharing new ones (even if they keep writing a lot of code for themselves).
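
      A back-of-the-envelope illustration of that claim, with toy numbers that are not from the paper's model or calibration:

        # Toy illustration with made-up numbers (not the paper's model or calibration):
        # even if vibe coding makes each use of a library 30% more valuable, the
        # maintainer's payoff can still fall if engagement-based rewards per user drop faster.
        productivity_gain = 0.30  # assumed: value delivered per user rises 30%
        engagement_drop = 0.60    # assumed: stars/sponsorships/job leads per user fall 60%

        relative_reward = (1 + productivity_gain) * (1 - engagement_drop)
        print(f"maintainer reward relative to before: {relative_reward:.2f}x")  # -> 0.52x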

  • gyanchawdhary 2 hours ago
    interesting as an econ thought experiment .. but it assumes OSS revenue comes from direct developer engagement .. In practice .. most successful OSS is funded by enterprises .. cloud vendors .. or consulting engagements .. where broader adoption, including AI-mediated usage, often increases demand for said OSS project
  • maximgeorge 2 hours ago
    [dead]
  • DarkSource 2 hours ago
    [dead]
  • ipaddr 2 hours ago
    For me spending time on my open source projects doesn't make sense anymore.

    People (the community and employers) were previously impressed by the amount of work required. Now that respect is gone, as people can't automatically tell on the surface whether something is a low-effort vibe-coded project or something else.

    Community engagement has dropped. Stars aren't being given out as freely. People aren't actively reading your code like they used to.

    For projects done before LLMs you can still link effort and signal, but for anything started now... everyone assumes it's LLM-created. No one wants to read that code, and not in the same way you would read another human's. Fewer will download the project.

    Many of the reasons why I wrote open source are gone. And knowing the biggest/only engagement will come from LLMs copying your work and giving you no credit... what's the point?

    • 9dev 2 hours ago
      Eh, I don't believe that. Smartphones have amazing cameras, and we still have photographers. There are CNC saws and mills that will ship you your perfectly realised CAD prints, yet there are still carpenters and a vibrant community of people making their own furniture. These examples go on and on.

      Without any kind of offence implied: as a maintainer of a few open source projects, I'm happy if it stops being an employability optimisation vector. Many of the people who don't code for fun but to get hired by FAANG aren't really bringing joy to others anyway.

      If we end up with a small web of enthusiasts who write software for solving challenges, connecting intellectually with like-minded people, and altruism, then I'm fine with that. Let companies pay for writing software! Reduce the giant dependency chains! Have less infrastructure dedicated to distributing all that open source code!

      What will remain after that is the actual open source code true to the idea.

      • ipaddr 1 hour ago
        Photographers use cameras, so more cameras means more photographers.

        CNC saws used to take pencil drawings as input, and now they can handle files. People always made handmade furniture even while CNCs existed.

        Open source projects built around a need will continue. Things like a YouTube downloader fill a need. But many projects were about showing off what you as a developer could write to impress a community. Those are dead. Projects that showcased new coding styles or new ways to do things are dead.

        FAANG open source employment was never a thing. FAANG filtered by leetcode, referrals, clout, and H-1B visas.

      • em-bee 1 hour ago
        exactly this. FOSS was always driven by those who could code, and who did so out of their own intrinsic motivation. those people won't disappear. there may be fewer of them, because some are more driven by quick results, and while in the past they had to code to get there, now they don't, which means they won't discover the joy of coding.

        but for others coding will become an art and craft like woodworking or other hobbies that require mastery.

    • Cthulhu_ 2 hours ago
      But effort / the amount of work shouldn't be a deciding factor - I think anyone can churn out code if they choose to. What matters is the type and quality of it.

      Nobody cares if you wrote 5000 LOC; what they care about is what it does, how it does it, and how fast and how well it does it, and none of those qualifiers are about volume.

  • tosh 2 hours ago
    generative ai increases ambition, lowers barriers

    more open source, better open source

    perhaps also more forking (not only absolute but also relative)

    contribution dynamics are also changing

    I'm fairly optimistic that generative ai is good for open source and the commons

    what I'm also seeing is that open source projects that had not-so-great ergonomics or user interfaces in general are now getting better thanks to generative ai

    this might be the most directly noticeable change for users of niche open source

    • avaer 2 hours ago
      What do you think of the paper's research claims that the returns for maintainers are reduced and sharing is decreasing?
      • positron26 2 hours ago
        Without real finance model innovation, what returns?
        • avaer 2 hours ago
          The same kind of returns that power research academia, where the amount of money you make is determined by the number of citations on your papers.

          Except it's on GitHub, and it's forks and stars.

        • mr_spothawk 2 hours ago
          I upvoted your comment.

          Also, it's a scarcity mindset.

          I don't agree with the sibling to my comment ("make money by getting papers cited"): it is not a long-term solution, much as ad revenue is a broken model for free software.

          I'm hopeful that we see some vibe-coders get some products out that make money, and then pay to support the system they rely on for creating/maintaining their code.

          Not sure what else to hope for, in terms of maintaining the public goods.

  • neko-kai 2 hours ago
    On the contrary, I hope vibe coding revives Linux desktop into a truly usable platform.

    e.g. Vibe coding defeats GNOME developers' main argument for endlessly deleting features and degrading user experience - that features are ostensibly "hard to maintain".

    Well, LLMs are rapidly reducing development costs to 0.

    The bottleneck for UI development is now testing, and here desktop Linux has an advantage - Linux users have been trained like Pavlov's dogs to test and write detailed upstream bug reports, something Windows and macOS users just don't do.

    • Cthulhu_ 2 hours ago
      Is the maintenance due to code or due to people / politics / etc? LLMs won't change that.

      Also, it's a formal system and process; "vibe" coding is anything but. Call me curmudgeonly (?), but I don't think "vibe coding" should be the phrase used to describe LLM-assisted software engineering in large / critical systems.

    • croes 2 hours ago
      You don't think the current prices of LLMs will stay?

      At some point the investors want to see profit.

    • rvz 2 hours ago
      > On the contrary, I hope vibe coding revives Linux desktop into a truly usable platform.

      Oh sweet summer child.

      > Well, LLMs are rapidly reducing development costs to 0.

      And maintenance costs, along with technical debt, rapidly go up.